
Trust me, I’m wearing my lanyard

The medical student’s first lanyard represents much more than a device for holding clinical identification cards – it symbolises their very identity, first as a medical student and eventually as a medical practitioner. The lanyard allows access to hospitals and provides a ready way to discern who’s who in a fast-paced environment. It is the magic ticket that allows us to wander hospital corridors (often aimlessly) without being questioned.

Despite this, the utility of the lanyard as a symbol of “an insider” is being questioned, with mounting evidence showing that it harbours bacteria and can act as a vehicle for their indirect transmission from health care staff to patients. It may be time for the lanyard, like the white coat before it, to be retired as a symbolic but potentially harmful relic of the past. This essay investigates the validity of these concerns by examining the available literature and the results of a small pilot study.


Background

In May 2014, Singapore General Hospital announced a new dress policy for all staff. Hanging lanyards were banned and replaced with retractable identification card holders. Dr Ling Moi Lin, the hospital’s director of infection control, explained that the hospital aimed “to ensure that ties and lanyards do not flap around when staff examine patients, these objects can easily collect germs and bacteria – we do not want to carry them to other patients.” [1]

This hospital is not alone in its stance against hanging lanyards. The British National Health Service (NHS) Standard Infection Prevention and Control guidelines, published in March 2013, list wearing neckties or lanyards during direct patient care as “bad practice”. The guidelines state that lanyards “come into contact with patients, are rarely laundered and play no part in patient care”. [2] Closer to home, the 2013 Bare Below the Elbows campaign, a Queensland Government initiative aiming to improve the effectiveness of hand hygiene performed by health care workers, recommends that retractable (or similar) identification card holders be used in place of lanyards. [3] Other Australian states and many individual hospitals have adopted similar recommendations. [4,5]

However, some hospitals and medical schools continue to require staff and students to wear lanyards. For example, James Cook University medical students are provided with a single lanyard, which must be worn in all clinical settings (whether at the medical school during clinical skills sessions or at the hospital) for the entire duration of their six-year degree. [6] The University of Queensland’s 2013 medical student guide for its Sunshine Coast clinical school states that students must wear their lanyards and display identification badges at all times in teaching locations. [7] This is not concordant with the current Queensland Government recommendations.

The NHS Standard Infection Prevention and Control guidelines are also being breached by medical schools that require their students to wear lanyards. University College London states that lanyards are important because they remind patients who students are, and remind clinical teachers and other professionals that they are in a teaching hospital. However, students are required to use a safety pin to attach the end of their lanyard to fixed clothing. [8] A similar policy is in place at Cardiff University, where students must wear lanyards but ensure that they are not “dangling” freely when carrying out examinations and procedures. [9] So how harmful could the humble, dangling lanyard really be?

How harmful could the lanyard be?

Each year there are around 200,000 healthcare-associated infections in Australian acute healthcare facilities. Nosocomial infections are the most common complication affecting patients in hospital. These potentially preventable adverse events cause unnecessary pain and suffering for patients and their families, prolong hospital stays and are costly to the health care system. [10]

Improving hand hygiene among healthcare workers is currently the single most effective intervention to reduce the risk of nosocomial infections in Australian hospitals. [11] The World Health Organisation guidelines on Hand Hygiene in Health Care indicate five moments when the hands must be washed. Two of these are before and after contact with a patient. [12]

Between these two crucial hand washes, health care staff frequently touch several objects. Objects such as doctors’ neckties [13-17], stethoscopes [18-20] and pens [21,22] have all been shown to carry pathogenic bacteria. The bacteria isolated include methicillin-resistant Staphylococcus aureus (MRSA), found on doctors’ ties [14,16] and stethoscopes. [19] Making contact with these objects during an examination can result in the indirect transmission of microorganisms, transferring the infectious agents to a susceptible host via an intermediate object termed a fomite.

The infectious agents must be transferred from the fomites to the hands of health care practitioners before they can be spread to patients. A study published in 2013 tested the efficiency of transfer of a pathogen from a surface to a practitioner’s hand after a single contact. It isolated five known nosocomial pathogens and placed them on non-porous surfaces; after 10 seconds of contact between a finger and the surface under a known pressure, the microorganisms transferred to the finger were examined. Under conditions of high relative humidity, non-porous surfaces had a transfer efficiency of up to 79.5%. [23] This indicates that after a single contact with a contaminated fomite there is significant transfer of microorganisms to the hands, which can then be transferred to patients.

Furthermore, if no regular preventative disinfection is performed, the most common nosocomial pathogens may survive or persist on inanimate surfaces for months and can therefore be a continuous source of transmission. [24] One study conducted in the United Kingdom in 2008 approached 100 hospital staff at random and asked them to state the frequency and method by which their lanyards were washed or decontaminated. Only 27% had ever washed their lanyards, and 35% of lanyards appeared noticeably soiled. [25] This suggests that the lanyards, which doctors carry with them daily, could harbour acquired infectious agents for extended periods of time.

Two recent studies have shown that lanyards do carry pathogenic bacteria. [25,26] An Australian study by Kotsanas et al. tested lanyards and identification cards for pathogenic bacteria and found that 38% of lanyards harboured them. Nearly 10% of lanyards grew MRSA, and other pathogens found included methicillin-sensitive Staphylococcus aureus, enterococci and Gram-negative bacilli. The bacterial load on lanyards was 10 times greater per unit surface area than that on the identification cards themselves. [26]

It has been suggested that contaminated fomites are the result of poor hand hygiene, and it is therefore assumed that with good hand hygiene practices wearing these objects is acceptable. It has been widely reported that nurses have far better hand hygiene habits than doctors; a recent Australian study conducted in 82 hospitals reported that nurses consistently have significantly higher levels of hand hygiene compliance. [27] If pathogen carriage on fomites were dependent on hand hygiene, one might expect lanyards worn by nurses to have lower pathogenic carriage. However, Kotsanas et al. showed that although there was a difference in organism composition, there was no significant difference in total median bacterial counts isolated from nurses’ and doctors’ lanyards. [26] This suggests that the carriage of pathogens on lanyards is not solely dependent on compliance with hand hygiene protocols.

Lanyards have thus been shown to carry bacteria, which may remain on them for months regardless of hand hygiene practices, and these bacteria transfer readily to the hands of practitioners. However, no studies have directly shown that lanyard use results in increased transmission of bacteria to patients. There are, however, studies showing bacterial transfer from neckties to patients. Lanyards are similar to neckties in that they have been shown to carry pathogenic bacteria, are made of a textile material that is rarely laundered, hang towards the waistline, tend to swing and inadvertently touch patients or the practitioner’s cleansed hands, and have no direct role in patient care. [13-17]

A study in Pakistan found that bacteria collected from the lower part of neckties worn by physicians correlated with bacteria isolated from their patients’ wounds after surgical review. [17] This suggests that bacterial transmission occurred. More convincingly, a recent study by Weber et al. tested the transmission of bacteria to dummies from doctors wearing different combinations of clothing inoculated with bacterial loads comparable to those previously reported. After a brief 2.5-minute history and examination, cultures were obtained from the dummies at three sites. The number of contaminated mock patients was six times higher, and the total number of colony-forming units cultured 26 times higher, when the examiner was wearing an unsecured necktie. [28] This showed that unsecured neckties result in greater transmission of bacteria from doctors to patients. The ties may swing to transmit bacteria directly to the patient, or to the cleansed hands of the doctor, which then transfer them to the patient. Lanyards would likely pose a similar risk.

In my clinical experience, unlike ties, lanyards are often inadvertently touched and fiddled with by medical students and doctors during the clinical examination of a patient. This can recontaminate hands with pathogens even after hand-washing procedures have been followed. Thus, because of this additional contact, lanyards potentially have a higher rate of bacterial transmission than neckties.

What did my pilot study show?

To test this theory I conducted a small observational study in which 20 James Cook University fourth-year medical students were observed during the focused examination of a volunteer posing as a patient in a simulated hospital bed setting. Twelve students conducted a focused head and neck examination whilst eight conducted an abdominal examination. The students were unaware of the nature of the study. All students observed washed their hands prior to and at the end of each clinical examination. I observed each student from the hand wash prior to the physical examination until their last physical contact with the patient; the mean observation time was 12 minutes. During this period two things were noted: the number of times their hands made contact with their lanyard, and the number of times the lanyard made contact with the patient. Seventy per cent of students’ lanyards touched the patient at least once during the examination (mean 2.65 times, SD = 2.99), and 95% of students touched their lanyards during the examination (mean 7.35 times, SD = 5.28).
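
For readers curious how such tallies translate into the summary figures above, the short Python sketch below shows one way the per-student counts could be summarised; the counts in it are hypothetical placeholders for illustration only, not the study’s raw data.

import statistics

# Hypothetical per-student counts (n = 20); placeholders only, not the study's raw data.
hand_contacts = [7, 3, 12, 0, 9, 5, 15, 8, 2, 11, 6, 4, 10, 13, 1, 7, 9, 14, 5, 6]
patient_contacts = [2, 0, 5, 0, 3, 1, 8, 2, 0, 4, 1, 0, 3, 6, 0, 2, 3, 7, 1, 2]

# Means and sample standard deviations, as reported in the text above
mean_hand, sd_hand = statistics.mean(hand_contacts), statistics.stdev(hand_contacts)
mean_patient, sd_patient = statistics.mean(patient_contacts), statistics.stdev(patient_contacts)

# Proportion of students whose lanyard touched the patient at least once
prop_touched = sum(c > 0 for c in patient_contacts) / len(patient_contacts)

print(f"Hand-to-lanyard contacts: mean {mean_hand:.2f}, SD {sd_hand:.2f}")
print(f"Lanyard-to-patient contacts: mean {mean_patient:.2f}, SD {sd_patient:.2f}")
print(f"Lanyard touched patient at least once: {prop_touched:.0%}")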

Many students made contact with their lanyard as part of their introduction to the patient, holding it up to “show” that they were in fact a medical student. Some held the lanyard against their abdomen with one hand whilst examining the patient with the other to prevent it making contact with the patient. Others fiddled with the lanyard whilst talking to the patient. During hand gestures the lanyards often collided with the students’ hands, and the students’ stethoscopes, prominently displayed around their necks, were often entangled with their lanyards. The contact was so frequent that some students’ default position was standing with their hands holding their lanyards; after each gesture their hands returned to the lanyard.

It is also interesting to note that several students had attached objects such as pens, USB drives and keypads to their lanyards. Attaching additional objects was associated with a slight increase in the number of times the students’ hands contacted the lanyard, but it almost doubled the mean number of times the lanyard contacted the patient (2.65 versus 4.67).

One student had a lanyard clip that fastened the end of his lanyard to his shirt. This student did not touch his lanyard once during the examination, nor did his lanyard make contact with the patient. There may thus be some benefit in following the lead of University College London and Cardiff University in enforcing the use of lanyard clips or safety pins to prevent students’ lanyards from dangling. [8,9]

This observational study adds another dimension to the argument against wearing lanyards. Like neckties, lanyards have been shown to carry pathogenic bacteria, swing to make contact with the patient, are rarely laundered, and play no direct part in patient care. It also supported my assumption that lanyards come into contact with examiners’ hands a significant number of times during an examination.

Role models

During these formative years, some medical schools maintain a standard policy requiring students to wear a hanging lanyard, even though a growing body of evidence indicates that hanging lanyards should not be worn. These students can only dream of the day when their blue medical student lanyards are replaced by lanyards with “DOCTOR” printed along their length. Our role models are wearing bigger, better lanyards. It has been proposed that presenting up-to-date evidence-based information, with an emphasis on role modelling, should be made an educational priority to improve hand hygiene rates. [29] Research has indicated that targeting medical students may be an effective approach to raising the low compliance rates of doctors with hand hygiene procedures. [29] Clearly, advocating the role that fomites such as lanyards play in the spread of nosocomial infections has not been made an educational priority, and this may be part of the reason why compliance with current hand hygiene policies regarding their use is low.

It seems contradictory that if I do not wash my hands at the start of a clinical examination I will fail, yet I could, as one student in the observational study did, touch an object I am required to wear, and which has been shown to carry pathogenic bacteria, 23 times and still pass. Making contact with such an object more than once per minute of clinical examination is alarming and arguably diminishes the purpose of rigorous hand washing procedures.

Conclusion

Lanyards are an easy way to carry identification cards and to discern who’s who in a fast-paced environment. However, there is a growing body of evidence indicating that they may harbour infectious agents and contribute to their indirect transmission to patients. Several hand hygiene policies have been updated to encourage health professionals not to wear lanyards during direct patient care, yet some medical schools have not followed these guidelines and still require students to wear them. While no study has definitively demonstrated transmission of an acquired infection from the tip of a medical student’s lanyard, there is very reasonable circumstantial evidence indicating that this could easily happen. Obeying current state infection prevention guidelines and swapping hanging lanyards for retractable identification card holders, or simply preventing lanyards from dangling, may be useful in reducing nosocomial infections in Australia. It is about time the lanyard was retired as a symbolic but potentially harmful relic of the past.

Acknowledgements

James Cook University clinical staff and fourth year medical students for allowing me to observe their clinical skills assessment.

Conflict of interest

None declared.

Correspondence

E de Jager: elzerie.dejager@my.jcu.edu.au

References

[1] Cheong K. SGH staff roll up their sleeves – under new dress code for better hygiene. The Straits Times [Internet]. 2014 May 16 [cited 2014 Jun 25]. Available from: www.straitstimes.com/news/singapore/health/story/sgh-staff-roll-their-sleeves-under-new- dress-code-better-hygiene-2014050

[2] NHS: National Health Service. CG1 Standard infection prevention and control guidelines [Internet]. 2013 Mar [cited 2014 Jun 25]. Available from: http://www.nhsprofessionals.nhs.uk/download/comms/cg1%20standard%20infection%20prevention%20and%20control%20guidelines%20v4%20march%202013.pdf

[3] Queensland Government Department of Health. Bare below the elbows [Internet]. 2013 Sep [cited 2014 Jun 24]. Available from: http://www.health.qld.gov.au/chrisp/hand_hygiene/fsheet_BBE.pdf

[4] Tasmanian Government Department of Health and Human Services. Hand hygiene policy [Internet]. 2013 Apr 1[cited 2014 Jun 24] Available from: http://www.dhhs.tas.gov.au/data/assets/pdf_file/0006/72393/Hand_Hygiene_Policy_2010.pdf

[5] Australian Capital Territory Government Health. Standard operating procedure 1, hand hygiene [Internet]. 2014 March [cited 2014 June 24]. Available from: http://health.act.gov.au/c/health?a=dlpubpoldoc&document=2723

[6] James Cook University School of Medicine. Medicine student lanyards [Internet]. 2014 [cited 2014 June 24] Available from: https://learnjcu.jcu.edu.au/

[7] The University of Queensland Sunshine Coast Clinical School. 2013 Medical student guide [Internet]. 2013 [cited 2014 July 5]. Available from: https://my.som.uq.edu.au/mc/media/25223/sccs%20medical%20student%20guide%202013.pdf

[8] University College London Medical School. Policies and regulations, identity cards and name badges [Internet]. 2014 [cited 2014 July 5]. Available from: http://www.ucl.ac.uk/medicalschool/staff-students/general-information/a-z/#dress

[9] Cardiff University School of Medicine. Personal presentation [Internet]. 2013 August 22 [cited 2014 July 5]. Available from: http://medicine.cf.ac.uk/media/filer_public/2013/08/22/perspres.pdf

[10] Australian Government National Health and Medical Research Council. Australian guidelines for the prevention and control of infection in healthcare [Internet]. 2010 [cited 2014 June 24]. Available from: http://www.nhmrc.gov.au/book/australian-guidelines-prevention-and-control-infection-healthcare-2010/introduction

[11] Australian Commission on Safety and Quality in Healthcare Hand Hygiene Australia. 5 Moments for hand hygiene [Internet]. 2009 [cited 2014 July 7]. Available from: http://www.hha.org.au/UserFiles/file/Manual/ManualJuly2009v2(Nov09).pdf

[12]  World  Health  Organisation.  WHO  guidelines  on  hand  hygiene  in  health  care [Internet]. Geneva. 2009 [cited 2014 July 7]. Available from: http://whqlibdoc.who.int/publications/2009/9789241597906_eng.pdf

[13] Dixon M. Neck ties as vectors for nosocomial infection. ICM. 2000;26(2):250.

[14] Ditchburn I. Should doctors wear ties? J Hosp Infect. 2006;63(2):227-8.

[15] Lopez PJ, Ron O, Parthasarathy P, Soothill J, Spitz L. Bacterial counts from hospital doctors’ ties are higher than those from shirts. Am J Infect Control. 2009; 37(1):79-80.

[16] Bhattacharya S. Doctors’ ties harbour disease-causing germs. NewScientist.com [Internet]. 2004 May 24 [cited 2014 June 20]. Available from: http://www.newscientist.com/article/dn5029-doctors-ties-harbour-diseasecausing-germs.html

[17] Shabbir M, Ahmed I, Iqbal A, Najam M. Incidence of necktie as a vector in nosocomial infection. Pak J Surg. 2013; 29(3):224-225.

[18]  Marinella  MA,  Pierson  C,  Chenoweth  C.  The  stethoscope.  A  potential  source  of nosocomial infection? Arch Intern Med. 1997;157(7):786-90.

[19]  Madar  R,  Novakova  E,  Baska  T.  The  role  of  non-critical health-care  tools  in  the transmission of nosocomial infections. Bratisl Med J. 2005;106(11):348-50.

[20] Lokkur PP, Nagaraj S. The prevalence of bacterial contamination of stethoscope diaphragms: a cross sectional study, among health care workers of a tertiary care hospital. Indian J Med Microbiol. 2014;32(2):201-2.

[21] French G, Rayner D, Branson M, Walsh M. Contamination of doctors’ and nurses’ pens with nosocomial pathogens. Lancet. 1998;351(9097):213.

[22] Datz C, Jungwirth A, Dusch H, Galvan G, Weiger T. What’s on doctors’ ball point pens? Lancet. 1997;350(9094):1824.

[23] Lopez GU, Gerba CP, Tamimi AH, Kitajima M, Maxwell SL, Rose JB. Transfer efficiency of bacteria and viruses from porous and nonporous fomites to fingers under different relative humidity conditions. Appl Environ Microbiol. 2013;79(18):5728-34.

[24] Kramer A, Schwebke I, Kampf G. How long do nosocomial pathogens persist on inanimate surfaces? A systematic review. BMC Infect Dis. 2006;6:130.

[25] Alexander R, Volpe NG, Catchpole C, Allen R, Cope S. Are lanyards a risk for nosocomial transmission of potentially pathogenic bacteria? J Hosp Infect. 2008;70(1):92-3.

[26] Kotsanas D, Scott C, Gillespie EE, Korman TM, Stuart RL. What’s hanging around your neck? Pathogenic bacteria on identity badges and lanyards. Med J Aust. 2008;188(1):5-8.

[27] Azim S, McLaws ML. Doctor, do you have a moment? National Hand Hygiene Initiative compliance in Australian hospitals. Med J Aust. 2014;200(9):534-7.

[28] Weber RL, Khan PD, Fader RC, Weber RA. Prospective study on the effect of shirt sleeves and ties on the transmission of bacteria to patients. J Hosp Infect. 2012;80(3):252-4.

[29] Hall L, Keane L, Mayoh S, Olesen D. Changing learning to improve practice – Hand hygiene education in Queensland medical schools. Healthcare Infection. 2010;15(4):126-129.


The blind spot on Australia’s PBS: review of anti-VEGF therapy for neovascular age-related macular degeneration


Case scenario

A 72-year-old male with a two-day history of sudden blurred vision in his left eye was referred to an ophthalmologist in a regional Australian setting. On best corrected visual acuity (BCVA) testing his left eye had reduced vision (6/12-1) with metamorphopsia. Fundoscopy showed an area of swelling around the left macula, and optical coherence tomography and fundus fluorescein angiography later confirmed pigment epithelial detachment of the left macula and subfoveal choroidal neovascularisation. He was diagnosed with wet macular degeneration and commenced on monthly ranibizumab (Lucentis®) injections – a drug that costs the Australian health care system approximately AUD $1430 per injection and will require lifelong treatment. Recent debate has arisen regarding the optimum frequency of dosing and the necessity of this expensive drug, given the availability of a cheaper alternative.

Introduction

Age-related macular degeneration (AMD) is the leading cause of blindness in Australia. [1] It predominantly affects people aged over 50 years and impairs central vision. In Australia the cumulative incidence for those aged over 49 years is 14.1% for early AMD and 3.7% for late AMD. [1] Macular degeneration occurs in two forms. Dry (non-neovascular) macular disease comprises 90% of AMD and has a slow progression characterised by drusen deposition underneath the retinal pigment epithelium. [2] There is currently no agreed treatment for advanced dry AMD, which is managed only with diet and lifestyle measures. [3,4] Late stages of dry macular degeneration can result in “geographic atrophy”, with progressive atrophy of the retinal pigment epithelium, choriocapillaris and photoreceptors. [2]

Wet (neovascular) macular degeneration is less common, affecting 10% of AMD patients, but causes rapid visual loss. [2] It is characterised by choroidal neovascularisation (CNV) secondary to the effects of vascular endothelial growth factor (VEGF), which causes blood vessels to grow from the choroid towards the retina. Leakage from these vessels leads to retinal oedema, haemorrhage and fibrous scarring. When the central and paracentral areas are affected, it can result in loss of central vision. [2,5] Untreated, this condition can result in one to three lines of visual acuity lost on the LogMAR chart at three months and three to four lines by one year. [6] Hence visual impairment from late AMD leads to significant loss of vision and quality of life.

Currently there are three main anti-VEGF drugs available for wet macular degeneration: ranibizumab (Lucentis®), bevacizumab (Avastin®) and aflibercept (Eylea®). This feature article summarises the development of treatments for wet macular degeneration and highlights the current controversies regarding the optimal drug and dosing frequency in the context of cost to the Australian Pharmaceutical Benefits Scheme (PBS).

Earlier treatments for wet AMD

Neovascular (wet) AMD was largely untreatable a decade ago, but its management has been transformed over this period. [2] Initially, laser photocoagulation was used in the treatment of wet AMD with the aim of destroying the choroidal neovascular membrane by coagulation. During the 1980s, the Macular Photocoagulation Study reported favourable outcomes for direct laser photocoagulation of small classic extrafoveal and juxtafoveal choroidal neovascularisation (CNV). However, the outcomes for subfoveal lesions were poor, and laser photocoagulation was limited by a lack of stabilisation of vision, high recurrence rates (50%), a risk of immediate moderate visual loss (41%) and laser-induced permanent central scotomata in subfoveal lesions. [2,7]

During the 1990s, photodynamic therapy (PDT) with verteporfin was introduced. It involved a two-stage process: an intravenous infusion of verteporfin that preferentially accumulated in the neovascular membranes, followed by activation with infrared light, generating free radicals that promoted closure of blood vessels. The TAP trial reported that the visual acuity benefits of verteporfin therapy in predominantly classic subfoveal CNV lesions were safely sustained for five years. [8] However, the mean visual change was still a 13-letter average loss for PDT compared with a 19-letter average loss for untreated controls. [2,9] In addition, photosensitivity, headaches, back pain, chorioretinal atrophy and acute visual loss were observed as adverse effects in 4% of patients. [2]

Anti-VEGF therapies

A breakthrough in treatment came during the mid-2000s with the identification of VEGF as the pathophysiological driver of choroidal neovascularisation and its associated oedema. This led to the development of the first anti-VEGF drug, pegaptanib sodium, an RNA aptamer that specifically targeted VEGF-165. [10] In the VISION trial, involving 1186 patients with subfoveal AMD receiving pegaptanib injections every six weeks, 70% of patients had stabilised vision (less than three lines of vision loss) compared with 55% of sham controls; yet only a minority of patients actually gained vision. [10]

A second anti-VEGF agent, bevacizumab (Avastin®), soon came into off-label use. Bevacizumab was initially developed by the pharmaceutical company Genentech® to inhibit tumour angiogenesis in colorectal cancer, but its mechanism of action as a full-length antibody that binds all VEGF isoforms proved to have multiple applications. Despite a lack of clinical trials to support its use in wet AMD, anecdotal evidence led ophthalmologists to use it off-label to inhibit the angiogenesis associated with wet macular degeneration. [11,12]

In 2006, however, Genentech® gained Food and Drug Administration (FDA) approval for the anti-VEGF drug ranibizumab, an antibody fragment derived from the same parent molecule as bevacizumab but with a smaller molecular size, theoretically to aid retinal penetration. [13] Landmark clinical trials established that ranibizumab not only prevented vision loss but also led to a significant gain in vision in almost one-third of patients. [14,15] The ANCHOR trial, involving 423 patients, compared ranibizumab dosed at 0.3 mg and 0.5 mg given monthly over two years with PDT with verteporfin given as required. This trial found that 90% of ranibizumab-treated patients achieved visual stabilisation (a loss of <15 letters) compared with 65.7% of PDT patients. Furthermore, up to 41% of the ranibizumab-treated group actually gained >15 letters compared with 6.3% of the PDT group. [15]

Further trials, including the MARINA, [14] PRONTO, [16] SUSTAIN [17] and PIER [18] studies, confirmed the effectiveness of ranibizumab. Despite these results and the purpose-built nature of ranibizumab for the eye, in countries such as the US, where patients and health insurance companies bear the cost burden of treatment, bevacizumab (Avastin®) is more frequently used and constitutes nearly 60% of injections in the US. [19] This is explained by the large cost difference between ranibizumab (USD $1593) and bevacizumab (USD $42) in the context of apparently similar efficacy. [19] The cost difference arises because one vial of bevacizumab can be fractioned by a compounding pharmacy into numerous unit doses for the eye. [20]

Given the popular off-label use of bevacizumab, the CATT trial was conducted by the US National Eye Institute to establish its efficacy. The CATT trial was a large US multicentre study involving 1208 patients randomised to receive either bevacizumab 1.25 mg or ranibizumab 0.5 mg (monthly or as needed). Its results demonstrated that monthly bevacizumab was equivalent to monthly ranibizumab (mean gain of 8.0 vs 8.5 letters on the ETDRS visual acuity chart at one year). [21] The IVAN trial, a UK multicentre randomised controlled trial (RCT) involving 628 patients, showed similar results, with a statistically insignificant mean difference in BCVA of 1.37 letters between the two drugs. [22]

Hence debate has mounted regarding the substantial cost difference in the face of apparently similar efficacy. [23] Against the backdrop of this costly dilemma are three major pharmaceutical companies: Genentech®, Roche® and Novartis®. Although bevacizumab was developed in 2004 by Genentech®, the company was taken over in 2009 by the Swiss pharmaceutical giant Roche®, which is one-third owned by another pharmaceutical company, Novartis®. [24] Given that both ranibizumab and bevacizumab are produced essentially by the same pharmaceutical companies (Genentech/Roche/Novartis), there is no financial incentive for the company to seek FDA or Therapeutic Goods Administration (TGA) approval for the cheaper alternative, bevacizumab. [13,24]

Another major concern emphasised in the literature is the potential for increased systemic adverse effects with bevacizumab. [22] The systemic half-life of bevacizumab is six days compared with 0.5 days for ranibizumab, and it is postulated that systemic inhibition of VEGF could increase systemic vascular events. [2] The CATT trial reported similar rates of adverse reactions (myocardial infarction, stroke and death) in the bevacizumab and ranibizumab groups. [21] However, a meta-analysis of the CATT and IVAN data showed an increased risk of serious systemic side effects requiring hospitalisation in the bevacizumab group (24.9% vs 19.0%). This finding is controversial, as most of the events reported were not identified in the original cancer trials, in which patients received intravenous doses of bevacizumab (500 times the intravitreal dose). [21,22] It has therefore been questioned whether this is more attributable to chance or to imbalance in the baseline health status of participants. [2,22] An analysis of US Medicare claims demonstrated that patients treated with bevacizumab had significantly higher stroke and mortality rates than those treated with ranibizumab. [25] However, these data are inherently prone to confounding, considering that the elderly patients at risk of macular degeneration are likely to have risk factors for systemic vascular disease; when corrected for comorbidities there were no significant differences in outcomes between ranibizumab and bevacizumab. [23,25] It has also been argued that trials to date have been underpowered to investigate adverse events with bevacizumab. Hence, until further evidence is available, it remains unclear whether the risk of systemic adverse effects favours the use of ranibizumab over bevacizumab. [22]

Adding to the debate regarding the optimum drug choice for AMD is the newest anti-VEGF agent, aflibercept (Eylea®), which attained FDA approval in late 2011. Aflibercept was created by the pharmaceutical companies Regeneron/Bayer® and is a novel recombinant fusion protein designed to bind all isoforms of VEGF-A, VEGF-B and placental growth factor. [20] Aflibercept has the same dispensed price as ranibizumab on the PBS, AUD $1430 per injection. [26] The binding affinity of aflibercept for VEGF is greater than that of ranibizumab and bevacizumab, which allows a longer duration of action and hence extended dosing intervals. [27]

The VIEW 1 study, a North American multicentre RCT with 1217 patients, and the VIEW 2 study, with 1240 patients enrolled across Europe, the Middle East, Asia-Pacific and Latin America, assigned patients into one of four groups: 1) 0.5 mg aflibercept given monthly, 2) 2 mg aflibercept given monthly, 3) 2 mg aflibercept at two-monthly intervals after an initial 2 mg aflibercept monthly for three months, or 4) ranibizumab 0.5 mg monthly. The VIEW 1 trial demonstrated that vision was maintained (defined as losing less than 15 ETDRS letters) in 96% of patients on 0.5 mg aflibercept monthly, 95% of patients receiving 2 mg monthly, 95% of patients on 2 mg every two months and 94% of patients on ranibizumab 0.5 mg monthly. [28] Safety profiles of the drugs in both the VIEW 1 and VIEW 2 trials showed no difference between aflibercept and ranibizumab in terms of severe systemic side effects. Hence aflibercept has been regarded as equivalent in efficacy to ranibizumab with potentially less frequent dosing.

Frequency of injections

In addition to the optimal drug of choice for AMD, the optimal frequency of injection has come into question. Given the treatment burden of regular intravitreal injections and the risk of endophthalmitis with each injection, extending treatment using “as-required” dosing is often adopted in clinical practice. Evidence from the integrated analysis of the VIEW trials is encouraging, showing that aflibercept given every two months after an initial loading phase of monthly injections for three months was non-inferior to monthly ranibizumab in stabilising visual outcomes. [28] Although the unit cost is similar to that of ranibizumab, the reduced number of injections may represent significant cost savings.
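
As a rough, back-of-the-envelope illustration of that saving, the sketch below compares first-year drug costs per patient at the PBS dispensed price of AUD $1430 quoted above. The injection counts (twelve monthly doses versus roughly seven under the loading-then-two-monthly regimen) are assumptions, and consultation, administration and monitoring costs are ignored.

# Illustrative first-year drug cost per patient; injection counts are assumptions.
PRICE_PER_INJECTION = 1430  # AUD, PBS dispensed price (ranibizumab or aflibercept)

ranibizumab_injections = 12       # monthly dosing for one year
aflibercept_injections = 3 + 4    # three monthly loading doses, then two-monthly (assumed)

ranibizumab_cost = ranibizumab_injections * PRICE_PER_INJECTION
aflibercept_cost = aflibercept_injections * PRICE_PER_INJECTION

print(f"Monthly ranibizumab, year one: AUD ${ranibizumab_cost:,}")
print(f"Two-monthly aflibercept, year one: AUD ${aflibercept_cost:,}")
print(f"Indicative saving per patient: AUD ${ranibizumab_cost - aflibercept_cost:,}")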

A meta-analysis of the IVAN and CATT trials showed that continuous monthly treatment with ranibizumab or bevacizumab gives better visual function than discontinuous treatment, with a mean difference in BCVA at two years of -2.23 letters. [22] The pooled estimates of macular exudation, as determined by optical coherence tomography (OCT), also favoured a continuous monthly regimen. However, there was an increased risk of developing new geographic atrophy of the retinal pigment epithelium (RPE) with monthly treatment compared with as-needed therapy, so the visual benefits of monthly treatment may not be maintained long term. [22] It is unclear whether the atrophy of the RPE represents a drug effect or the natural history of AMD. Interestingly, mortality at two years was lower in the continuous group than in the discontinuous group. In relation to systemic side effects, the pooled results slightly favoured continuous therapy, although this was not statistically significant. This appears to contradict the usual dose-response framework; it has been hypothesised that immunological sensitisation with intermittent dosing may account for it. [22]

Hence it appears that continuous therapy with bevacizumab or ranibizumab may be favourable in terms of visual outcome. However, in clinical practice, given the treatment burden for patients and their carers, the rare risk of sight-threatening endophthalmitis and the possible sustained rise in intraocular pressure with each injection, [29] the frequency of injections is often individualised based on maintenance of visual acuity and anatomical parameters of macular thickness on OCT.

Currently the “inject and extend” model is recommended, whereby after three monthly injections treatment is extended to five- or six-week intervals if the OCT shows no fluid. Depending on signs of exudation and BCVA, the interval may then be reduced or extended by one or two weeks per visit, to a maximum of ten weeks. Although there are no large prospective studies to support this, smaller studies have reported encouraging results, which offers another cost-saving strategy. [30] However, given the use of the more expensive ranibizumab, it is still a costly endeavour in Australia.

Current Australian situation

Other practical issues play a role in the choice of anti-VEGF therapy in Australia. For instance, the subsidised cost of ranibizumab to the patient is lower than the unsubsidised full cost of bevacizumab. [13] Patients must pay between AUD $80 and $159 out-of-pocket per injection for bevacizumab, whilst ranibizumab costs the government AUD $1430 and the maximum out-of-pocket cost for the patient is around AUD $36. [26] Among ophthalmologists there is a preference for ranibizumab because of its purpose-built status for the eye. [13] The quantity and quality of evidence for ranibizumab is also greater than for bevacizumab. [29] Because bevacizumab is used off-label, its use is not monitored and there is no surveillance; this lack of surveillance has been argued as a case for favouring the FDA-approved ranibizumab. Essentially, the dilemma faced by ophthalmologists is summarised in the statement: “I would personally be reluctant to say to my patients, ‘The best available evidence supports the use of this treatment which is funded, but are you interested in changing to an unapproved treatment [Avastin] for the sake of saving the community some money?’” [31]

Another issue in Australia is the need for bevacizumab to be altered and divided by a compounding pharmacist into a product that is suitable and safe for ocular injection. A recent cluster of infectious endophthalmitis resulting in vision loss occurred in the US because of non-compliance with recognised compounding standards. [32] The CATT and IVAN studies had stringent quality and safety controls, with bevacizumab repackaged in glass vials using aseptic methods. In these trials the risk of sight-threatening endophthalmitis was rare for both ranibizumab (0.04%) and bevacizumab injections (0.07%). [21] In routine clinical practice, however, compounding pharmacies may not be as tightly regulated as those used in the clinical trials, so comparable inferences about safety cannot necessarily be drawn.

Conclusion

Prior to the development of anti-VEGF therapies, patients with wet macular degeneration faced a progressive and permanent decline in vision. Today the available treatments not only stabilise vision but also improve it in a significant proportion of patients. There are currently no published “head-to-head” trials comparing all three available drugs – bevacizumab, ranibizumab and aflibercept – and such a trial is warranted. In addition, further analysis of the safety concerns surrounding bevacizumab is required. Current research is focusing on improving anti-VEGF protocols to reduce injection burden and on combination therapies with photodynamic therapy or corticosteroids. [3] Topical therapies currently in the pipeline, such as pazopanib, a tyrosine kinase inhibitor that targets VEGF receptors, may offer a non-invasive option in the future. [2,33]

At present, the evidence and expert opinion are not unanimous enough to allow health policy makers to justify substituting bevacizumab for ranibizumab or aflibercept. Practical concerns regarding FDA or TGA approval, surveillance, compounding pharmacy and safety remain major issues. In 2013, ranibizumab was the third-highest-costing drug on the PBS at AUD $286.9 million, and aflibercept prescriptions cost the Australian government AUD $60.5 million per annum. [26] From a public health policy perspective, Australia has an ageing population and, with the burden of eye disease only set to increase, there is a need to prioritise resources. The cost-benefit analysis is not limited to AMD but applies to other indications for anti-VEGF therapy such as diabetic macular oedema and retinal vein occlusion. Substitution of first-line treatment with bevacizumab, which has occurred elsewhere in the world, has the potential to save the PBS billions of taxpayer dollars over a few years, and its review should be considered a high priority in current health policy.
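
To put the scale of that potential saving in rough perspective, the sketch below uses the 2013 PBS expenditure figures and the AUD $1430 dispensed price cited above; the AUD $50 cost assumed for a compounded bevacizumab dose (in line with the approximately USD $42 figure quoted earlier) is an assumption, and compounding, administration and monitoring costs are ignored.

# Rough estimate of potential PBS savings from substituting compounded bevacizumab.
# Only the 2013 PBS expenditure and the AUD $1430 price come from the article;
# the AUD $50 compounded bevacizumab dose price is an assumption.
annual_spend = 286.9e6 + 60.5e6   # 2013 PBS spend on ranibizumab + aflibercept (AUD)
price_current = 1430              # AUD per injection, PBS dispensed price
price_bevacizumab = 50            # AUD per compounded dose (assumption)

injections_per_year = annual_spend / price_current
annual_saving = annual_spend - injections_per_year * price_bevacizumab

print(f"Estimated injections funded per year: {injections_per_year:,.0f}")
print(f"Indicative annual saving: AUD ${annual_saving / 1e6:,.0f} million")
print(f"Indicative five-year saving: AUD ${5 * annual_saving / 1e9:.1f} billion")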

Acknowledgements

Thanks to Dr. Jack Tan (JCU adjunct lecturer (ophthalmology), MMED (OphthalSci)) for reviewing and editing this submission.

Conflict of interest

None declared.

Correspondence

M R Seneviratne: ridmee.seneviratne@my.jcu.edu.au

References

[1] Wang JJ, Rochtchina E, Lee AJ, Chia EM, Smith W, Cumming RG, et al. Ten-year incidence and progression of age-related maculopathy: the Blue Mountains Eye Study. Ophthalmology. 2007;114(1):92-8.

[2] Lim LS, Mitchell P, Seddon JM, Holz FG, Wong T. Age-related macular degeneration. Lancet. 2012;379(9827):1728-38.

[3] Cunnusamy K, Ufret-Vincenty R, Wang S. Next generation therapeutic solutions for age-related macular degeneration. Pharmaceutical patent analyst. 2012;1(2):193-206.

[4] Meleth AD, Wong WT, Chew EY. Treatment for atrophic macular degeneration. Current Opinion in Ophthalmology. 2011;22(3):190-3.

[5] Spilsbury K, Garrett KL, Shen WY, Constable IJ, Rakoczy PE. Overexpression of vascular endothelial growth factor (VEGF) in the retinal pigment epithelium leads to the development of choroidal neovascularization. The American Journal of Pathology. 2000;157(1):135-44.

[6] Wong TY, Chakravarthy U, Klein R, Mitchell P, Zlateva G, Buggage R, et al. The natural history  and  prognosis  of  neovascular  age-related  macular  degeneration:  a  systematic review of the literature and meta-analysis. Ophthalmology. 2008;115(1):116-26.

[7] Macular Photocoagulation Study Group. Argon laser photocoagulation for neovascular maculopathy. Five-year results from randomized clinical trials. Archives of Ophthalmology. 1991;109(8):1109-14.

[8] Kaiser PK. Verteporfin therapy of subfoveal choroidal neovascularization in age-related macular degeneration: 5-year results of two randomized clinical trials with an open-label extension: TAP report no. 8. Graefe’s Archive for Clinical and Experimental Ophthalmology. 2006;244(9):1132-42.

[9] Bressler NM. Photodynamic therapy of subfoveal choroidal neovascularization in age-related macular degeneration with verteporfin: two year results of 2 randomized clinical trials. Archives of Ophthalmology. 2001;119(2):198-207.

[10] Gragoudas ES, Adamis AP, Cunningham ET, Feinsod M, Guyer DR. Pegaptanib for neovascular age-related macular degeneration. The New England Journal of Medicine. 2004;351(27):2805-16.

[11] Madhusudhana KC, Hannan SR, Williams CP, Goverdhan SV, Rennie C, Lotery AJ, et al. Intravitreal bevacizumab (Avastin) for the treatment of choroidal neovascularization in  age-related  macular  degeneration: results  from  118  cases.  The  British  Journal  of Ophthalmology. 2007;91(12):1716-7.

[12] Rosenfeld PJ, Moshfeghi AA, Puliafito CA. Optical coherence tomography findings after  an  intravitreal  injection  of  bevacizumab (avastin)  for  neovascular  age-related macular degeneration. Ophthalmic surgery, lasers & imaging : The Official Journal of the International Society for Imaging in the Eye. 2005;36(4):331-5.

[13] Chen S. Lucentis vs Avastin: A local viewpoint, INSIGHT. 2011. Available from: http://www.visioneyeinstitute.com.au/wp-content/uploads/2013/05/Avastin-vs-Lucentis-Insight-article-Nov-2011.pdf.

[14] Rosenfeld PJ, Brown DM, Heier JS, Boyer DS, Kaiser PK, Chung CY, et al. Ranibizumab for neovascular age-related macular degeneration. The New England Journal of Medicine. 2006;355(14):1419-31.

[15] Brown DM, Michels M, Kaiser PK, Heier JS, Sy JP, Ianchulev T. Ranibizumab versus verteporfin photodynamic  therapy  for  neovascular  age-related  macular  degeneration: Two-year results of the ANCHOR study. Ophthalmology. 2009;116(1):57-65.

[16] Lalwani GA, Rosenfeld PJ, Fung AE, Dubovy SR, Michels S, Feuer W, et al. A variable-dosing  regimen  with  intravitreal  ranibizumab  for  neovascular  age-related  macular degeneration:  year  2  of  the  PrONTO  Study.  American  Journal  of  Ophthalmology. 2009;148(1):43-58.

[17] Holz FG, Amoaku W, Donate J, Guymer RH, Kellner U, Schlingemann RO, et al. Safety and efficacy of a flexible dosing regimen of ranibizumab in neovascular age-related macular degeneration: the SUSTAIN study. Ophthalmology. 2011;118(4):663-71.

[18]  Abraham  P,  Yue  H,  Wilson  L.  Randomized,  double-masked,  sham-controlled  trial of ranibizumab  for  neovascular age-related macular degeneration: PIER study year 2. American Journal of Ophthalmology. 2010;150(3):315-24.

[19] Brechner RJ, Rosenfeld PJ, Babish JD, Caplan S. Pharmacotherapy for neovascular age-related macular degeneration: an analysis of the 100% 2008 medicare fee-for-service part B claims file. American Journal of Ophthalmology. 2011;151(5):887-95.

[20] Ohr M, Kaiser PK. Aflibercept in wet age-related macular degeneration: a perspective review. Therapeutic Advances in Chronic Disease. 2012;3(4):153-61.

[21] Martin DF, Maguire MG, Ying GS, Grunwald JE, Fine SL, Jaffe GJ. Ranibizumab and bevacizumab for neovascular age-related macular degeneration. The New England Journal of Medicine. 2011;364(20):1897-908.

[22] Chakravarthy U, Harding SP, Rogers CA, Downes SM, Lotery AJ, Culliford LA, et al. Alternative treatments to inhibit VEGF in age-related choroidal neovascularisation: 2-year findings of the IVAN randomised controlled trial. Lancet. 2013;382(9900):1258-67.

[23] Aujla JS. Replacing ranibizumab with bevacizumab on the Pharmaceutical Benefits Scheme: where does the current evidence leave us? Clinical & experimental optometry : Journal of the Australian Optometrical Association. 2012;95(5):538-40.

[24] Seccombe M. Australia’s Billion Dollar Blind Spot. The Global Mail. 2013 June 4:5.

[25] Curtis LH, Hammill BG, Schulman KA, Cousins SW. Risks of mortality, myocardial infarction,  bleeding,  and  stroke  associated  with  therapies  for  age-related  macular degeneration. Archives of Ophthalmology. 2010;128(10):1273-9.

[26] PBS. Expenditure and prescriptions twelve months to 30 June 2013 [Internet]. Canberra: Pharmaceutical Policy Branch; 2012-2013 [cited 2014 Jun 4]. Available from: http://www.pbs.gov.au/statistics/2012-2013-files/expenditure-and-prescriptions-12-months-to-30-06-2013.pdf

[27] Stewart MW, Rosenfeld PJ. Predicted biological activity of intravitreal VEGF Trap. The British Journal of Ophthalmology. 2008;92(5):667-8.

[28] Heier JS, Brown DM, Chong V, Korobelnik JF, Kaiser PK, Nguyen QD, et al. Intravitreal aflibercept (VEGF trap-eye) in wet age-related macular degeneration. Ophthalmology. 2012;119(12):2537-48.

[29] Tseng JJ, Vance SK, Della Torre KE, Mendonca LS, Cooney MJ, Klancnik JM, et al. Sustained increased intraocular pressure related to intravitreal antivascular endothelial growth  factor  therapy  for  neovascular  age-related  macular  degeneration.  Journal  of Glaucoma. 2012;21(4):241-7.

[30] Engelbert M, Zweifel SA, Freund KB. Long-term follow-up for type 1 (subretinal pigment epithelium) neovascularization using a modified “treat and extend” dosing regimen of intravitreal antivascular endothelial growth factor therapy. Retina (Philadelphia, Pa). 2010;30(9):1368-75.

[31] McNamara S. Expensive AMD drug remains favourite. MJA InSight. 2011. Available from: https://www.mja.com.au/insight/2011/16/expensive-amd-drug-remains-favourite?0=ip_login_no_cache%3D22034246524ebb55d312462db14c89f0

[32] Gonzalez S, Rosenfeld PJ, Stewart MW, Brown J, Murphy SP. Avastin doesn’t blind people, people blind people. American Journal of Ophthalmology. 2012;153(2):196-203.

[33] Danis R, McLaughlin MM, Tolentino M, Staurenghi G, Ye L, Xu CF, et al. Pazopanib eye drops: a randomised trial in neovascular age-related macular degeneration. The British Journal of Ophthalmology. 2014;98(2):172-8.


Personal reflection: how much do we really know?


“Hurry up with that blood pressure and pulse,” blurts the ED registrar. “And make sure to do it on both arms this time.” Before I can ask him what’s going on, he’s teleported to the next bed. Great. I’m alone again. But I don’t blame him; it’s a Saturday night, and a volunteer medical student is the least of his worries.

I fumble for what seems like an eternity with the blood pressure cuff, but eventually get it on, much to the amusement of a charge nurse eyeballing me from the nurses’ station. Recording the right arm was textbook, so now it’s just the left arm to do. I listen hard for the Korotkoff sounds, but there is nothing. I shut my eyes in a squeamish hope that it might heighten my hearing, but again nothing. I can feel the charge nurse staring again; I fluster and break a cold sweat. I feel for the left radial pulse, but it repeatedly flutters away the moment I find it. I remember thinking: Gosh. Am I really that incompetent? Embarrassed, I eventually concede defeat and ask for a nurse, who tells me she’ll be there “in a minute.”

Amidst all this confusion was John, my patient. I’d gotten so caught up with ‘Operation Blood Pressure’ that I completely forgot he was lying there with a kind of graceful patience. I quickly apologised and introduced myself as one of the students on the team.

“It’s all right. You’re young; you’ll eventually get the hang of it… Have to start somewhere, right?” His voice had a raspy crispness to it, which was quite calming to actually hear against the dull rapture of a chaotic emergency room.

John was one of those lovely elderly persons who you immediately came to admire and respect for their warm resilience; you don’t meet too many gentlemen like John anymore. Despite his discomfort, he gave me a kind smile and reached out with his right hand to reassuringly touch my hand. It was a beautifully ironic moment: There he lay in bed, and there I stood by his bedside. And for a moment, there I was the patient in distress, and there he was the physician offering me the reassurance I so desperately needed.

Patients teach us to be doctors. Whether it is a lesson in humility or a rare diagnostic finding, patients are the cornerstone of our ongoing clinical expertise and development; they are why we exist. The more we see, the more we learn. The more we learn, the better doctors we become. Sir William Osler was perhaps the first to formally adopt this into modern medical education. After all, the three-year hospital residency program for training junior medicos was his idea, and it is now a curriculum so widely adopted that it is almost a rite of passage for all doctors.

But how much clinical exposure are we really getting nowadays? With the betterment of societal health, the prevalence and incidence of rarer diseases have fallen. Epidemiologically this is undoubtedly a good thing, but it does sadly reduce learning opportunities for upcoming generations of doctors. Our clinical skill accrues through seeing and doing, through experiences that shape our clinical approach. Earlier this year, an African child presented with mild gut disturbances and some paralysis of his lower limbs. The case baffled three residents and a registrar, but after a quick glance from a consultant, the child was immediately diagnosed with polio (later confirmed by one of the myriad tests the panicking residents had ordered earlier). We’d all read about polio, but whether through a lack of clinical exposure or the careless assumption that polio had been all but eradicated, we were quick to overlook it. We can only diagnose if we know what we are looking for.

It’s not surprising that preceding generations of senior doctors (and those before them) are perceived to have superior clinical intellect, not just in the breadth of their clinical knowledge but in their almost Sherlock Holmes-like acuity in formulating diagnoses based primarily on history taking and physical examination. Traditionally, textbooks advertise that 90% of diagnoses should be made from the history and examination alone. Nowadays, with the advent of improving diagnostic technologies in radiology and pathology, it isn’t surprising that a number of us have replaced this fundamental skill with an apparent dependence on expensive, invasive tests. In a recent study, physicians at their respective levels were assessed on their ability to identify heart murmurs and associate them with the correct cardiac problem. Out of the 12 murmurs, interns correctly identified 5, senior residents 6, registrars 8 and consultants 9. It makes you wonder how long ago it was that physicians could identify all twelve. I remember an ambitious surgical resident saying, “Why bother diagnosing murmurs when you can just order an echocardiogram?” And I remember the humbling answer a grandfatherly consultant had for him: “Because I’m a real doctor and I can.”

As for poor John, I was still stuck on getting a blood pressure for his left arm. Two hours earlier, I had responded with the ambulance to John at his home, a conscious and breathing 68-year-old complaining of severe headaches and back pain. John was a war veteran who lived independently and sadly had no remaining family to care for him. He had a month’s history of worsening headaches and lumbar back pain, with associated sensory loss in his lower limbs that had recently been affecting his walking. Physical examination confirmed his story; he was slightly hypotensive at 100/65 mmHg, but otherwise his ECG and vitals were generally unremarkable. He otherwise looked to be a healthy 68-year-old with no significant past medical history. Funnily enough, he’d been sent home from the ED earlier that day for the same complaint. As far as we could tell, he was just another old guy with a bad headache, back pain and possibly sciatica, so it wasn’t surprising that he had been discharged that morning with a script for celecoxib and Nurofen and instructions to follow up with his GP.

I’ll remember from this moment onwards that when a nurse says they’ll be a minute, it’s actually a metaphor for an ice age. I eventually decide to fess up to the registrar that I couldn’t do the blood pressure properly. He gives me a disappointed look, but I conclude that honesty is usually the best option in healthcare, or at least better than pride. I remembered reading a case earlier that week about a medical student who failed to admit that he was unable to palpate the femoral and left radial pulses in a neonate, and subsequently missed an early diagnosis of a serious aortic coarctation, which was only discovered the following morning after the baby had become significantly cyanosed overnight.

Much to my relief, the registrar couldn’t find the blood pressure either and deemed it pathological. He disappeared to have a word with his consultant, and both quickly returned to the bedside to take a brief history from the patient. By that point the nurse had finally arrived, along with a couple more students and an intern. John had an audience. It was bedside teaching time.

“So apparently you’re God?” John asked the consultant, breaking the seriousness of the moment. We all simultaneously swivel our heads to face the consultant like starved seagulls, only we aren’t looking for a fried chip but craving a smart response to scribble in our notebooks.

“To them,” the consultant looks at us, “I am. As for you, I’m not sure.”

“I survived getting shot you know, during the war…it just nicked some major artery in my chest…clean shot, in the front and out the back… army docs made some stitches, and I healed up just fine by the end of the month. I’ve been fit as a fiddle since—well, at least, up until these last few months.”

The rest of the history was similar to what I’d found out earlier, but I was slightly annoyed, and almost felt betrayed, that he’d failed to mention this to me.

The fictional TV doctor Gregory House has a saying: “everybody lies.” It’s true to an extent, but I don’t think patients do it deliberately. They simply discount or overlook facts that are actually an essential part of the diagnostic process; they are human after all (and so are we). There are the psychiatric exceptions, but for the most part, patients genuinely want to help us help them get better. While sending a team of residents to break into a patient’s house is not usually the preferred approach (unless you’re Dr House), we try to pick up these extra clues by knowing what questions to ask and through the comfortable rapport we build with our patients as we come to understand them as people. The trick is to do all of this in a 10 to 15 minute consult.


The consultant quickly performed a physical exam on John. He closed his eyes as he listened to John’s chest. And then a very faint smile briefly crossed his face: the epiphany of a pitifully murmuring heart.

“We’re probably going to run some tests to confirm this,” he informed John before turning to us, “but I suspect we might have a case of a dissecting aorta.” Of course; why didn’t I think of that? Hindsight is always 20/20, but I kept kicking myself for missing that murmur and not making the (now obvious) connection.

The consultant continued to command his lackeys to request an alphabet of tests. Soon enough the CT images returned, and it was evident that blood was dissecting into a false lumen of the descending aorta, likely to have torn at the site where his gunshot injury had traumatised the vascular tissue decades earlier. Urgent surgery was booked, a range of cardiac medications commenced, and by the time I returned from documenting the notes, there was a bunch of tubes sticking out of him.

The next time I saw John was after his surgery, just before he was transferred to the rehabilitation unit. I treasure our final meeting.

“So I beat the odds,” John threw a beaming smile towards me. He’s a trooper; I’ll definitely give him that. Assuming the initial tear occurred when he reported the onset of his headaches and lower back pain, he’d survived a dissecting aortic aneurysm for at least a whole month, not to mention a war before that. (Untreated, the mortality of aortic dissection is roughly 25% in the first 24 hours, 50% by 48 hours, 75% by the end of the first week, and 90% by the first month.)

“Yes, you definitely beat the odds.” I smiled back at him with a newfound confidence. Our eyes met briefly, and beneath the toughened exterior of this brave man lay the all-too-familiar softened reservoir of unannounced fear. Finally, I extended my hand to shake his and gently squeezed it; it was the blessing of trust and reassurance he had first shown me as a patient that I was now returning to him as a physician.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

E Teo: eteo@outlook.com

Categories
Feature Articles

Cutaneous manifestations of neonatal bacterial infection

Introduction

Skin forms a dynamic interface with the external environment and is a complex organisation of cell types and associated structures that performs many essential functions. Although the stratum corneum of full-term neonates is analogous to that of adult skin, structural and compositional differences render the newborn more susceptible to bacterial colonisation. Particularly for the preterm neonate, impaired cutaneous barrier function and an immature immune system reduce the capacity to defend against bacterial pathogens. The majority of cutaneous bacterial infections are localised to the skin and are easily treated; however, systemic bacterial infection and disseminated disease in the neonatal period may be life-threatening.

Differences in neonatal skin

Newborn skin is fundamentally different from that of the adult and adapts to the extrauterine environment during the first year of life through ongoing structural and functional changes. [1] Skin is a complex, selectively permeable membrane that performs a number of roles, including protection from infection and external stressors such as ultraviolet (UV) light damage, temperature regulation, sensation, and physical appearance. Protection against the external environment is primarily due to the most superficial layer of the epidermis, the stratum corneum, and recent advances in fluorescence spectroscopy and electron microscopy have helped elucidate the differences between adult and newborn skin. [1-3] The stratum corneum is a layer of lipid-depleted, protein-rich corneocytes embedded in a matrix of extracellular lipids, resulting from the continuous proliferation of keratinocytes in the basal epidermis. [3] Compared with adult skin, newborn skin produces smaller corneocytes, a thinner epidermis, and an increased density of microrelief grooves (Table 1). [1] The corneocytes of newborns also have a higher degree of irregularity and decreased organisation in both development and subsequent desquamation phases. [2] The change in neonatal skin pH from neutral, at birth, towards a more acidic mantle is also likely to impact on stratum corneum integrity, as incomplete skin surface acidification is linked to variable rates of desquamation. [3,4] Overall, decreased corneocyte size and a thinner, less cohesive stratum corneum have negative implications for skin barrier function, as indicated by increased transepidermal water loss (TEWL) in newborn skin. [5]

Compared with full-term infants born at 37-42 weeks’ gestation, preterm newborns do not develop the same level of protection provided by the stratum corneum until 2-4 weeks after birth. [6,7] The most significant difference is an increase in the stratum corneum from two to three cell layers at 28 weeks’ gestation to the equivalent of adult skin with 15 layers by 32 weeks’ gestation. [8] Due to the role of the stratum corneum in barrier protection, the premature newborn is at considerably greater risk of cutaneous complications.

Changes in TEWL, skin pH, and sebaceous activity all lead to the creation of a skin environment that promotes colonisation of certain microbial skin flora. [9] Colonisation of resident flora commences at birth, but newborns have a unique skin microbiome profile that develops throughout the first year of life and beyond. [9] The protection offered by resident commensal and mutualistic skin flora in the adult is therefore not immediately present in the newborn, leading to different patterns of subsequent infection.

Table 1. Differences between newborn and adult skin. [1,2]

Skin defences and immune response of the newborn

Skin has antimicrobial function afforded by the innate immune system and antigen presenting cells (APCs) of the epidermis and dermis, as well as circulating immune cells that migrate into the dermis. This innate system works together with the adaptive immune system to defend against infection. In the newborn, innate immunity is the most important mechanism of defence, as this system is present at birth as a result of pattern recognition receptors encoded by germline DNA. This system responds to biochemical structures common to a number of pathogens, producing a rapid response with no residual immunity or memory. In contrast, adaptive immunity develops slowly and involves specific antigen receptors of T- and B-lymphocytes as part of a system that develops memory for faster successive responses.

Innate immune defences comprise the physical barrier formed by the skin itself, antimicrobial peptides (AMPs), complement pathways, and immune cells including monocytes, macrophages, dendritic cells, and natural killer cells. [10,11] While the keratinocytes of the skin are typically considered to be static ‘bricks’ of the physical skin barrier, they are also dynamically involved in immunity through their secretion of cytokines and chemokines, AMPs, and complement components. AMPs secreted by keratinocytes are cationic proteins termed cathelicidins and defensins, and they appear to be particularly important in neonatal immunity, with selective bactericidal activity against common cutaneous pathogens. [12,13] Cathelicidins (LL-37) and beta-defensins (BD-1, BD-2, BD-3) are attracted to negatively charged bacteria, viruses, and fungi and exert their influence by membrane insertion and pore formation. [13] Neonates display higher baseline concentrations of cutaneous AMPs than adults, suggesting a greater role for these proteins in newborn skin defences. In the absence of specific antibodies, pattern recognition receptors such as Toll-like receptors (TLRs) play a pivotal role, with subsequent cytokine production changing with increasing age and correlating to age-specific pathogen susceptibility. [14]

Figure 1. Innate immunity of the skin in the newborn

Bacterial infections

Cutaneous staphylococcal and streptococcal infections cause a variety of clinical presentations depending on the site of infection, strain of organism, and neonatal immunity. Impetigo is a superficial bacterial cutaneous infection that may present with or without bulla formation, as described by the terms non-bullous and bullous impetigo. The bullae of bullous impetigo are invariably due to infection with Staphylococcus aureus and the subsequent production of epidermolytic toxin, which is also responsible for the widespread bullae and desquamation in staphylococcal scalded skin syndrome (SSSS). Development of resistant strains, overcrowding, and poor infection control have been linked to nosocomial outbreaks of S. aureus, which are of particular concern in neonatal intensive care units where neonates are more susceptible to infection. [15]

Non-bullous impetigo

Both Streptococcus pyogenes and S. aureus are associated with the non-bullous form of impetigo, which presents as an erythematous macular rash before developing eroded lesions with a honey-coloured crust. [16] Isolated staphylococcal pustules and paronychia are also common in neonates. Although mild non-bullous impetigo has the capacity to self-resolve, treatment with topical mupirocin and fusidic acid limits the opportunity for disease to persist. [16]

Bullous impetigo

Localised cutaneous S. aureus infection presents with an erythematous vesiculopustular rash that preferentially affects the diaper area and skin folds, coalescing to form large flaccid bullae that rupture easily and appear as honey-crusted erosions. [16,17] Bacteria are present in the lesions, and the infection usually responds to first-line systemic flucloxacillin, which may be used in conjunction with topical fusidic acid. [16,17] Certain strains of S. aureus are associated with epidermolytic toxins, which facilitate pathogen entry beneath the stratum corneum and limit disease to the superficial epidermis. [18,19] The distinct bullae present in bullous impetigo are due to the toxin-induced cleavage of desmosomal cadherin proteins in the granular layer of the epidermis, which are normally responsible for maintaining functional adhesion between keratinocytes. [18] These same toxins are produced in SSSS.

Staphylococcal scalded skin syndrome

Haematogenous spread of S. aureus is facilitated by inoculation at a distant site such as the conjunctiva, umbilicus, or perineum, and the effects of bulla formation and desquamation are the direct result of circulating epidermolytic toxins. [20] This haematogenous spread results in a widespread infection that is more severe than the localised infection of bullous impetigo. Generalised erythema and skin tenderness are the initial clinical features, with evolution into large flaccid bullae and desquamation of the entire cutaneous surface. The Nikolsky sign is present, where blistering can be elicited by light stroking of the skin. [21] Bacterial cultures of cutaneous lesions are typically negative and S. aureus is only found at the distant sites of infection. Skin biopsy is considered the gold standard of diagnosis and is particularly relevant when considering toxic epidermal necrolysis (TEN) as a differential. In contrast to SSSS, TEN results in subepidermal blisters and keratinocyte necrosis rather than epidermal cleavage and typically involves oral mucous membranes. [20,21] Although biopsy is helpful in providing a definite diagnosis, neonatal biopsies are rarely performed due to the characteristic clinical presentation of both conditions. Despite the apparent polarity of the cutaneous and haematogenous forms of S. aureus infection, a handful of mild SSSS cases have been reported, lending support to a likely clinical spectrum ranging from a mild form to the classic severe disease. [22]

Omphalitis

After birth and separation of the umbilical cord, necrosis of the stump is followed by epithelialisation. The healing stump may become colonised, with the exposed umbilical vessels forming a potential portal of entry for pathogenic bacteria. [23] Omphalitis is characterised by stump erythema and periumbilical oedema, with or without discharge, and is frequently due to S. aureus. It is more common in developing countries, and the risk is increased in cases of protracted labour, non-sterile delivery, and prematurity. [23] A recent Cochrane systematic review identified significant evidence to support the use of topical chlorhexidine on the umbilical stump to reduce omphalitis and neonatal mortality in developing countries. However, this benefit could not be demonstrated in developed countries, possibly owing to reduced risk factors for omphalitis. [24]

Necrotising fasciitis

Infection of the fascia and overlying soft tissues is a rapidly progressive neonatal emergency. Pathogens gain entry by cutaneous breaches such as omphalitis, birth trauma, and superficial skin wounds, with group A streptococci most commonly implicated as the causative organism. [25] Infection may also be polymicrobial, with a combination of organisms detected on wound cultures. [26] The infection follows the fascial plane, causing thromboses in the blood supply to overlying tissues and leading to tissue necrosis, and the skin becomes progressively more discoloured, tender, and warm. [26] While the initial presentation may not appear concerning, neonates rapidly become disproportionately tender and toxic. [26] Necrotising fasciitis has a high morbidity and mortality and requires immediate identification for surgical debridement. [25]

Ecthyma gangrenosum

Pseudomonas aeruginosa septicaemia is the most common underlying cause for this cutaneous manifestation, which typically presents with macules that progress via a necrotising vasculitis to form indurated necrotic ulcers with surrounding erythema. [27,28] Prematurity, immune deficiencies, and neutropaenia are the main predisposing factors, but lesions may develop in the absence of immunodeficiency when direct inoculation occurs through a breach in the skin barrier. [28]

Antimicrobial resistance and prevention

The treatment of neonatal bacterial infection depends on the pathogen and its sensitivities to antibiotic treatments. In the Australian healthcare setting and internationally, antibiotic resistance poses a growing problem in this ‘post-antibiotic’ era. Methicillin-resistant S. aureus (MRSA) has become increasingly prevalent, particularly in intensive care settings such as the neonatal intensive care unit (NICU). [29] As colonised neonates are continually admitted, the introduction of many unique sources and various strains over time adds to the ongoing burden and is likely to contribute to difficulties in fully eradicating MRSA from the NICU. [30] Transmission of organisms such as S. aureus most commonly occurs secondary to direct contact with colonised caregivers, and this problem is compounded when hand hygiene and barrier protection are inadequate. Premature infants in the NICU are particularly susceptible, due to their immature immune systems and the increased risk of nosocomial infection with invasive monitoring and frequent healthcare worker contact. [31] The identification of previous treatment with third-generation cephalosporins and carbapenems as independent risk factors for the development of multidrug-resistant Gram-negative bacteraemia in the NICU highlights the issue of antibiotic resistance and underscores the importance of judicious antibiotic use. [32]

Conclusion

Skin is the first line of defence against invading pathogens, and there are a number of unique cellular, functional, and immunological factors that underpin an increased susceptibility to bacterial infection in the newborn. Premature newborns are at particular risk of infection, owing to potential deficits in cutaneous barrier function. Future practice in treating bacterial infections is likely to be influenced by the emergence of multi-resistant strains and may shift the focus toward improved prevention measures.

Conflict of Interest

None declared.

Correspondence

J Read: jazlyn.read@griffithuni.edu.au

References

[1] Stamatas GN, Nikolovski J, Luedtke MA, Kollias N, Wiegand BC. Infant skin microstructure assessed in vivo differs from adult skin in organization and at the cellular level. Pediatr Dermatol. 2010;27(2):125-31.

[2] Fluhr JW, Lachmann N, Baudouin C, Msika P, Darlenski R, De Belilovsky C, et al. Development and organization of human stratum corneum after birth. Electron microscopy isotropy score and immunocytochemical corneocyte labelling as epidermal maturation’s markers in infancy. Br J Dermatol. 2014 Feb 7. DOI:10.1111/bjd.12880.

[3] Stamatas GN, Nikolovski J, Mack MC, Kollias N. Infant skin physiology and development during the first years of life: a review of recent findings based on in vivo studies. Int J Cosmet Sci. 2011;33(1):17-24.

[4] Fluhr JW, Man MQ, Hachem JP, Crumrine D, Mauro TM, Elias PM, et al. Topical peroxisome proliferator activated receptor activators accelerate postnatal stratum corneum acidification. J Invest Dermatol. 2009;129(2):365-74.

[5] Raone B, Raboni R, Rizzo N, Simonazzi G, Patrizi A. Transepidermal water loss in newborns within the first 24 hours of life: baseline values and comparison with adults. Pediatr Dermatol. 2014;31(2):191-5.

[6] Fluhr JW, Darlenski R, Taieb A, Hachem JP, Baudouin C, Msika P, et al. Functional skin adaptation in infancy — almost complete but not fully competent. Exp Dermatol. 2010;19(6):483-92.

[7] Fluhr JW, Darlenski R, Lachmann N, Baudouin C, Msika P, De Belilovsky C, et al. Infant epidermal skin physiology: adaptation after birth. Br J Dermatol. 2012;166(3):483-90.

[8] Taeusch HM, Ballard RA, Gleason CA, editors. Avery’s diseases of the newborn. 8th ed. Philadelphia: Elsevier Saunders; 2005.

[9] Capone KA, Dowd SE, Stamatas GN, Nikolovski J. Diversity of the human skin microbiome early in life. J Invest Dermatol. 2011;131(10):2026-32.

[10] Cuenca AG, Wynn JL, Moldawer LL, Levy O. Role of innate immunity in neonatal infection. Am J Perinatol. 2013;30(2):105-12.

[11] Power Coombs MR, Kronforst K, Levy O. Neonatal host defense against staphylococcal infections. Clin Dev Immunol. 2013;2013:826303. DOI:10.1155/2013/826303.

[12] Nelson A, Hultenby K, Hell E, Riedel HM, Brismar H, Flock JI, et al. Staphylococcus epidermidis isolated from newborn infants express pilus-like structures and are inhibited by the cathelicidin-derived antimicrobial peptide LL37. Pediatr Res. 2009;66(2):174-8.

[13] Yoshio H, Lagercrantz H, Gudmundsson GH, Agerberth B. First line of defense in early human life. Semin Perinatol. 2004;28(4):304-11.

[14] Kollmann TR, Levy O, Montgomery RR, Goriely S. Innate immune sensing by Toll-like receptors in newborns and the elderly. Immunity. 2012;37(5):771-83.

[15] Bertini G, Nicoletti P, Scopetti F, Manoocher P, Dani C, Orefici G. Staphylococcus aureus epidemic in a neonatal nursery: a strategy of infection control. Eur J Pediatr. 2006 Aug;165(8):530-5.

[16] Sladden MJ, Johnston GA. Current options for the treatment of impetigo in children. Expert Opin Pharmacother. 2005;6(13):2245-56.

[17] Johnston GA. Treatment of bullous impetigo and the staphylococcal scalded skin syndrome in infants. Expert Rev Anti Infect Ther. 2004;2(3):439-46.

[18] Hanakawa Y, Schechter NM, Lin C, Garza L, Li H, Yamaguchi T, et al. Molecular mechanisms of blister formation in bullous impetigo and staphylococcal scalded skin syndrome. J Clin Invest. 2002;110(1):53-60.

[19] Yamasaki O, Yamaguchi T, Sugai M, Chapuis-Cellier C, Arnaud F, Vandenesch F, et al. Clinical manifestations of staphylococcal scalded-skin syndrome depend on serotypes of exfoliative toxins. J Clin Microbiol. 2005;43(4):1890-3.

[20] Stanley JR, Amagai M. Pemphigus, bullous impetigo, and the staphylococcal scalded-skin syndrome. N Engl J Med. 2006;355(17):1800-10.

[21] Berk DR, Bayliss SJ. MRSA, staphylococcal scalded skin syndrome, and other cutaneous bacterial emergencies. Pediatr Ann. 2010;39(10):627-33.

[22] Hubiche T, Bes M, Roudiere L, Langlaude F, Etienne J, Del Giudice P. Mild staphylococcal scalded skin syndrome: an underdiagnosed clinical disorder. Br J Dermatol. 2012;166(1):213-5.

[23] Fraser N, Davies BW, Cusack J. Neonatal omphalitis: a review of its serious complications. Acta Paediatr. 2006;95(5):519-22.

[24] Imdad A, Bautista RM, Senen KA, Uy ME, Mantaring JB 3rd, Bhutta ZA. Umbilical cord antiseptics for preventing sepsis and death among newborns. Cochrane Database Syst Rev. 2013;5:CD008635.

[25] Das DK, Baker MG, Venugopal K. Increasing incidence of necrotizing fasciitis in New Zealand: a nationwide study over the period 1990 to 2006. J Infect. 2011;63(6):429-33.

[26] Jamal N, Teach SJ. Necrotizing fasciitis. Pediatr Emerg Care. 2011;27(12):1195-9.

[27] Pathak A, Singh P, Yadav Y, Dhaneria M. Ecthyma gangrenosum in a neonate: not always pseudomonas. BMJ Case Rep 2013 May 27. DOI:10.1136/bcr-2013-009287.

[28] Athappan G, Unnikrishnan A, Chandraprakasam S. Ecthyma gangrenosum: presentation in a normal neonate. Dermatol Online J. 2008;14(2):17.

[29] Isaacs D, Fraser S, Hogg G, Li HY. Staphylococcus aureus infections in Australasian neonatal nurseries. Arch Dis Child Fetal Neonatal Ed. 2004;89(4):F331-5.

[30] Gregory ML, Eichenwald EC, Puopolo KM. Seven-year experience with a surveillance program to reduce methicillin-resistant Staphylococcus aureus colonization in a neonatal intensive care unit. Pediatrics. 2009;123(5):e790-6.

[31] Cipolla D, Giuffre M, Mammina C, Corsello G. Prevention of nosocomial infections and surveillance of emerging resistances in NICU. J Matern Fetal Neonatal Med. 2011;24 Suppl 1:23-6.

[32] Tsai MH, Chu SM, Hsu JF, Lien R, Huang HR, Chiang MC, et al. Risk factors and outcomes for multidrug-resistant Gram-negative bacteremia in the NICU. Pediatrics. 2014;133(2):e322-9.

Categories
Feature Articles

The history of modern general anaesthesia

Safe and effective anaesthesia is among the greatest advances in medical history. Modern surgery and the considerable benefits it brings would be impossible without the significant academic, pharmacological, and practical advances in anaesthesia over the past 200 years. At the forefront of these are the major developments in general anaesthesia and airway management. This article aims to provide a basic framework to understand the development of modern general anaesthesia.

A brief history of general anaesthesia

Anaesthesia is a relatively new field in modern medicine. Prior to its development, most surgical procedures were either minor or emergency operations. [1] It is clear that modern surgery and the considerable benefits it brings would be impossible without the significant academic, pharmacological, and practical advances in anaesthesia during the 19th and 20th centuries. First and foremost among these is the development of safe and effective general anaesthesia.

Carbon dioxide was first explored as an anaesthetic in the 1820s by the English physician Henry H. Hickman. By inducing partial asphyxiation, Hickman demonstrated that animals could be rendered unconscious for a prolonged period, enabling surgical procedures to be performed. [2] This was a major breakthrough; however, the risks associated with hypoxic anaesthesia were too great for carbon dioxide to be widely adopted as an anaesthetic.

Diethyl ether, a solvent commonly referred to simply as ‘ether’, was first used clinically by American physician William E. Clarke for a tooth extraction in January 1842. [3,4] Several months later Crawford W. Long, an American surgeon and pharmacist, famously used ether as a surgical anaesthetic to remove a growth on a young man’s neck. He did not publish his findings until seven years later, revealing that the patient had felt nothing throughout the procedure. [5] The discovery of ether’s clinical utility represented a significant advance in effective general anaesthesia, spurring a flurry of interest in potential anaesthetic agents.

Nitrous oxide, still used today for its anaesthetic properties, was experimented with during the 19th century. Positive experiences by the chemist Humphry Davy led to public gatherings in which members of the public would inhale nitrous oxide for its exhilarating and pleasurable effects. [2] A medical student, Gardner Quincy Colton, made over $400 profit from one such affair, which attracted three to four thousand attendees. [6] These events were similar to those held for ether, known as ‘ether frolics.’ [2] Following the observation at these gatherings of nitrous oxide’s analgesic and anaesthetic effects, it was formally tested in December 1844. Horace Wells, a dentist who had attended one of Colton’s exhibitions, persuaded a colleague to extract one of Wells’ teeth while Colton administered nitrous oxide gas. [7] The procedure was performed successfully, reportedly the first tooth ever removed painlessly. [8]

A former student of Wells, William T. G. Morton, was instrumental in the popularisation of ether as an anaesthetic. Morton performed a successful public demonstration of the anaesthetic capabilities of ether in October 1846 at Massachusetts General Hospital. [1] This event is often considered to mark the birth of modern anaesthesia, following which ether was widely adopted around the world. Later that year, Oliver W. Holmes, a writer and professor of anatomy, named the process Morton had demonstrated ‘anaesthesia’, derived from the Greek for ‘without sensation.’ [2]

Scottish obstetrician James Y. Simpson was the first to adopt the organic compound chloroform to relieve the pain of childbirth, in 1847. Chloroform anaesthesia grew in popularity around the world and was in wide use when Queen Victoria gave birth to Prince Leopold under its influence in 1853; the chloroform was administered by the famous physician and epidemiologist John Snow. [2] In the early 20th century, chloroform came to supersede ether as a general anaesthetic in light of its less offensive odour and more rapid induction and emergence.

Though the first intravenous injections took place in 1656, [9] the first intravenous anaesthetic, sodium thiopental (thiopentone), was not synthesised until 1934. [10] Thiopentone is a short-acting, rapid-onset barbiturate sometimes used for anaesthetic induction. Its earliest documented use in humans was later in 1934, by Ralph Waters, an American anaesthetist. [2] Intravenous anaesthesia allowed more precise dosing and a less confronting experience for the patient, and thiopentone rapidly entered common usage. Although it remained popular for many years, thiopentone was gradually replaced by propofol as the preferred induction agent. Introduced in the late 1980s, propofol offered rapid induction and emergence, reliable hypnosis, and antiemetic properties. [1]

Significant advances were also made in the 20th century in developing better halogenated inhaled agents. The advent of improved volatile agents, in parallel with a rising interest in and focus on patient safety, saw a shift from ether and chloroform anaesthesia to the use of newer intravenous and inhalational agents with more favourable characteristics. Routinely used today, these agents provide fast induction and emergence, and are ideally suited for maintenance of anaesthesia. After halothane and enflurane came isoflurane, then sevoflurane, and finally desflurane in the early 1990s. [1] These new volatile agents had a number of desirable properties, including low solubility and minimal cardiorespiratory depression, and, unlike ether, they are non-flammable. In contrast to ether and chloroform, however, they lack analgesic effects, necessitating the use of other agents such as opioids, local anaesthetics, or nitrous oxide to ensure adequate pain relief. Nitrous oxide saw a steady decline in use over the following years, in part due to the availability of these newer agents, but also because of concerns about potential toxicity and its link with postoperative nausea and vomiting. [1]

The introduction of muscle relaxants to clinical practice in the early 1950s allowed for major advances in anaesthetic techniques and thereby surgery. Curare, a natural alkaloid historically used on poison darts and arrows by indigenous peoples across Africa, Asia, and the Americas, [11] was the first non-depolarising muscle relaxant used. Through the late 1970s to the 1990s, quaternary ammonium muscle relaxants were developed, including vecuronium, atracurium, and rocuronium. These compounds brought several advantages, including more favourable cardiovascular effects and minimal release of histamine. [12] Due to their clearance by Hofmann elimination rather than renal excretion, atracurium and subsequently cisatracurium also possess the additional benefit of predictably rapid recovery with little cumulative effect following repeated administration. Suxamethonium, still in use today, was also developed in the 1950s. It is a depolarising neuromuscular blocking agent with fast onset and offset of action. It is considered by many as the agent of choice for rapid-onset neuromuscular blockade and has a short duration of action, although its side-effects of potassium release and increased intra-thoracic, intra-abdominal, and intra-cranial pressures will sometimes contraindicate its use.

Advances in monitoring, including the introduction of pulse oximetry and capnography in the 1980s, have significantly changed the practice of anaesthesia. [1] The routine use of this combination has contributed to a reduction in the proportion of anaesthesia-related complications considered preventable by monitoring, from 39% in the 1970s to only 9% in the 1990s. [13] Other advances have included the measurement of inspired and end-tidal gases, including oxygen, nitrous oxide, and the volatile agents. The advent of ‘depth of anaesthesia’ monitors such as bispectral index (BIS) monitors has advanced our understanding of anaesthetic practice. Modern anaesthesia has achieved such an impressive degree of safety that anaesthesia-related mortality in Australia is less than 3 deaths per million annually. [14]

A brief history of endotracheal intubation

In the late 19th century great advances were made in airway management for patients undergoing general anaesthesia. Without advanced airway support, the great safety and efficacy of modern anaesthesia would be impossible. The laryngeal tube had reportedly existed since at least 1791, and was used for a range of purposes, including to facilitate breathing in oedema of the glottis, for direct delivery of medications to lung tissue, and for artificial respiration. [15]

Charles Trueheart of Texas published an account in 1869 describing a biphasic artificial respiration device, which included a laryngeal airway. However, the first successful delivery of endotracheal general anaesthesia was performed through tracheotomy by German surgeon Friedrich Trendelenburg in 1871. [16] Over the following decades, this technique was adapted in multiple settings to be delivered by oro-tracheal intubation and thus avoid the need for a surgical airway. [16, 17]

A further breakthrough in intubation came in 1895, when German physician Alfred Kirstein performed the first laryngoscopy with direct visualisation of the vocal cords. [18] Previously, direct visualisation was thought impossible, and the glottis and larynx had been visible only by indirect vision using mirrors. Kirstein called his device the autoscope, now known as a laryngoscope, and in the process of its development he established many of the principles of laryngoscopy which continue to be used in clinical practice. [18]

In 1913, Chevalier Jackson introduced a new laryngoscope blade with a light source at the distal tip, rather than the proximal light source used by Kirstein. [19] That same year, Henry Janeway expanded upon this, also including batteries in the handle, a central notch for maintaining the tracheal tube in the midline of the oropharynx, and a slight curve to the tip of the blade. [20] These changes were instrumental in popularising the use of direct laryngoscopy and tracheal intubation in anaesthesia, and the use of endotracheal intubation spread greatly following the First World War. [15]

Sir Ivan Magill went further with his invention, the Magill laryngoscope blade. The most significant features of this blade included a flat, wide distal end of the speculum, improving control of the epiglottis, and a slot on the side allowing the passage of catheters and tubes without obscuring vision. [21,22] He also developed the technique of awake blind nasotracheal intubation in 1928, along with a new type of angulated forceps (the Magill forceps) for nasotracheal intubation, and a new endotracheal tube. [21] The Magill laryngoscope blade remains in use today; however, in 1943 Sir Robert Macintosh introduced the Macintosh blade, a curved model which is currently the most widely used laryngoscope blade. [23] Other specific blades may be used for certain patient subsets, such as straight laryngoscope blades in infants or the McCoy laryngoscope blade for difficult intubations.

The laryngeal mask airway (LMA) was first used in 1981 before being officially released in 1988. [24] The LMA revolutionised airway management – it provides a clear airway, forms an effective seal at the glottic inlet, and largely avoids the risk of trauma associated with intubation. [24] Endotracheal intubation remains an indispensable skill for the anaesthetist, with both LMAs and endotracheal tubes widely used.

A range of equipment for intubation is now available, including video laryngoscopes and fibre-optic bronchoscopes to aid in visualising the difficult airway, tubes with and without cuffs, reinforced tubes, and double-lumen tubes. Measurement of end-tidal carbon dioxide by capnometry has also provided a useful adjunct to direct visualisation for confirming correct placement of the endotracheal tube. [25, 26] Despite these advances, the modern endotracheal intubation still relies heavily on the principles laid down by Kirstein and his successors.

Conclusion

Considerable progress has been made in the field of anaesthesia over the past two centuries. The development of safe, effective general anaesthesia is one of the most important advances in medical history, allowing the widespread expansion of surgery and the considerable benefits it brings. Significant advances beyond the scope of this article include the developments of local anaesthesia, regional anaesthesia, conscious sedation, and analgesia.

Conflict of interest

None declared.

References

[1] Urman RD, Desai SP. History of anesthesia for ambulatory surgery. Curr Opin Anaesthesiol. 2012 Dec;25(6):641-7.

[2] Miller R. Anesthesia. Philadelphia: Churchill Livingstone; 2000.

[3] Lyman H. Artificial anaesthesia and anaesthetics. New York: William Wood and Co.; 1881.

[4] Keys T. The history of surgical anesthesia. Huntington: Robert E. Krieger Publishing Company; 1978.

[5] Long C. An account of the first use of sulphuric ether by inhalation as an anaesthetic in surgical operations. South Med Surg J. 1849;5:705-13.

[6] Smith GB, Hirsch NP. Gardner Quincy Colton: pioneer of nitrous oxide anesthesia. Anesth Analg. 1991 Mar;72(3):382-91.

[7] LeVasseur R, Desai SP. Ebenezer Hopkins Frost (1824-1866): William T.G. Morton’s first identified patient and why he was invited to the ether demonstration of October 16, 1846. Anesthesiology. 2012 Aug;117(2):238-42.

[8] Colton G. Boyhood and manhood recollections. The story of a busy life. New York: A. G. Sherwood; 1897.

[9] Dagnino J. Wren, Boyle, and the origins of intravenous injections and the Royal Society of London. Anesthesiology. 2009 Oct;111(4):923-4.

[10] Tabern D, Volwiler E. Sulfur-containing barbiturate hypnotics. J Am Chem Soc. 1935;57(10):1961–3.

[11] Dorkins HR. Suxamethonium-the development of a modern drug from 1906 to the present day. Med Hist. 1982 Apr;26(2):145-68.

[12] Ball C, Westhorpe RN. Muscle relaxants: pancuronium and vecuronium. Anaesth Intensive Care. 2006 Apr;34(2):137.

[13] Lee LA, Domino KB. The closed claims project: has it influenced anesthetic practice and outcome? Anesth Clin N Am. 2002 Sep;20(3):485-501.

[14] Gibbs N, editor. Safety of anaesthesia: a review of anaesthesia-related mortality reporting in Australia and New Zealand 2006-2008. Melbourne: Australian and New Zealand College of Anaesthetists; 2009.

[15] Waters R, Rovenstine E, Guedel A. Endotracheal anesthesia and its historical development. Anesth Analg. 1933;12:196-203.

[16] Trubuhovich RV. Early artificial ventilation: the mystery of “Truehead of Galveston” – was he Dr Charles William Trueheart? Crit Care Resusc. 2008 Dec;10(4):338.

[17] Macewen W. General observations on the introduction of tracheal tubes by the mouth, instead of performing tracheotomy or laryngotomy. Br Med J. 1880 Jul 24;2(1021):122-4.

[18] Hirsch NP, Smith GB, Hirsch PO. Alfred Kirstein: pioneer of direct laryngoscopy. Anaesthesia. 1986 Jan;41(1):42-5.

[19] Zeitels SM. Chevalier Jackson’s contributions to direct laryngoscopy. J Voice. 1998 Mar;12(1):1-6.

[20] Burkle CM, Zepeda FA, Bacon DR, Rose SH. A historical perspective on use of the laryngoscope as a tool in anesthesiology. Anesthesiology. 2004 Apr;100(4):1003-6.

[21] McLachlan G. Sir Ivan Magill KCVO, DSc, MB, BCh, BAO, FRCS, FFARCS (Hon), FFARCSI (Hon), DA, (1888-1986). Ulster Med J. 2008 Sep;77(3):146-52.

[22] Magill I. An improved laryngoscope for anaesthetists. Lancet. 1926;207(5349):500.

[23] Scott J, Baker PA. How did the Macintosh laryngoscope become so popular? Paediatr Anaesth. 2009 Jul;19 Suppl 1:24-9.

[24] van Zundert TC, Brimacombe JR, Ferson DZ, Bacon DR, Wilkinson DJ. Archie Brain: celebrating 30 years of development in laryngeal mask airways. Anaesthesia. 2012 Dec;67(12):1375-85.

[25] Grmec S. Comparison of three different methods to confirm tracheal tube placement in emergency intubation. Intensive Care Med. 2002 Jun;28(6):701-4.

[26] Rudraraju P, Eisen LA. Confirmation of endotracheal tube position: a narrative review. J Intensive Care Med. 2009 Sep-Oct;24(5):283-92.

Categories
Feature Articles

Penicillin allergies: facts, fiction and development of a protocol

Penicillins, members of the beta-lactam family, are the most commonly prescribed antibiotic class in Australia. Beta-lactam agents are used in a sexual health setting for the management of syphilis, uncomplicated gonococcal infections and pelvic inflammatory disease. Patients frequently report allergies to penicillin; such a label can be protective, but it is counterproductive if it does not represent a ‘true’ allergy. Features of a reported reaction may be stratified as either high or low risk, which has implications both for re-exposure to penicillins and for cross-reactivity to other members of the beta-lactam family, such as cephalosporins. We reviewed the evidence surrounding penicillin allergies in the context of developing a local protocol for penicillin-allergic patients at a sexual health clinic.

Case scenario

A 27-year-old male is referred to a sexual health clinic by a general practitioner (GP). He presents with a widespread maculopapular rash, fever and malaise for the past four days. While he does not describe any other symptoms, he did notice a painless genital ulcer approximately four weeks ago. The ulcer resolved spontaneously; hence he did not initially seek medical advice. He does not have a stable sexual partner and mentions engaging in several episodes of unprotected sex with both women and men in the previous three months. Secondary syphilis is the suspected diagnosis given the widespread rash and preceding chancre, and testing confirms this with a positive syphilis enzyme-linked immunoassay (EIA) screening test, Treponema pallidum particle agglutination assay (TPPA), and a rapid plasma reagin (RPR) titre of 1:32. As part of a sexual health screen, he also tests positive for rectal gonorrhoea by culture. The treatment regimen includes 1.8 g intramuscular benzathine penicillin for syphilis, in addition to 500 mg intramuscular ceftriaxone and 1 g oral azithromycin for gonorrhoea, all given as stat doses. Before signing the drug order, the clinician asks about any allergies. The patient mentions having an allergic reaction to penicillin when he was six years old but cannot remember any particular details. What is the plan now?

Introduction

This case presents a challenging scenario for the clinician. In this article, we hope to outline some of the facts surrounding penicillin allergies, dismiss some myths and provide a systematic approach to aid decision-making, especially in the sexual health setting where there are limited treatment options for gonorrhoea and syphilis.

The beta-lactam family of antibiotics is one of the most commonly prescribed antibiotic classes in medicine. The beta-lactam ring forms the structural commonality between the different types of penicillins and is also shared with other drug classes, such as the cephalosporins and carbapenems.

Penicillin allergy is the most commonly reported medication allergy, whether by the patient or by medical providers. [1] The implications of this ‘label’ can be either protective or counterproductive. For those patients with a severe previous reaction, such as anaphylaxis, the allergy label is important, and re-exposure can prove disastrous and potentially fatal. However, for other patients with a minor or inconclusive reaction, not administering penicillin may deny the patient first-line, efficacious treatment. Additionally, there are concerns that treating patients with alternative agents in this context contributes to the development of resistance, which is of public health concern. [2]

Whilst the use of beta-lactam antibiotics crosses many realms of medicine, this article is written in the context of developing a protocol for the management of patients with penicillin allergy in an urban sexual health clinic. It is not designed to provide guidance outside of this setting, nor to replace existing protocols in other clinical units.

Rationale for a protocol

Management of patients with penicillin allergy requiring beta-lactam treatment was reviewed as part of an overall revision of internal treatment guidelines. Sexually transmitted infections pose treatment challenges whereby certain conditions or patient sub-groups (e.g. pregnant women) have no equally efficacious or appropriate alternatives to penicillins. [3] For example, one acceptable alternative to penicillin for the treatment of syphilis is oral doxycycline; however, the potential harm associated with this treatment (permanent dental staining) contraindicates its use in pregnancy. [4]

Development process

Senior clinicians at the sexual health clinic provided the protocol brief in May 2013. This included, but was not limited to: reviewing existing guidelines for the management of penicillin-allergic patients at both a national and international level; reviewing literature on the incidence of penicillin allergy and on cross-reactivity rates in patients with a documented history of penicillin allergy; formulating a protocol, based on existing protocols and evidence, applicable to managing patients with a penicillin allergy; designing a flowchart summarising the protocol in a clear manner, including clear decision-making and referral points in addition to an estimation of risk; engaging senior nursing staff and the director to assess the usability and practicality of the protocol; and presenting a draft for consideration to medical and nursing personnel, with subsequent review, endorsement and implementation.

Use of penicillins and cephalosporins for treatment of sexually transmitted infections

Current protocols in the clinic recommend beta-lactam antibiotics as first-line treatment for the following conditions, in accordance with national guidelines [5,6]:

Syphilis

IM benzathine penicillin 1.8 g as a single stat dose for early syphilis (including early latent syphilis), or three doses at weekly intervals for late latent syphilis

Uncomplicated gonococcal infections

IM ceftriaxone 500 mg stat (in conjunction with azithromycin 1 g orally)

Pelvic inflammatory disease (PID)

IM ceftriaxone 500 mg stat (in conjunction with doxycycline 100 mg BD for 14 days and metronidazole 400 mg BD for 14 days)

Mechanism of penicillin allergy and associated reactions

This article focuses on the main concern with penicillin allergy, which is the possibility of anaphylaxis, an IgE-mediated (type I) hypersensitivity reaction. However, the clinician should be aware that delayed-type hypersensitivity reactions (type IV) can also occur, causing exanthema or other skin eruptions such as morbilliform reactions. These are not determined by the beta-lactam ring or side chains of the antibiotics, but rather by the ability of a drug to act independently as a hapten and become antigenic in nature. [7] This antigenicity triggers an immune response through interaction with antigen-presenting cells (APCs) and T-cells. [8]

The major determinant of anaphylactic reactions to beta-lactam antibiotics is the beta-lactam ring, which is shared amongst the penicillin class, as this ring binds to lysine residues on endogenous proteins to form a hapten. [9] However, there is also evidence to suggest that an IgE-mediated reaction can occur with the minor determinants of the molecule, namely the R chain (acyl) side groups of individual penicillins (Figure 1). IgE binding results in mast cell activation and histamine release, in addition to the release of other inflammatory mediators.

In patients who develop an IgE-mediated reaction, there is subsequent risk of a more severe reaction on re-exposure. IgE-mediated reactions can have effects on the following body systems: dermatologic (urticarial rashes, angioedema, macroglossia), respiratory (asthma, bronchospasm, wheezing, laryngeal swelling), gastrointestinal (abdominal pain, vomiting, diarrhoea, cramping), and cardiovascular (hypotension, vascular collapse, altered consciousness, shock).

Whilst no universal definition exists for anaphylaxis, two commonly accepted definitions in Australia are: (1) the acute onset of illness, with typical skin features (urticarial rash, erythema/flushing, angioedema) plus involvement of one other body system; or (2) the acute onset of hypotension, bronchospasm or upper airway obstruction with or without skin features. [10]

Figure 1. The beta-lactam ring is found in penicillins and several other closely related drug classes. Variants in the R chain are found within each drug class.

Incidence of penicillin allergy and cross-reactivity with cephalosporins

Various early studies suggested the incidence of penicillin allergy to be approximately 2% per course, with anaphylaxis estimated in 0.05% of all penicillin courses. [1] However, a large retrospective cohort study in the UK, which looked at 3,375,162 patients prescribed subsequent courses of penicillin, found a much lower incidence of only 0.18%. [11] It must be noted when quoting these figures that the definition of an ‘event’ in this study did not include asthma or eczema; when these were included, the incidence increased from 0.18% to 9%, which makes interpretation difficult as asthma is considered a feature of allergy in Australian definitions. [11]

It is difficult to establish accurately whether the reported incidence of penicillin allergy has changed over time. Multiple protocols exist internationally for the diagnosis of penicillin allergy and subsequent testing. [12] Furthermore, an element of bias may be present from both the patient and the clinician, as the label of an ‘allergy’ can be highly subjective and may remain a permanent feature on a health record without subsequent confirmation.

Many early studies have quoted 10% cross-reactivity between penicillins and cephalosporins. [13,14] Unfortunately, these original studies, conducted over three decades ago, were flawed, poorly designed open studies lacking control groups, and consequently overestimated this figure. [15] Furthermore, it is postulated that the original manufacturing processes of cephalosporins contributed to inflated allergy rates, due to cross-contamination with penicillin compounds. [15] Since manufacturing processes have been refined, there has been a reduced incidence of cross-reactivity. If only studies conducted after 1980 are considered, a patient with a confirmed penicillin allergy (by positive skin test) will react to a cephalosporin on fewer than 2% of occasions. [15]

There is further evidence to suggest that there is less cross-reactivity with newer cephalosporins (second generation and onwards). A recent review and meta-analysis have found that third-generation cephalosporins, such as ceftriaxone, have a cross-reactivity rate of only 0.8% in patients confirmed to be penicillin-allergic by skin testing, compared with 2.9% for older cephalosporins such as cephalexin. [16,17] Furthermore, these papers established that the risk of anaphylaxis due to cephalosporin cross-reactivity is quite small, as anaphylactic reactions to cephalosporins occurred more often in patients with a negative penicillin skin test than in those with a positive test. [16] This demonstrates that anaphylactic reactions are often unpredictable.

Assessing the type and severity of penicillin allergy

Evidence suggests that history from the patient, especially when vague or not documented, is insufficient for assessing the degree of penicillin allergy. [18] Furthermore, the potential for allergy changes over time, with 80% of individuals with a documented IgE-mediated reaction having no evidence of reactive IgE ten years after the initial reaction. [19] Most IgE-mediated reactions occur within seconds (IV administration) or within an hour if administered orally with food. [20] Reactions outside of this timeframe are less likely to be IgE-mediated.

Hence, when taking a history, specific information should be sought to identify the presence of low- or high-risk features of the previous penicillin-related reaction, and consequently stratify the risk of future allergic reactions to a penicillin or cephalosporin.

High-risk features include: reaction occurring within one hour of administration; reaction occurring within the last 10 years; a well-documented history of features suggestive of anaphylaxis (as defined previously); a reaction requiring hospitalisation; and features of type IV hypersensitivity reactions including blistering, mucosal involvement, early-onset desquamation (peeling), or blood abnormalities such as derangements in liver function, renal function, or eosinophil counts. [15]

Low-risk features include: reaction occurring more than one hour after administration; reaction occurring more than 10 years ago; a vague, unclear or poorly documented history; a localised reaction of mild severity involving one system only (for example, a rash not displaying any high-risk features, or stomach cramping); and a reaction that is not a true allergy, such as an amoxicillin–Epstein-Barr virus rash. [10,15]

Role of the radioallergosorbent test, skin testing and desensitisation for penicillin allergy

In any patient with features of penicillin allergy, referral for immunological skin testing and/or desensitisation should be considered; skin testing can be performed quickly and is cost-effective. The radioallergosorbent test (RAST) is more expensive, takes time to analyse, and has a poor positive predictive value. Desensitisation is performed in a supervised setting, can take 4-12 hours to complete in an acute setting, and results in a temporary reduction in immunogenic potential towards penicillins or associated medications. [21] The first dose of penicillin is usually given immediately following desensitisation. It must be noted that penicillin skin testing should also include testing against related beta-lactams such as cephalosporins, as a positive penicillin skin test cannot accurately predict cross-reactivity. [17]

Recommended drug choice in penicillin allergic patients

Based on the information outlined, and current existing guidelines, the following recommendations have been derived:

  1. Any patient who has high-risk features on history should not receive a beta-lactam agent. [22] An alternative efficacious drug should be prescribed. If there is no efficacious alternative, or a cephalosporin is required, referral to immunology should occur for skin testing and desensitisation if needed.
  2. In those patients who have only low-risk features on history, the following question must be addressed: ‘which antibiotic is required?’ If a penicillin-based compound (e.g. benzathine penicillin) is required, the same precautions should be taken as mentioned above. However, if a cephalosporin such as ceftriaxone is required, the medication may be administered, as the risk is less than 1% for third-generation cephalosporins. In this setting, the patient should be advised of the small but possible risk of an allergic reaction. [16] (A simplified sketch of this decision logic is shown after this list.)
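
To make the decision points explicit, the branching logic of these two recommendations can be summarised as follows. This is a minimal illustrative sketch only; the function and its inputs are hypothetical and are not part of the published protocol or its flowchart, and clinical judgement and local emergency protocols always take precedence.

```python
# Hypothetical sketch of the risk-stratification logic described above.
# Inputs and outputs are simplified for illustration only.

def penicillin_allergy_pathway(high_risk_history: bool, required_agent: str) -> str:
    """Suggest an action for a patient reporting a penicillin allergy."""
    if high_risk_history:
        # High-risk features: avoid all beta-lactams; if no efficacious
        # alternative exists, refer to immunology for skin testing and
        # desensitisation if needed.
        return "avoid beta-lactams; use alternative agent or refer to immunology"
    if required_agent == "penicillin":
        # Low-risk features but a penicillin (e.g. benzathine penicillin) is
        # required: take the same precautions as for high-risk patients.
        return "refer to immunology for skin testing/desensitisation"
    if required_agent == "third-generation cephalosporin":
        # Low-risk features and a cephalosporin such as ceftriaxone is required:
        # cross-reactivity risk is <1%; counsel and administer with monitoring.
        return "administer with counselling about small risk; observe"
    return "seek senior clinician advice"
```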

Routine monitoring

Regardless of what antibiotic is prescribed, routine monitoring is advised as all serious allergic reactions need appropriate medical care. Observation facilities in addition to life support equipment and staff trained in first aid are essential in administering stat doses of antibiotics for the treatment of sexually transmitted infections. Signs and symptoms to look for during the observation period include the following: rash, swelling around the face/tongue/eyes, breathing difficulty, wheeze, vomiting, diarrhoea, abdominal pain, syncope or pre-syncope (low blood pressure), altered consciousness or shock. If any of these features are present or there is concern, refer to the local anaphylaxis and emergency protocols.

Case outcome

Although specific details of the previous allergic reaction could not be recalled by the patient, collateral history from his family suggested an urticarial reaction at the age of seven, with no other systemic features and no hospitalisation required. Consequently, the patient was deemed at low risk of a cross-reactive allergic reaction to ceftriaxone. He received the originally prescribed treatment of 500 mg IM ceftriaxone and 1 g oral azithromycin without any adverse effects or features of an allergic reaction. To treat his syphilis, he underwent rapid desensitisation and subsequently received 1.8 g intramuscular benzathine penicillin, with no adverse effects.

Conflicts of interest

None declared.

Correspondence

J Floridis: john.floridis@nt.gov.au

References

[1] Solensky R. Allergy to penicillins. In: UpToDate. [Online].; 2012 [cited 2013 June 1] Available from: http://www.uptodate.com/contents/allergy-to-penicillins?source=search_result&search=allergy+to+penicillins&selectedTitle=1~150.

[2] Solensky R. Penicillin allergy as a public health measure. J Allergy Clin Immun. 2014;133(3):797-8.

[3] Emerson C. Syphilis: A review of the diagnosis and treatment. Open Infect Dis J. 2009;3:143-7.

[4] Rossi S, editor. Australian Medicines Handbook. Adelaide: Australian Medicines Handbook Pty Ltd; 2013.

[5] STD Services. Diagnosis and Management of STDs (including HIV infection). 7th ed. Adelaide: Royal Adelaide Hospital; 2013.

[6] Antibiotic Expert Group. Therapeutic Guidelines: Antibiotics. Version 14. Melbourne: Therapeutic Guidelines Limited; 2010.

[7] Adam J, Pichler W, Yerly D. Delayed drug hypersensitivity: models of T-cell stimulation. Brit J Clin Pharmaco. 2011;71(5):701-7.

[8] Friedmann P, Pickard C, Ardern-Jones M, Bircher A. Drug-induced exanthemata: a source of clinical and intellectual confusion. Eur J Dermatol. 2010;20(3):255-9.

[9] Arroliga M, Pien L. Penicillin allergy: Consider trying penicillin again. Clev Clin J Med. 2003;70(4):313-26.

[10] Australasian Society of Clinical Immunology and Allergy. ASCIA. [Online].; 2012 [cited 2013 May 28] Available from: www.allergy.org.au/health-professionals/anaphylaxis-resources/adrenaline-autoinjector-prescription.

[11] Apter A, Kinman J, Bilker W, Herlim M, Margolis D, Lautenbach E, et al. Represcription of penicillin after allergic-like events. J Allergy Clin Immun. 2004;113(4):764-70.

[12] Macy E. The clinical evaluation of penicillin allergy: what is necessary, sufficient and safe given the materials currently available? Clin Exp Allergy. 2011;41(11):1498-1501.

[13] Dash C. Penicillin allergy and the cephalosporins. J Antimicrob Chemoth. 1975;1(3):107-18.

[14] Petz L. Immunologic cross-reactivity between penicillins and cephalosporins: a review. J Infect Dis. 1978;137 Suppl:S74-S79.

[15] Solensky R. Penicillin-allergic patients: Use of cephalosporins, carbapenems and monobactams. In: UpToDate. [Online].; 2013 [cited 2013 June 1] Available from: http://www.uptodate.com/contents/penicillin-allergic-patients-use-of-cephalosporins-carbapenems-and-monobactams?source=search_result&search=penicillin+allergic+patients&selectedTitle=1~150.

[16] Pichichero M. A review of evidence supporting the American Academy of Pediatrics recommendation for prescribing cephalosporin antibiotics for penicillin-allergic patients. Pediatrics. 2005;115(4):1048-57.

[17] Pichichero M, Casey J. Safe use of selected cephalosporins in penicillin-allergic patients: a meta-analysis. Otolaryngol Head Neck Surg. 2007;136(3):340-7.

[18] Wong B, Keith P, Waserman S. Clinical history as a predictor of penicillin skin test outcome. Ann Allergy Asthma Im. 2006;97(2):169-74.

[19] Blanca M, Torres M, Garcia J. Natural evolution of skin test sensitivity in patients allergic to beta-lactam antibiotics. J Allergy Clin Immun. 1999;103(5 Pt 1):918-24.

[20] Pichler W. Drug allergy: Classification and clinical features. In: UpToDate. [Online].; 2013 [cited 2013 May 28] Available from: http://www.uptodate.com/contents/drug-allergy-classification-and-clinical-features?source=search_result&search=drug+allergy%3A+classification&selectedTitle=1~150.

[21] Centers for Disease Control and Prevention. Sexually Transmitted Diseases Treatment Guidelines. Management of persons who have a history of penicillin allergy. 2011 January 28.

[22] Smith W. Adverse drug reactions. Allergy? Side-effect? Intolerance? Aust Fam Physician. 2013;42(1-2):12-6.

Categories
Feature Articles

Exercising patient-centred care: A review of structured physical activity, depression and medical student engagement

Structured physical activity has a wide range of benefits that include improving mood and preventing chronic disease. Recently, there has been an explosion of research aimed at treating diseases such as depression using nothing more than exercise. This article presents an overview of research conducted into the use of exercise to treat depression. As a body of work, the literature finds it to be a practice that has significant clinical benefits; however, its implementation is not straightforward. Issues concerning exercise adherence have hampered studies and force us to ask whether prescribing exercise for sufferers of depression is indeed appropriate. Nonetheless, is there a role for medical students in encouraging physical activity as treatment? If we re-examine the use of exercise from a patient-centred perspective, medical students have an opportunity to engage with patients, promote exercise and possibly prevent depression.

Introduction

Structured physical activity has a wide range of health benefits that include the prevention of chronic diseases such as cancer, cardiovascular disease, diabetes and obesity. [1] There is also consensus that exercise has a short-term ‘feel good’ effect that improves mood and wellbeing. [2-4] Recently, converging interest in these two areas has spawned an explosion of research aimed at treating diseases such as depression using nothing more than exercise. [5-7] A 2012 meta-analysis of over 25 trials found that prescribing exercise to treat depression is on par with pharmacological and psychological interventions, and that exercise alone is moderately more effective than no therapy. [8] The study also highlights that prescribing exercise for depression is not straightforward. Poor patient attendance rates, along with issues including exercise adherence and the type, duration and intensity of exercise, all raise the question of whether prescribing exercise for sufferers of depression is indeed appropriate. The authors of the study admit the implementation is complex and that further study is required. Nonetheless, is there a role for medical students in encouraging physical activity as treatment? Can our skills in motivational interviewing and goal setting play a role? If we re-examine the use of exercise from a patient-centred perspective, medical students can promote exercise adherence and support those who are already exercising to keep doing so. In doing so, we can facilitate the prescription of exercise and possibly prevent depression. [5,7,9]

A background to depression

Depression affects a staggering 350 million people globally, with sufferers commonly reporting changes in emotional, cognitive and physical behaviour. [10] Depression also presents with high rates of comorbidity (the occurrence of more than one condition or disease). [11] More locally, a 2007 National Survey of Mental Health and Wellbeing found that almost half of the Australian population aged 18-85 (7.3 million people) had experienced a mental illness at some point in their lifetime. [3] A study by the Australian Bureau of Statistics (ABS) in the same year revealed that of those people suffering from mental illness, only 35% actually sought treatment, suggesting that within Australia there are 2.1 million potential patients going without much-needed assistance. [12]

Mental illness, while indiscriminate, has a higher incidence within certain sub-populations. Perhaps surprisingly, doctors and medical students experience higher rates of depression and stress than the general population. Medical students report increased depressive symptoms as a result of medical school, while a significant number of doctors report that they are less likely to seek treatment for depression despite their awareness of the condition. [13,14] With a reported 3668 students admitted to medical schools in Australia during 2013, these statistics highlight the importance of medical students being able to identify and understand depression. [15]

Pathophysiology and current treatment options

Therapeutic treatment options for patients who are depressed fall into two broad categories: psychological and pharmacological. Typically, people are treated using cognitive behavioural therapy (CBT), anti-depressant medication, or a combination of both. [4,16,17] Although depression can be treated, a simple pathogenesis is yet to be found. Current opinion centres on depression being a result of chemical imbalances within the brain, specifically the action of monoamine neurotransmitters, including dopamine, serotonin, and norepinephrine. [17] This approach has allowed the pharmaceutical industry to develop medications including selective serotonin reuptake inhibitors (SSRIs), which target neurotransmitter reuptake to help restore their usual balance and so reduce symptoms. [18]

One promising new development in the quest to understand the pathophysiology of depression concerns the emergence of inflammation as a mediator of depression. A recent meta-analysis of 24 separate studies found that depression is accompanied by immune dysregulation and activation of the inflammatory response system (IRS). [19] Specifically, when compared to non-depressed patients, sufferers of major depression were found to have significantly higher (p < 0.001) concentrations of the pro-inflammatory cytokines tumour necrosis factor-α and interleukin-6 in their blood. Once the increased cytokine signal reaches the brain, it is able to down-regulate the synthesis, release and reuptake of the very same monoamines targeted by antidepressant medications. [20] Understanding and preventing this interaction from occurring could lead to promising new treatments for depression.

Despite widespread prescription, the use of antidepressant medication is not without its drawbacks. Many patients experience unwanted side effects and stop taking their medication, while others simply do not want to take medicine that will make them feel worse. [21] Additionally, a 2012 report on Australia’s overall health revealed that some patients who experience depression suffer worse health outcomes due to the social stigma associated with taking antidepressants. [22] As a result, many patients choose to shun medication altogether, opting for alternative therapies such as acupuncture and yoga to help manage depression. [21] Viewed from the patient’s perspective, the use of antidepressants can be seen as a choice between the lesser of two evils: treatment or depression.

Exercise and depression

The link between exercise and relieving depression has been a difficult one to make. A 2009 meta-analysis published by The Cochrane Collaboration evaluated the use of exercise, defined as “repetitive bodily movement done to improve or maintain … physical fitness”, in treating depression. [23] Initially, the Cochrane review pooled data from 25 trials and found a large clinical effect in the reduction of depressive symptoms when compared to a placebo or no treatment at all. However, in 2012, when the authors repeated the study, correcting for what they saw as errors and bias in the previous analysis, the analysis found only a moderate effect in relieving depressive symptoms. [8] Interestingly, in both studies, exercise was found to be on par with antidepressant medication in treating depression. Nonetheless, despite the downgraded clinical effect, the authors of the study continue to find it reasonable to prescribe exercise to people with depressive symptoms; however, they caution that only a moderate effect should be expected at best.

Research into a preventative, rather than curative approach to depression is also being conducted. In a large-scale longitudinal study of depression, researchers hypothesised that an inverse relationship existed between the level of physical activity and depressive symptoms: those who were more active should not become depressed. [9] Researchers grouped exercise into three categories: light activity such as ‘necessary household chores’, moderate activity which included regular walks, and strenuous activity such as ‘participation in competitive sports’. The study revealed that participants in the ‘competitive sports’ category reported fewer depressive symptoms than those in the ‘necessary household chores’ category – an unsurprising finding given the ability of exercise to lift mood. But perhaps more revealing is the finding that participants who decreased their activity from moderate to light, and from strenuous to light, reported the greatest increase in depressive symptoms, implying that exercise may act as a buffer against depression.

Exercising patient-centred care

While scientific analysis and treatment forms the foundation of modern health care, a focus on the patient as a person should be paramount. In 2010, the Australian government endorsed patient-centred care, a framework that enshrines the values and individual needs of the patient. Since then, patient-centred care has broadened to encompass ‘an approach to the planning, delivery, and evaluation of health care that is grounded in mutually beneficial partnerships among healthcare providers, patients, and families’. [24] While relatively new, patient-centred care now shapes the delivery of health care in Australia. It has also reaffirmed the rights of the individual in developing and delivering health care.

Patient-centred care forces us to examine whether exercise as an intervention is at all appropriate for the depressed. While there may be some clinical benefit, is it really patient-centred? The Cochrane Collaboration study of exercise as a treatment for depression highlights low exercise adherence and high dropout rates amongst participants. [8] While the reasoning behind the dropouts was absent from the report, one could easily imagine the potential challenges in asking a person who is already lacking in drive and motivation to participate in an exercise program. Is it possible that in considering the prescription of exercise to the depressed we are violating key Hippocratic notions including that of non-maleficence? Are we exposing an already vulnerable individual to a situation where they are likely to fail and experience a decline in their mental health as a result? [1] It would seem that when it comes to exercise and depression, the patient-centred perspective would advise against such risks. [24]

While it is clear that exercise is good for us, it is also clear that its prescription for depression is fraught with ethical issues. As medical students, how then can we engage in such an uncertain and potentially lethal landscape when we do not possess the skills to interact with depression in any therapeutic manner? If we re-examine the issue from a patient-centred perspective, we are able to view exercise as a preventative, rather than curative, approach to depression. This opens the door to medical student engagement. For example, we can use the motivational and goal-setting skills taught as part of the preclinical curriculum to help patients achieve and maintain their exercise goals. This could involve scheduling a progress telephone call to maintain the patient’s motivation, a periodic home visit in an effort to reduce recidivism, or the discussion and identification of barriers that may prevent them from achieving their goals. Placements are the ideal environment for us to develop these mutually beneficial partnerships. Whatever the effort, promoting a healthier and more active lifestyle is a patient-centred perspective that all medical students should feel comfortable in advocating.

Conclusion

Re-examining the issue from a patient-centred perspective allows us to see exercise as a multi-benefit, primary prevention tool that may also safeguard against developing depression. Moreover, this is wholly within our advocacy as medical students. Not only does this approach echo recommendations supported by the Australian Government to include collaborative, patient-centred care programs in undergraduate health programs, it also provides many practical opportunities for medical student engagement. For example, during placements in rural and remote areas, students often participate in community-based activities where we leverage the associative influence of our medical profession to promote the benefits of a healthier and more active lifestyle.

Despite reviews of studies inferring a protective effect of exercise against developing depressive symptoms, prescribing exercise as a treatment option requires skills and experience beyond the scope of medical students. However, medical students do have skills in motivational interviewing and goal-setting strategies that enable us to promote exercise adherence. Therefore, if we consider exercise as a tool for disease prevention that may also safeguard against depression, patients will experience better health outcomes and medical students can be active in its prescription.

If you or someone you care about is in crisis and you think immediate action is needed, call emergency services (triple zero – 000) or contact your doctor or local mental health crisis service, such as Lifeline (13 11 14).

Correspondence

D Lowden: danlowden@gmail.com

Acknowledgements

I’d like to thank the James Cook University Ecology of Health subject coordinators and Dominic Lopez for the valuable feedback provided in preparing this article.  

Conflict of interest declaration

None declared. 

References

[1] Allen K, Morey M. Physical activity and adherence. In: Bosworth H, editor. Improving patient treatment adherence. New York: Springer; 2010. p. 9-38.

[2] Ströhle A. Physical activity, exercise, depression and anxiety disorders. J Neural Transm. 2009;116(6):777-84.

[3] Australian Institute of Health and Welfare (AIHW). Comorbidity of mental disorders and physical conditions 2007 [Internet]. 2012 [cited 2014 May 30]. Available from: http://www.aihw.gov.au/publication-detail/?id=10737421146.

[4] Carlson VB. Mental health nursing: the nurse-patient journey. Philadelphia: W B Saunders Co.; 2000.

[5] Blake H. How effective are physical activity interventions for alleviating depressive symptoms in older people? A systematic review. Clin Rehabil. 2009;23(10):873-87.

[6] Carlson DL. The effects of exercise on depression: a review and meta-regression analysis [dissertation]. Milwaukee: University of Wisconsin; 1991.

[7] Pinquart M, Duberstein PR, Lyness JM. Effects of psychotherapy and other behavioral interventions on clinically depressed older adults: a meta-analysis. Aging Ment Health. 2007;11(6):645-57.

[8] Rimer J, Dwan K, Lawlor DA, Greig CA, McMurdo M, Morley W, Mead GE. Exercise for depression. Cochrane Database Syst Rev. 2012;7:CD004366.

[9] Graddy JT, Neimeyer GJ. Effects of exercise on the prevention and treatment of depression. Journal of Clinical Activities, Assignments & Handouts in Psychotherapy Practice. 2002;2(3):63-76.

[10] World Health Organization. Depression: Fact sheet no. 369 [Internet]. 2012 [cited 2014 March 14]. Available from: http://www.who.int/mediacentre/factsheets/fs369/en/.

[11] World Health Organization. The world health report 2001: mental health – new understanding, new hope [Internet]. 2001 [cited 2014 May 24]. Available from: http://www.who.int/whr/2001/en/.

[12] Department of Health and Ageing. National mental health report 2010: Summary of 15 years of reform in Australia’s mental health services under the National Mental Health Strategy 1993-2008. Canberra: Commonwealth of Australia; 2010.

[13] beyondblue. Doctors’ mental health program [Internet]. 2014 [cited 2014 March 30]. Available from: http://www.beyondblue.org.au/about-us/programs/workplace-and-workforce-program/programs-resources-and-tools/doctors-mental-health-program.

[14] Rosal MC, Ockene IS, Ockene JK, Barrett SV, Ma Y, Hebert JR. A longitudinal study of students’ depression at one medical school. Acad Med. 1997;72(6):542-6.

[15] Medical Deans Australia and New Zealand Inc. Annual tables [Internet]. 2013 [cited 2014 March 30]. Available from: http://www.medicaldeans.org.au/statistics/annualtables.

[16] Imel ZE, Malterer MB, McKay KM, Wampold BE. A meta-analysis of psychotherapy and medication in unipolar depression and dysthymia. J Affect Disord. 2008 Oct;110(3):197-206.

[17] Carlson NR. Foundations of physiological psychology. 7th ed. Boston, MA: Allyn & Bacon; 2008. p. 108-22.

[18] Nutt DJ. Relationship of neurotransmitters to the symptoms of major depressive disorder. J Clin Psychiatry. 2008;69 Suppl E1:4-7.

[19] Dowlati Y, Herrmann N, Swardfager W, Liu H, Sham L, Reim EK, et al. A meta-analysis of cytokines in major depression. Biol Psychiatry. 2010;67(5):446-57.

[20] Miller AH, Maletic V, Raison CL. Inflammation and its discontents: the role of cytokines in the pathophysiology of major depression. Biol Psychiatry. 2009;65(9):732-41.

[21] Sarris JK, Newton DJ. Depression and Exercise. J Comp Med. 2008;7(3):48-50.

[22] Australian Institute of Health and Welfare (AIHW). Australia’s health. Canberra: AIHW; 2012.

[23] Mead GE, Morley W, Campbell P, Greig CA, McMurdo M, Lawlor DA. Exercise for depression. Cochrane Database Syst Rev. 2009 Jul 8;(3):CD004366.

[24] Australian Commission on Safety and Quality in Health Care (ACSQHC). Patient-centred care: Improving quality and safety through partnerships with patients and consumers. Canberra: ACSQHC; 2011.

Categories
Feature Articles

Approaching autism

Autism Spectrum Disorder is a disorder of social communication in individuals who also display repetitive and restrictive interests. Diagnosed in early childhood, affected children struggle to develop the social relationships required for further learning and independent living. This article discusses changes to the diagnosis, how the diagnosis is made, and the prevalence, causes and interventions. Importantly, this review guides medical students towards an understanding of what to expect in individuals with autism spectrum disorder and how to interact with them.

What is autism?

Autism is diagnosed according to the American Psychiatric Association (APA) guidelines in the Diagnostic and Statistical Manual of Mental Disorders (DSM), [1] or alternatively, according to the International Classification of Diseases (ICD) published by the World Health Organization. [2] Mostly because of convention, the DSM is more often used in the diagnosis of autism in Australia.

From 2013, ‘autism’ has been used as a short form of the term Autism Spectrum Disorder (ASD). A diagnosis of ASD applies where there is evidence of functional impairment caused by:

Problems reciprocating social or emotional interaction, including difficulty establishing or maintaining back-and-forth conversations and interactions, inability to initiate an interaction, and problems with shared attention or sharing of emotions and interests with others.

Severe problems maintaining relationships, ranging from a lack of interest in other people to difficulties in pretend play and engaging in age-appropriate social activities, and problems adjusting to different social expectations.

Nonverbal communication problems such as abnormal eye contact, posture, facial expressions, tone of voice and gestures, as well as an inability to understand these. [1]

Additionally, two of the four symptoms related to restricted and repetitive behaviour need to be present:

Stereotyped or repetitive speech, motor movements or use of objects.

Excessive adherence to routines, ritualised patterns of verbal or nonverbal behaviour, or excessive resistance to change.

Highly restricted interests that are abnormal in intensity or focus.

Hyper- or hypo-reactivity to sensory input or unusual interest in sensory aspects of the environment.

Symptoms must be present in early childhood, but may not become fully manifest until social demands exceed limited capacities. There must be an absence of general developmental delay. Finally, symptoms should not be better described by another DSM-5 diagnosis.

From the criteria it is important to note that there are many combinations of symptoms that can lead to a diagnosis of ASD. Therefore, not only do individuals with ASD share the same degree of uniqueness as the rest of the general population, but even the type of ASD they experience can be unique.

Confusion can arise in the use of the terms ‘autism’ and ‘autism spectrum disorders.’ Part of the reason for this emerges from the subtypes of Pervasive Developmental Disorders (PDD) previously recognised by the APA. PDD previously contained the subtypes Asperger’s Disorder, Childhood Disintegrative Disorder, Autistic Disorder and PDD-Not otherwise specified (NOS). While the ICD also recognizes Atypical Autism, recent changes to the DSM should simplify the nosology. [2]

The current ASD diagnostic criteria came into effect in May 2013 and are expected to identify approximately 90% of children with a clinical diagnosis made under the previous DSM-IV. [3] A new diagnosis called Social Communication Disorder (SCD) has been created. Simply put, SCD is ASD without the restrictive and repetitive interests. It is not yet clear how many children will receive a diagnosis of SCD, nor how many children previously identified with a DSM-IV diagnosis of PDD would meet the criteria for SCD only.

Diagnoses made under DSM-IV guidelines are still valid. A new diagnosis is not required simply because of the change in criteria. However, individuals being assessed for ASD now, and in the future, will be diagnosed under the new criteria.

Features of ASD

Social communication, interaction and motivation

ASD is principally characterized by social communication and interaction deficits. Individuals often experience difficulty with interpreting facial expressions, tone of voice, jokes, sarcasm, gestures and idioms. Imagine the literal meanings of “Fit as a fiddle,” “Bitter pill to swallow” and “Catch a cold.” Some people with ASD may have only limited speech or may be completely non-verbal. [4,5] Echolalia, pronoun reversal, unusual vocalisations and unusual accents are common. [4,5] Alternative, augmentative and visually based communication techniques may help when a child is unable to consistently follow verbal instructions. In this regard, touch-screen portable devices may appeal to their visual and pattern-orientated learning strengths (see Figure 1). [6]

Difficulties with social interactions mean that affected individuals often grow up without a social circle, and as a consequence, miss out on peer-initiated learning opportunities. [7] Such challenges include difficulty with understanding unwritten social rules such as personal space and initiating conversation. [7] They can appear to be insensitive, because they are unable to perceive how someone else is feeling. Turn taking and sharing are not intuitive or easily learned, and individuals need to be taught how to do this. An inability to express feelings, emotions or needs results in inappropriate behaviour such as unintentionally aggressive actions. [8] This can lead to isolation, a failure to seek comfort from others and signs of low self-esteem. [7] Individuals can also suffer from hypersensitivity to sensory stimuli, which may lead them to prefer limited social contact. Individuals do feel enjoyment and excitement; however, this tends to be a personal experience and often goes unshared, which may be due to a lack of need for the reward of another’s attention and praise. [9]

Imagination

Individuals with ASD may experience challenges with social imagination. [10] Individuals are less likely to engage in make-believe play and activities. They are less likely to determine and interpret others’ thoughts, feelings and actions, and are, as a consequence, unable to appreciate that other people may not be interested in their topic of interest.

Individuals with ASD are often unable to predict outcomes, or foresee what might occur next, including hazards. [11] This leads to a difficulty in coping in new or unfamiliar situations, or making plans for the future. Parents, carers and health professionals often need to stick to routines to avoid unpredicted events that could cause distress.

Sensory and motor processing

Sensory information processing is heightened for tactile input, but reduced for social input, in individuals with ASD. [12] Changes in these inputs are understood to contribute to the repetitive and restrictive interest criteria of the diagnosis. Hence, the state of the tactile environment is important to the wellbeing of individuals with ASD, potentially serving as a source of aggravation or as a refuge from incomprehensible cues. Changes in sensory information processing may present as an inability to distinguish context-relevant stimuli, and as varying capability and capacity to respond to a stimulus (e.g. ignoring some sounds but over-reacting to others). Individuals may also experience difficulty with proprioception and with responding to pain, including temperature extremes. [12,13] They may explore their environment by smelling or mouthing objects, people and surfaces, and as a consequence develop eating behaviours that relate to smell, texture or flavour, sometimes including eating inedible objects. Participating in repetitive movements such as rocking, bouncing, flapping arms and hands, or spinning with no apparent dizziness is sometimes a means of coping with stress, or alternatively may provide pleasurable self-stimulation. [12] Contrary to popular perception, savant skills are uncommon in individuals with ASD. [14]

What causes ASD?

ASD is a multi-factorial disorder. There is no one cause of ASD. The most prominent risk factor is genetics, both familial and de novo. [15] Studies have shown that among monozygotic twins, if one child has ASD, then the other will be affected about 36–95% of the time. [16] In dizygotic twins, if one child has an ASD, then the other is affected up to about 31% of the time. [16] Parents who already have a child with an ASD have a 2%–18% chance of having a second child who is also affected. [17]

In addition to the well-documented increase in chromosomal abnormalities associated with advanced maternal age, the risk of ASD is also associated with advanced paternal age. [18] The current hypothesis to explain this observation is that small, de novo genetic mutations and rearrangements accumulate in the sperm, which are then incorporated into the DNA of the child. [19,20] Other risk factors include premature birth or low birth weight, preeclampsia [21] and in utero exposure to medications, particularly sodium valproate. [22] There is no evidence to support the theory that the measles, mumps and rubella vaccine causes autism. [23]

Three cognitive theories have evolved to explain the behavioural challenges of ASD: the theory of mind deficit, executive dysfunction, and weak central coherence. The first posits that individuals with ASD are unable to recognize mental states in others, leading to social behaviour discordance. [24,25] The second attempts to explain a lack of goal directed behaviour, and lack of behavioural flexibility. [26] Both these cognitive explanations may be underpinned by an early deficit in social motivation, whereby underdevelopment of the brain regions involved in social recognition and response leads to a failure to learn social cues and their contexts. [27] This has been traditionally thought to impact on imitation learning from which social and food reward by infants is derived and goal-directed behaviour emerges, although research in the area is equivocal. [28] Whether this same underdevelopment also delays acquisition of receptive and expressive language is unclear. [29] Finally, the weak central coherence theory is a means of understanding why individuals focus on details and neglect the context. [30] Yet, it could be argued that a neglect of the context emerges from a deficit in social motivation.

Epidemiology

In the community there is considerable concern about an ASD epidemic. Parental reports indicate ASD prevalence may have increased from 1 in 88 in 2007 to 1 in 50 children in 2012. [31] Currently, the prevalence of ASD in Australia is about 1 in 165 children aged 6-12 years. [32] In the USA this prevalence is slightly higher, with 1 in 50–88 eight-year-olds receiving a diagnosis; [31,33] this breaks down to 1 in 31–54 boys and 1 in 143–252 girls. [31,33]

A true appreciation of the changes in ASD prevalence can only come from understanding the historical basis of the diagnosis. [34,35] In this regard, marked changes in prevalence have been caused by nosology changes (e.g. autism was once called childhood schizophrenia) and a number of changes to the APA PDD criteria in the DSM over the past 30 years. [36] Other contributing factors include demographic variables (e.g. where older individuals may have missed receiving a diagnosis, but receive one now with the help of a retrospective investigation such as archival home video), increased awareness of normal childhood development and developmental disorders, changes in testing and protocols, and the sampling of data, such as parents versus clinicians, or state schools versus all children. Other factors that often go unreported include socioeconomic factors (e.g. those with sufficient knowledge and resources are able to seek out professional assistance and are more likely to receive a diagnosis than those without) and pressure on clinicians to provide a diagnosis, thereby assisting struggling parents access services and financial support. Trying to account for all these factors in an epidemiological study is very difficult. Hence, the true historical prevalence of ASD is difficult to establish. Studies that have tried show no, or a slight, increase in ASD. [35]

How is a diagnosis made?

While parents often suspect developmental delay or ASD, the variability in child development during the first four years can lead to variability in the age of first diagnosis – typically around three years of age. [34] Reliability of the diagnosis also suffers as a consequence. [37] Restrictive and repetitive interests can be difficult to identify before the age of four because even typically developing two- and three-year-olds can show repetitive behaviours. Since the new diagnosis also requires behaviours to cause demonstrable functional impairment (for example, during a child’s interactions at day care), a lag between symptom onset and diagnosis is likely to continue.

In Australia, paediatricians, clinical psychologists, psychiatrists and speech pathologists specialising in the field of paediatrics or adolescence make a formal diagnosis of ASD. Further, within these specialisations, diagnoses are likely to be made by practitioners with experience in testing and diagnosing ASD.

A typical diagnostic evaluation involves a multi-disciplinary team including pediatricians, psychologists, speech and language pathologists. Testing takes a number of hours and can be exhausting for subjects, parents and clinicians. Because of this and other factors, waiting times for diagnosis can be up to 24 months across the country, with particular difficulties in rural and remote areas. [38]

In initial consultations, screening tools such as the Autism Behavior Checklist (ABC), Checklist for Autism in Toddlers (CHAT), Modified Checklist for Autism in Toddlers (M-CHAT), Childhood Autism Rating Scale (CARS) and Gilliam Autism Rating Scale (GARS) may be used. However, for a diagnosis, the Autism Diagnostic Interview-Revised (ADI-R) and Autism Diagnostic Observation Schedule (ADOS) are used. [39,40]

Differential diagnoses and comorbidities

There is no single test for ASD and there are no unique physical attributes. For this reason differential diagnoses such as hearing and specific language impairments, mutism, environmental conditions such as neglect and abuse, and attachment and conduct disorders need to be excluded. Disorders similar to ASD include Social Communication Disorder and Social Anxiety Disorder. In the latter, communication is preserved but a degree of social phobia persists.

Since ASD diagnoses are made according to a description of a set of behaviours rather than a developmental abnormality or genetic condition, it is not uncommon to find a diagnosis of ASD comorbid with another pre-existing condition. For example, approximately 20% of children with Down syndrome meet the diagnostic criteria for ASD. [41] Theoretically, all children with genetic abnormalities such as Angelman syndrome or Rett syndrome would also meet criteria for a diagnosis of ASD. In the event of an existing condition, a diagnosis of ASD may also be warranted in order to guide the child’s behavioural management and education.

ASD often co-occurs with another developmental, psychiatric, neurologic, or medical diagnosis. [42] The co-occurrence of one or more non-ASD developmental diagnoses with ASD is approximately 80%. Co-morbidities often occur with attention deficit hyperactivity disorder, Tourette syndrome, anxiety disorders and dyspraxia. Although intellectual disability (ID; intelligence quotient ≤70) is a common comorbidity, the majority (~60%) of children with ASD do not have an ID. [33] Some individuals with an intellectual disability are likely to always remain dependent on health care services.

There is an increased risk of epilepsy in individuals with ASD. However, the increased co-morbidity of epilepsy is strongly linked to ID – 24% in those with ASD and ID, and 2% with ASD only. [43] There is no evidence that a particular epileptic disorder can be attributed to ASD (or vice versa). [44] The more common presentations include late infantile spasms, partial complex epilepsies and forms of Landau-Kleffner syndrome. Mutations in the tuberous sclerosis genes are particularly associated with ASD and epilepsy. [45]

Gastrointestinal disorders (GID) are a common complication in ASD. [46] Given that some “cognitive” genes of the brain are also expressed in the enteric nervous system, visceral sensitivity, myogenic reflexes or even CNS integration of visceral input may be altered in genetically susceptible individuals. Language impairments may be associated with toilet training difficulties, which can lead to constipation with overflow incontinence and soiling. However, medications and diet are not significantly associated with GID in individuals with ASD. [47]

What treatments are available?

At this point in time, it is thought that biological changes affect the function or structure of the brain over time, leading to different developmental, psychological and behavioural trajectories. There is no cure for ASD, but early intensive behavioural intervention (based on Applied Behaviour Analysis) has some success in promoting learning and independent living. [48] This intervention aims to address the core deficits of ASD in a structured, predictable setting with a low student-teacher ratio (initially 1:1). It uses behavioural systems to promote generalisation and maintenance, encourages family involvement and monitors progress over time. There is some evidence to suggest that participation in social skills groups also improves social interaction. [49] Other intervention strategies include pedagogical approaches that match faces and actions to meanings.

The Australian Government has made available a support package called Helping Children with Autism. The package includes information, workshops and financial assistance for early intervention services.

At present, pharmacological interventions target some of the symptoms associated with ASD; these include serotonin reuptake inhibitors, anti-psychotics, anti-epileptics, mood stabilisers and other medications used to treat hyperactivity, aggression and sleep disruption. Given the notable side effects of these pharmacotherapies, new-generation compounds continue to be tested. There are high rates of complementary and alternative diet use in children with ASD, but a lack of rigorous studies means that the evidence for efficacy is poor. [50,51]

Guidelines on dealing with children with ASD

As medical students and interns, there may be opportunities to participate in a diagnostic clinic or therapy session. Community placements also provide insight into understanding special education interventions, respite and support. However, most interactions would occur during resident training or paediatric rotations. In these situations students will be observing or assisting specialists.

Children with ASD are only likely to be admitted into hospital with a medical problem distinct from their behavioural features. Nevertheless, children with ASD are twice as likely to become inpatients. [52] The more common reasons for hospital admission and general practice visits are seizures, sleep disturbances and constipation. [53]

Parents may describe their child as high-functioning, which tends to imply an IQ >70. This doesn’t usually reflect the social capacity of the child. Ultimately, as with other children, it is important to know the child’s strengths and weaknesses; for example, whether they respond more to visual or verbal communication. Most considerations when working generally with typically developing children also apply to children with ASD and developmental disorders. Suggestions for interacting with individuals with ASD can be found in the Table.

Ultimately, by engaging the parents and working with the child’s strengths, most issues can be resolved. Even for experienced clinicians, interactions can present challenges; it helps to be patient and adaptable. In some cases it may not be worthwhile performing certain examinations if doing so would be disruptive and would not provide critical medical information. In such cases, working from the collateral history will have to suffice.

ASD workshops and training opportunities

Workshops and training opportunities for community members range from one-hour presentations to nationally certified training programs. There are currently two nationally accredited training programs: CHCCS413B ‘Support individuals with ASD,’ and CHCEDS434A ‘Provide support to students with ASD.’ Associations around Australia and New Zealand also deliver community education programs and recognized training. The Autism Centre of Excellence (Griffith University, Queensland) provides tertiary training in ASD studies. The Olga Tennison Autism Research Centre (La Trobe University, Victoria) provides behavioural intervention strategy (Early Start Denver Model) training to qualified professionals.

Conferences of note include the annual International Meeting for Autism Research, the biennial Asia Pacific Autism Conference and the Australian Society for Autism Research conferences.

Concluding remarks

Autism is a spectrum disorder. As such, each child is unique. For this reason it is best not to get caught up with the ‘label’, but to focus on the individual’s abilities or disabilities, with an understanding that simplicity, patience and adaptability may be needed. Work with the parents or the carer to achieve the desired outcomes.

Acknowledgements

Thanks to Dr. Elisa Hill-Yardin and Dr. Naomi Bishop for a critical reading of a draft of this manuscript.

Disclosures

Dr. Moldrich is Adjunct Research Fellow of The University of Queensland investigating biological causes of autism. Together with Drs. Hill-Yardin and Bishop, he is co-organiser of autism research conferences. Dr. Moldrich is on the scientific advisory board of the Foundation for Angelman Syndrome Therapeutics.

Correspondence

R Moldrich: randal.moldrich@griffithuni.edu.au

References

[1] American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. 5th ed. Arlington: American Psychiatric Association; 2013.

[2] World Health Organization. International Classification of Diseases. Tenth edition. Geneva: 2010.

[3] Huerta M, Bishop SL, Duncan A, Hus V, Lord C. Application of DSM-5 criteria for autism spectrum disorder to three samples of children with DSM-IV diagnoses of pervasive developmental disorders. Am J Psychiatry. 2012;169(10):1056-64.

[4] Tierney CD, Kurtz M, Souders H. Clear as mud: another look at autism, childhood apraxia of speech and auditory processing. Curr Opin Pediatr. 2012;24(3):394-9.

[5] Boucher J. Research review: structural language in autistic spectrum disorder – characteristics and causes. J Child Psychol Psychiatry. 2012;53(3):219-33.

[6] Kagohara DM, van der Meer L, Ramdoss S, O’Reilly MF, Lancioni GE, Davis TN, et al. Using iPods((R)) and iPads((R)) in teaching programs for individuals with developmental disabilities: a systematic review. Res Dev Disabil. 2013;34(1):147-56.

[7] Volkmar FR. Understanding the social brain in autism. Dev Psychobiol. 2011;53(5):428-34.

[8] Anckarsater H. Central nervous changes in social dysfunction: autism, aggression, and psychopathy. Brain Res Bull. 2006;69(3):259-65.

[9] Tomasello M, Carpenter M, Call J, Behne T, Moll H. Understanding and sharing intentions: the origins of cultural cognition. Behav Brain Sci. 2005;28(5):675-91.

[10] Woodard CR, Van Reet J. Object identification and imagination: an alternative to the meta-representational explanation of autism. J Autism Dev Disord. 2011;41(2):213-26.

[11] Williams JH, Whiten A, Suddendorf T, Perrett DI. Imitation, mirror neurons and autism. Neurosci Biobehav Rev. 2001;25(4):287-95.

[12] Baron-Cohen S, Ashwin E, Ashwin C, Tavassoli T, Chakrabarti B. Talent in autism: hyper-systemizing, hyper-attention to detail and sensory hypersensitivity. Philos Trans R Soc Lond B Biol Sci. 2009;364(1522):1377-83.

[13] Gomes E, Pedroso FS, Wagner MB. Auditory hypersensitivity in the autistic spectrum disorder. Pro Fono. 2008;20(4):279-84.

[14] Howlin P, Goode S, Hutton J, Rutter M. Savant skills in autism: psychometric approaches and parental reports. Philos Trans R Soc Lond B Biol Sci. 2009;364(1522):1359-67.

[15] Betancur C. Etiological heterogeneity in autism spectrum disorders: more than 100 genetic and genomic disorders and still counting. Brain Res. 2011;1380:42-77.

[16] Hallmayer J, Cleveland S, Torres A, Phillips J, Cohen B, Torigoe T, et al. Genetic heritability and shared environmental factors among twin pairs with autism. Arch Gen Psychiatry. 2011;68(11):1095-102.

[17] Ozonoff S, Young GS, Carter A, Messinger D, Yirmiya N, Zwaigenbaum L, et al. Recurrence risk for autism spectrum disorders: a Baby Siblings Research Consortium study. Pediatrics. 2011;128(3):e488-95.

[18] Reichenberg A, Gross R, Weiser M, Bresnahan M, Silverman J, Harlap S, et al. Advancing paternal age and autism. Arch Gen Psychiatry. 2006;63(9):1026-32.

[19] Levy D, Ronemus M, Yamrom B, Lee YH, Leotta A, Kendall J, et al. Rare de novo and transmitted copy-number variation in autistic spectrum disorders. Neuron. 2011;70(5):886-97.

[20] Flatscher-Bader T, Foldi CJ, Chong S, Whitelaw E, Moser RJ, Burne TH, et al. Increased de novo copy number variants in the offspring of older males. Transl Psychiatry. 2011; 1:e34.

[21] Mann JR, McDermott S, Bao H, Hardin J, Gregg A. Pre-eclampsia, birth weight, and autism spectrum disorders. J Autism Develop Disorders. 2010;40(5):548-54.

[22] Nadebaum C, Anderson V, Vajda F, Reutens D, Wood A. Neurobehavioral consequences of prenatal antiepileptic drug exposure. Developmental Neuropsychology. 2012;37(1):1-29.

[23] Demicheli V, Rivetti A, Debalini MG, Di Pietrantonj C. Vaccines for measles, mumps and rubella in children. Cochrane Database Syst Rev. 2012;2:CD004407.

[24] Constantino JN. Social Impairment. In: Hollander E, Kolevzon A, Coyle JT, editors. Textbook of Autism Spectrum Disorders. Arlington, VA, USA: American Psychiatric Publishing; 2011. p. 627.

[25] Baron-Cohen S, Leslie AM, Frith U. Does the autistic child have a “theory of mind”? Cognition. 1985;21(1):37-46.

[26] Hill EL. Executive dysfunction in autism. Trends in Cognitive Sciences. 2004;8(1):26-32.

[27] Scott-Van Zeeland AA, Dapretto M, Ghahremani DG, Poldrack RA, Bookheimer SY. Reward processing in autism. Autism Research. 2010;3(2):53-67.

[28] Nielsen M, Slaughter V, Dissanayake C. Object-directed imitation in children with high-functioning autism: testing the social motivation hypothesis. Autism Research. 2013;6(1):23-32.

[29] Sassa Y, Sugiura M, Jeong H, Horie K, Sato S, Kawashima R. Cortical mechanism of communicative speech production. NeuroImage. 2007;37(3):985-92.

[30] Happe FG. Studying weak central coherence at low levels: children with autism do not succumb to visual illusions. A research note. J Child Psychol Psychiatry. 1996;37(7):873-7.

[31] Blumberg SJ, Bramlett MD, Kogan MD, Schieve LA, Jones JR. Changes in Prevalence of Parent-reported Autism Spectrum Disorder in School-aged U.S. Children: 2007 to 2011–2012. Hyattsville: Centers for Disease Control and Prevention; 2013 March 20. Report No.: 65.

[32] Williams K, MacDermott S, Ridley G, Glasson EJ, Wray JA. The prevalence of autism in Australia. Can it be established from existing data? J Paediatric Child Health. 2008;44(9):504-10.

[33] Baio J. Prevalence of Autism Spectrum Disorders — Autism and Developmental Disabilities Monitoring Network, 14 Sites, United States, 2008. Atlanta, USA: Centers for Disease Control and Prevention; 2012 March 30. Report No.: 61(SS03).

[34] Fountain C, King MD, Bearman PS. Age of diagnosis for autism: individual and community factors across 10 birth cohorts. J Epidemiol Community Health. 2011;65(6):503-10.

[35] Rutter M. Incidence of autism spectrum disorders: changes over time and their meaning. Acta Paediatrica. 2005;94(1):2-15.

[36] Evans B. How autism became autism: The radical transformation of a central concept of child development in Britain. Hist Human Sci. 2013;26(3):3-31.

[37] van Daalen E, Kemner C, Dietz C, Swinkels SH, Buitelaar JK, van Engeland H. Inter-rater reliability and stability of diagnoses of autism spectrum disorder in children identified through screening at a very young age. Euro Child Adolescent Psychiatry. 2009;18(11):663-74.

[38] Australian Advisory Board on Autism Spectrum Disorders. The Prevalence of Autism in Australia. Can it be established from existing data? 2007. Available from: http://www.autismadvisoryboard.org.au.

[39] Lord C, Rutter M, DiLavore P, Risi S. Autism Diagnostic Observation Schedule (ADOS). Los Angeles: Western Psychological Services; 1999.

[40] Rutter M, Le Couteur A, Lord C. Autism Diagnostic Interview–Revised (ADI-R). Los Angeles: Western Psychological Services; 2003.

[41] Moss J, Richards C, Nelson L, Oliver C. Prevalence of autism spectrum disorder symptomatology and related behavioural characteristics in individuals with Down syndrome. Autism. 2012.

[42] Levy SE, Giarelli E, Lee LC, Schieve LA, Kirby RS, Cunniff C, et al. Autism spectrum disorder and co-occurring developmental, psychiatric, and medical conditions among children in multiple populations of the United States. J Develop Behav Pediatrics. 2010;31(4):267-75.

[43] Woolfenden S, Sarkozy V, Ridley G, Coory M, Williams K. A systematic review of two outcomes in autism spectrum disorder – epilepsy and mortality. Developmental Medicine Child Neurol. 2012;54(4):306-12.

[44] Deonna T, Roulet E. Autistic spectrum disorder: evaluating a possible contributing or causal role of epilepsy. Epilepsia. 2006;47 Suppl 2:79-82.

[45] Numis AL, Major P, Montenegro MA, Muzykewicz DA, Pulsifer MB, Thiele EA. Identification of risk factors for autism spectrum disorders in tuberous sclerosis complex. Neurology. 2011;76(11):981-7.

[46] Buie T, Campbell DB, Fuchs GJ, 3rd, Furuta GT, Levy J, Vandewater J, et al. Evaluation, diagnosis, and treatment of gastrointestinal disorders in individuals with ASDs: a consensus report. Pediatrics. 2010;125 Suppl 1:S1-18.

[47] Gorrindo P, Williams KC, Lee EB, Walker LS, McGrew SG, Levitt P. Gastrointestinal dysfunction in autism: parental report, clinical evaluation, and associated factors. Autism Research. 2012;5(2):101-8.

[48] Reichow B, Barton EE, Boyd BA, Hume K. Early intensive behavioral intervention (EIBI) for young children with autism spectrum disorders (ASD). Cochrane Database Syst Rev. 2012;10:CD009260.

[49] Reichow B, Steiner AM, Volkmar F. Social skills groups for people aged 6 to 21 with autism spectrum disorders (ASD). Cochrane Database Syst Rev. 2012;7:CD008511.

[50] Millward C, Ferriter M, Calver S, Connell-Jones G. Gluten- and casein-free diets for autistic spectrum disorder. Cochrane Database Syst Rev. 2008(2):CD003498.

[51] James S, Montgomery P, Williams K. Omega-3 fatty acids supplementation for autism spectrum disorders (ASD). Cochrane Database Syst Rev. 2011(11):CD007992.

[52] Bebbington A, Glasson E, Bourke J, de Klerk N, Leonard H. Hospitalisation rates for children with intellectual disability or autism born in Western Australia 1983-1999: a population-based cohort study. BMJ Open. 2013;3(2):10.1136/bmjopen-2012-002356.

[53] Kohane IS, McMurry A, Weber G, MacFadden D, Rappaport L, Kunkel L, et al. The co-morbidity burden of children and young adults with autism spectrum disorders. PLoS One. 2012;7(4):e33224.


Categories
Feature Articles

The B Positive Program as a model to reduce hepatitis B health disparities in high-risk communities in Australia

As the epicentre of liver cancer diagnoses in New South Wales, southwest Sydney is also home to a large number of first-generation migrants from Southeast Asia. Alarmingly, these individuals are six to twelve times more likely to be diagnosed with liver cancer than Australian-born individuals. This article aims to explore some of the challenges in diagnosing and managing hepatitis B in culturally and linguistically diverse (CALD) communities as well as to introduce the B Positive Program, a health initiative by the Cancer Council of New South Wales, as a model to address chronic hepatitis B related health issues.

Challenges in Diagnosing and Managing Hepatitis B Infection in High Risk Communities in Australia

While hepatitis B vaccination is part of the immunisation program for infants and school-aged children in Australia, the incidence of hepatitis B induced hepatocellular carcinoma (HCC), the most common form of liver cancer, continues to rise. This surge is largely attributed to chronic hepatitis B (CHB) infection amongst the migrant population from endemic areas such as Southeast Asia. [1] Alarmingly, these migrants are six to twelve times more likely to be diagnosed with liver cancer than Australian-born individuals. [2] It is estimated that 90% of individuals acquire CHB at birth through mother-to-infant transmission. [3] Thus, most individuals suffering from CHB are unaware of their status due to the insidious nature of the disease, in which individuals remain asymptomatic until late adulthood. By the time CHB sufferers present for medical attention, a significant proportion have developed advanced HCC, at which point treatment options are limited and survival rates are poor. In order to reduce the morbidity and mortality from CHB-related liver cancer, early screening, surveillance and treatment of high-risk populations while they are in the asymptomatic phase are strongly indicated.

The National Cancer Prevention Policy for Liver Cancer recommends that hepatocellular carcinoma surveillance be based on abdominal ultrasound of high-risk groups at 6-month intervals. Blood tests screening for hepatitis B antigens and antibodies are used to diagnose CHB and to identify individuals who are at high risk of hepatocellular carcinoma. If the disease is diagnosed early, antiviral agents and regular monitoring are extremely effective in preventing progression of CHB to HCC. In addition, low-grade HCC can be treated curatively by surgical resection, liver transplantation and percutaneous ablation.

Despite these guidelines and treatment options, it is well established that many individuals in culturally and linguistically diverse (CALD) communities in Australia remain undiagnosed or improperly managed due to difficulties in accessing equitable medical services. The following article aims to explore some of the challenges in diagnosing and managing hepatitis B in CALD communities, and to introduce the B Positive Program, an ongoing health initiative by the Cancer Council of New South Wales, as a model to address chronic hepatitis B related health issues.

Language Barrier and Cultural Differences

Language barriers and cultural differences are often cited as the two main challenges that adversely affect the diagnosis and management of hepatitis B infection in at-risk communities. [2,4] Due to language barriers, it is often difficult for patients to communicate with their healthcare providers unless the latter are well versed in the patient’s language. In these circumstances, an interpreter, often a member of the patient’s family or a friend, assumes the role of facilitating communication unless professional medical interpreters are available. This can be problematic because these novice interpreters might be unfamiliar with medical jargon and may misconstrue or censor the physician’s messages. [4] In addition, the patient’s confidentiality and autonomy can be compromised when family or friends are involved. A systematic review showed that patients benefit more from professional interpreters than from family or friends. [5] At present, the prevailing solution for balancing patient confidentiality and autonomy while preserving cultural traditions for patients with limited English is to consult language-concordant healthcare providers. [6]

Stigmatization

Fear of stigmatization is a legitimate concern that individuals in CALD communities may experience. For example, a Chinese study conducted by Chao et al. showed that in China, healthcare professionals reported positive hepatitis B surface antigen (HBsAg) results to 38% and 25% of patients’ employers and schools respectively. [7] Although this sort of disclosure practice is considered a breach of patient confidentiality in Australia, migrants coming from hepatitis B endemic countries may be reluctant to seek testing because of the aforementioned practices in their home countries. As a consequence, failure to screen and intervene promptly may result in chronic hepatitis B sufferers seeking health professionals as a last resort and possibly presenting with more advanced disease. Presentation at these late stages confers a worse prognosis for the patient and also increases the burden and cost to the healthcare system.

Knowledge Gaps amongst Healthcare Professionals

Avoiding disease progression depends largely on early recognition, monitoring and intervention. Unfortunately, some health professionals have unsatisfactory levels of knowledge in this area. A study conducted by Stanford University in collaboration with the Asian Liver Centre showed that 34% (n=250) of healthcare professionals in China who attended the ‘China National Conference on the Prevention and Control of Viral Hepatitis’ failed to recognise the natural history of hepatitis B infection or that a vaccine can be used as prophylaxis for individuals who are seronegative. [7]

Not surprisingly, surveys within Australia have shown similar gaps in knowledge amongst general practitioners. [8] Because of this lack of knowledge, general practitioners may neglect treatment for individuals suffering from chronic hepatitis B or make inappropriate specialist referrals for patients who are seronegative. In a system that is already overstretched, with long waiting periods, this is highly problematic. A further concern is that healthcare professionals may not recognise that effective therapies are available for chronic hepatitis B, and that hepatocellular carcinoma can be treated effectively when diagnosis, surveillance and antiviral treatment are timely. FibroScan, an accurate, noninvasive, ultrasound-based investigation, is now used regularly to assess the degree of liver scarring. All healthcare professionals should be aware that these developments in hepatitis B management have been extremely effective in monitoring the disease and preventing its progression to cirrhosis, liver failure and liver cancer amongst chronically infected individuals; the concept of the “healthy carrier” no longer holds true. Yet if healthcare practitioners are not equipped with this knowledge, the health and wellbeing of many individuals suffering from chronic hepatitis B will continue to be in jeopardy.

An added layer of complexity is the frequent use of complementary and alternative medicine within CALD communities. A 2012 study by Guirgis et al. found that conventional and complementary medical practitioners in Sydney gave conflicting advice about hepatitis B management. [9] The contradictory information patients receive can negatively affect their intentions regarding screening and management. It is therefore important to reconcile conflicting management strategies by educating not only conventional but also complementary medicine practitioners about hepatitis B screening and management, so that the two systems can co-exist and complement one another.

B Positive Program – A Program to Reduce Health Disparities in Hepatitis B Care

Given that the increased incidence of hepatocellular carcinoma is clustered within specific geographic areas and ethnic communities in Sydney, the B Positive program is a good model for reaching a vulnerable Southeast Asian audience within New South Wales. The program, spearheaded by the Cancer Council of New South Wales, employs various strategies to address access issues at the patient, community and health professional levels.

At the patient level, the program has produced numerous educational campaigns and materials to educate the at-risk population and to reduce the stigmatization of hepatitis B. One of the strengths of the program is its use of educational materials that are culturally and linguistically concordant. For example, one of the campaign posters features Andy Lau, a Chinese entertainer prominent within the Asian community, as an individual living with hepatitis B. Placing a public figure in the spotlight helps to demystify the condition for the CALD community and demonstrates to carriers that medical therapy is effective when diagnosis is timely and management is appropriate. This campaign was highly effective and won the 2013 NSW Multicultural Health Communication Award.

Another educational strategy employed by the program was the distribution of chopsticks engraved, in Vietnamese and Chinese, with the phrase “one cannot spread hepatitis B by sharing food”, to clarify the mode of transmission of the virus. The Cancer Council NSW also collaborated with Asian community-based health organisations such as CanRevive and the Australian Chinese Medical Association during outreach programs, enhancing the program’s cultural authenticity and its receptivity within the CALD community.

At the community level, the program launched a pilot project in May 2013 to engage high-risk migrant communities through a certificate course for south-west Sydney high school students called “Animating Hepatitis B”. The course involves ten weeks of lessons on hepatitis B and animation production, after which the students create short animations to deliver hepatitis B health facts to their community.

At the health professional level, the program assists general practitioners to better identify and screen patients belonging to high-risk groups. Community nurses visit medical clinics to remind general practitioners to enrol at-risk patients in screening programs, and the program encourages regular monitoring of chronic hepatitis B patients for surveillance of hepatocellular carcinoma. General practitioners are also offered hepatitis B seminars through continuing medical education programs to keep their knowledge up to date.

Conclusion

With culturally and linguistically concordant educational campaigns and outreach programs for at-risk individuals, community healthcare workers to deliver them, sustained community engagement, and continuing medical education for healthcare professionals, the prospect of reducing disparities in hepatitis B care within Australia’s CALD communities is highly positive.

Acknowledgements

Many thanks to Dr. Monica Robotin, Ms. Debbie Nguyen, Dr. Simone Strasser, Dr. Jacob George and Dr. Lilon Bandler for their guidance, mentorship and support.

Conflict of interest

None declared.

Correspondence

G Fong: gfon9247@uni.sydney.edu.au

References

[1] Robotin M, Patton Y, Kansil M, Penman A, George J. Cost of treating chronic hepatitis B: comparison of current treatment guidelines. World J Gastroenterol. 2012 Nov 14;18(42):6106-13.

[2] Bosch FX, Ribes J, Diaz M, Cleries R. Primary liver cancer: worldwide incidence and trends. Gastroenterology. 2004 Nov;127(5 Suppl 1):S5-S16.

[3] Gellert L, Jalaludin B, Levy M. Hepatocellular carcinoma in Sydney South West: late symptomatic presentation and poor outcome for most. Intern Med J. 2007 Aug;37(8):516-22.

[4] Ngo-Metzger Q, Massagli MP, Clarridge BR, Manocchia M, Davis RB, Iezzoni LI, et al. Linguistic and cultural barriers to care. J Gen Intern Med. 2003 Jan;18(1):44-52.

[5] Karliner LS, Jacobs EA, Chen AH, Mutha S. Do professional interpreters improve clinical care for patients with limited English proficiency? A systematic review of the literature. Health Serv Res. 2007 Apr;42(2):727-54.

[6] Wilson E, Chen AH, Grumbach K, Wang F, Fernandez A. Effects of limited English proficiency and physician language on health care comprehension. J Gen Intern Med. 2005 Sep;20(9):800-6.

[7] Chao J, Chang ET, So SK. Hepatitis B and liver cancer knowledge and practices among healthcare and public health professionals in China: a cross-sectional study. BMC Public Health. 2010 Feb;10:98.

[8] Guirgis M, Yan K, Bu YM, Zekry A. General practitioners’ knowledge and management of viral hepatitis in the migrant population. Intern Med J. 2012 May;42(5):497-504.

[9] Guirgis M, Nusair F, Bu YM, Yan K, Zekry AT. Barriers faced by migrants in accessing healthcare for viral hepatitis infection. Intern Med J. 2012 May;42(5):491-6.

Categories
Feature Articles

Oral Health – An important target for public policy?

Introduction

A healthy mouth is something we take for granted. We use our mouths to speak, to eat and to socialise without pain or significant embarrassment. Yet when oral disorders develop, the impacts can extend well beyond speech, chewing and swallowing to sleep, productivity, self-esteem and, consequently, quality of life. Despite the significant improvements made in oral health on a national scale over the last 20 years, there are still persistently high levels of oral disease and disability among Australians. This is most evident among Aboriginal and Torres Strait Islander peoples. This paper reviews the current medical literature concerning the overlap between oral health and Indigenous health outcomes, and considers whether oral health may represent an important target for public health policy.

Methodology

A literature review was performed through a search of The Cochrane Library, Google Scholar and Ovid Medline, as well as government databases such as those of the Australian Institute of Health and Welfare. The search terms included: ‘Indigenous health’, ‘Aboriginal and Torres Strait Islanders’, ‘oral health’, ‘dental caries’, ‘cardiovascular disease’ and ‘education’. Limits were set to include only studies published in English between 1995 and 2013.

Results

Searches using combinations of the above keywords yielded more than 100 results. Article titles and abstracts were analysed for relevance to the research question, in particular any reference to Indigenous health. Specific keyword limits such as ‘education’ and ‘Indigenous health’ were used to restrict the yield. Relevant articles were collated from the individual searches, and their bibliographies were searched for additional material. This process yielded the 17 papers reviewed here.

Discussion

Indigenous Health

In 2004-2008 the age-standardised death rate for Indigenous people was 1.8 times that of the non-Indigenous population, just one measure of the ongoing issue of Indigenous disadvantage in Australia. [1] In oral health, too, there is a wide discrepancy between the two population groups. However, the literature suggests that, in the past, Indigenous Australians actually enjoyed better oral health than non-Indigenous Australians. [2] Throughout the 19th and 20th centuries, caries was considered a “disease of affluence”, [3] whereas today it may be a better “indicator of deprivation”. [4] Foods rich in fermentable carbohydrates are now plentiful in Indigenous communities, and so is dental decay. [3] The current Indigenous health situation provides a clear example of how a non-Western society can be detrimentally affected by the introduction of a Western lifestyle, [5] and whilst it is not possible to discuss every aspect of this complex issue here, the importance of oral health in these communities requires further consideration.

The risk factors for poor oral health are the same whether or not someone is Indigenous, yet there remains a disparity in the standard of oral health between the two groups. According to the World Health Organisation (WHO), oral health means “being free of chronic mouth and facial pain…and disorders that affect the mouth and oral cavity”. [6] The ‘Oral health of Aboriginal and Torres Strait Islander children’ report, published by the Australian Institute of Health and Welfare, found that a higher percentage of Aboriginal and Torres Strait Islander children aged between four and 14 years had experienced dental caries than other Australian children. [7] The report further stated that Indigenous children aged less than five years had almost one and a half times the rate of hospitalisation for dental care of their non-Indigenous counterparts. [7] A rising trend was also demonstrated in the prevalence of caries among Indigenous children, particularly in the deciduous dentition. [7] In examining the causes of these inequalities it is important to consider current structural and social circumstances. These social determinants of health include socioeconomic status, transport and access, racism and housing; recognising that these inequalities are embedded in “a history of conflict and dispossession, loss of traditional roles…and passive welfare” gives a more accurate picture of the complicated Indigenous situation. [8]

In order to best understand the issues of Indigenous health it is also important to understand how Indigenous people themselves conceptualise health. The traditional Indigenous notion of health is holistic, encompassing everything from a person’s life, body and environment to relationships, community and law, [3] a significant overlap with the social determinants model mentioned above. Whilst a reductionist approach to medical care may be helpful in treating and managing disease, alone it is inadequate for addressing health disadvantage at a population level, where a more holistic method of interpretation is required. The relationship between oral health and systemic health illustrates an important area where population-focused medicine could potentially reduce rates of morbidity and mortality across multiple medical domains. Medical research has, for example, identified an association between cardiovascular disease and periodontal disease. [9] A large retrospective cohort study by Morrison and colleagues (1999) reported an association between poor dental health and an increased incidence of fatal coronary heart disease. [10] The relationship was assessed using Poisson regression, with results adjusted for age, sex, diabetes status, serum total cholesterol, smoking and hypertensive status. [10] Rate ratios of 2.15 (95% confidence interval (CI): 1.25-3.72) and 1.90 (95% CI: 1.17-3.10) were observed in the gingivitis and edentulous groups respectively, supporting a positive association with fatal coronary heart disease. [10] A study by Joshipura and colleagues (2003), which followed 41,380 men who were free of cardiovascular disease and diabetes mellitus at baseline, suggested that periodontal disease and fewer teeth may also be associated with an increased risk of stroke. [11] During the follow-up period, 349 cases of ischaemic stroke were reported, and men who had 24 or fewer teeth at baseline were at higher risk of stroke than those with at least 25 teeth (hazard ratio: 1.57; 95% CI: 1.24-1.98). Furthermore, the addition of dietary factors to the model changed the hazard ratios only slightly. [11] Similar relationships have been reported linking oral infection to diabetes mellitus, low birth weight and disorders such as otitis media and delayed growth. [3] That poor oral health has been identified as a potential risk factor for so many medical conditions highlights its importance as a target in population health.

The role of education

The WHO defines health as “a state of complete physical, mental and social well being and not merely the absence of disease or infirmity”, [6] while a “population” is the “total number of people or things in a given place”. [12] Putting these two terms together orients us towards “preventing disease, prolonging life and promoting health through organised efforts and informed choices” among whole groups rather than individuals. [12] Many of the oral health problems faced by these communities share risk factors with wider general health conditions, [3] and whilst this reflects the huge amount of work still to be done, it may also be viewed as a golden opportunity to bring positive change across many domains of health. A campaign against alcohol and tobacco, for example, will have positive ramifications not only for oral health but also for general health and wellbeing. Likewise, promoting better oral hygiene through healthier eating may also help to reduce rates of obesity and type 2 diabetes mellitus.

It is also important to mention the role of education in achieving these goals, as education is often the key to gaining the power and knowledge to change one’s life. Education that creates awareness of how dental hygiene can improve all domains of life is important in empowering people at a population level. Previous studies of the oral health of Indigenous Australians in Port Augusta, South Australia, have revealed associations between low oral health literacy scores and self-reported oral health outcomes. [13-15] Studies like these have prompted calls for targeted interventions that use tailored communication and training techniques to improve oral health literacy; however, few interventions actually targeting oral health literacy in Indigenous populations exist. [16]

Conclusion

Indigenous health is a complex and often controversial topic, and there is much debate as to what needs to be done to address the gap. Oral health is an important field of health care with associations with many systemic conditions, and thus may provide an appropriate target for effective public health policy. Perhaps a fault in our current health care system is that the dental and medical fields have evolved quite separately, so many people fail to appreciate how a simple cavity can be linked to the rest of their health. [17] Even in the Medicare system today there are no provisions for preventative oral health services, with the exception that low-income earners are entitled to concessions for public dental treatment through the public hospital system. [3] Oral health is an integral aspect of general health and should therefore be an important public health goal, especially in Indigenous communities, where the high prevalence of oral disease could be reduced through population-level interventions.

Conflict of interest

None declared.

Correspondence

L Mclean: lsmcl2@student.monash.edu

References

[1] Thomson N, MacRae A, Brankovich J, Burns J, Catto M, Gray C et al. Overview of Australian Indigenous health status 2011. Perth: Australian Indigenous HealthInfoNet; 2012.

[2] Harford J, Spencer J, Roberts-Thomson K. Oral health. In: Thomson N, editor. The health of Indigenous Australians. South Melbourne: Oxford University Press; 2003. p. 313-338.

[3] Shearer M, Jamieson L. Indigenous Australians and oral health. In: Oral Health Care – Prosthodontics, Periodontology, Biology, Research and Systemic Conditions [monograph on Internet]. InTech; 2012 [cited 2012 Jun 13]. Available from: http://www.intechopen.com/books/oral-health-care-prosthodontics-periodontology-biology-research-and-systemic-conditions/indigenous-australians-and-oral-health

[4] Williams S, Jamieson L, MacRae A, Gray C. Review of Indigenous oral health [monograph on Internet]. Australian Indigenous HealthInfoNet; 2011 [cited 2012 Jun 13]. Available from: http://www.healthinfonet.ecu.edu.au/uploads/docs/oral_health_review_2011.pdf

[5] Irvine J, Kirov E, Thomson, N. The Health of Indigenous Australians. Melbourne: Oxford University Press; 2003.

[6] World Health Organisation. Oral Health: Fact sheet No. 318 [homepage on Internet]; 2007 [cited 2012 Jun 13]. Available from: http://www.who.int/mediacentre/factsheets/fs318/en/index.html

[7] Australian Institute of Health and Welfare, Dental Statistics and Research Unit: Jamieson LM, Armfield JM, Roberts-Thomson KF. Oral health of Aboriginal and Torres Strait Islander children. Canberra: Australian Institute of Health and Welfare (Dental Statistics and Research Series No. 35); 2007. AIHW cat. No. DEN 167.

[8] Banks G. Overcoming indigenous disadvantage in Australia. Address to the second OECD world forum on statistics, knowledge and policy, measuring and fostering the progress of societies. Measuring and fostering the progress of societies; 27-30 Jun 2007. Istanbul, Turkey.

[9] Demmer RT, Desvarieux M. Periodontal infections and cardiovascular disease. J Am Dent Assoc. 2008 Mar;139(3):252.

[10] Morrison HI, Ellison LF, Taylor GW. Periodontal disease and risk of fatal coronary heart and cerebrovascular diseases. J Cardiovasc Risk 1999;6(1):7-11.

[11] Joshipura KJ, Hung HC, Rimm EB, Willett WC, Ascherio A. Periodontal disease, tooth loss, and incidence of ischemic stroke. Stroke 2003;34(1):47-52.

[12] Queensland Health. Understanding Population Health [monograph on Internet]. Queensland: Queensland Health; [cited 2012 Jun 10]. Available from: http://www.health.qld.gov.au/phcareers/documents/background_paper.pdf

[13] Jamieson LM, Parker EJ, Richards L. Using qualitative methodology to inform an Indigenous owned oral health promotion initiative. Health Promot Int 2008;23:52-59.

[14] Williams SD, Parker EJ, Jamieson LM. Oral health-related quality of life among rural-dwelling Indigenous Australians. Aust Dent J 2010; 55:170-176.

[15] Parker EJ, Jamieson LM. Associations between Indigenous Australian oral health literacy and self-reported oral health outcomes. BMC Oral Health 2010; 10:3.

[16] Parker EJ, Misan G, Chong A, Mills H, Roberts-Thomson K, Horowitz A et al. An oral health literacy intervention for Indigenous adults in a rural setting in Australia. BMC Public Health 2012; 12:461.

[17] Widdop, F. Crossing divides: an ADRF perspective [monograph on Internet]. Australian Dental Association. Australia: Australian Dental Association; 2005 [cited 2012 Jun 13]. Available from: http://www.ada.org.au/app_cmslib/media/lib/0702/m47474_v1_crossingdividesanadrfperspective.pdf