Categories
Feature Articles

Trust me, I’m wearing my lanyard

The medical student’s first lanyard represents much more than a device for holding clinical identification cards – it symbolises their very identity, first as a medical student and eventually as a medical practitioner. The lanyard allows access to hospitals and a ready way to discern who’s who in a fast-paced environment. It is the magic ticket that allows us to wander hospital corridors (often aimlessly) without being questioned.

Despite this, the utility of the lanyard as a symbol of “an insider” is being questioned, with mounting evidence showing it to be a harbour for the indirect transmission of bacteria from health care staff to patients. It may be time for the lanyard, like the white coat before it, to be retired as a symbolic but potentially harmful relic of the past. This essay investigates the validity of these concerns by examining available literature and the results of a small pilot study.


Background

In May 2014 Singapore General Hospital announced a new dress policy for all staff. Hanging lanyards were banned and replaced with retractable identification card holders. Dr Ling Moi Lin, the hospital’s director of infection control, explained that the hospital aimed “to ensure that ties and lanyards do not flap around when staff examine patients, these objects can easily collect germs and bacteria – we do not want to carry them to other patients.” [1]

This hospital is not alone in its stance against hanging lanyards. The British National Health Service (NHS) Standard Infection Prevention and Control guidelines published in March 2013 list wearing neckties or lanyards during direct patient care as “bad practice”. The guidelines state that lanyards “come into contact with patients, are rarely laundered and play no part in patient care”. [2] Closer to home, the current 2013 Bare Below the Elbows campaign, a Queensland Government initiative aiming to improve the effectiveness of hand hygiene performed by health care workers, recommends that retractable (or similar) identification card holders be used in place of lanyards. [3] Other Australian states and many individual hospitals have adopted similar recommendations. [4,5]

However, some hospitals and medical schools continue to require staff and students to wear lanyards. For example, James Cook University medical students are provided with one lanyard, which must be worn in all clinical settings (whether at the medical school during clinical skills sessions or at the hospital) for the entire duration of their six-year degree. [6] The University of Queensland 2013 medical student guide for its Sunshine Coast clinical school states that students must wear their lanyards and display identification badges at all times in teaching locations. [7] This is not concordant with the current Queensland Government initiative recommendations.

The NHS Standard Infection Prevention and Control guidelines are also being breached by medical schools that require their students to wear lanyards. University College London states that lanyards are important in reminding patients who students are, and reminding clinical teachers and other professionals that they are in a teaching hospital. However, students are required to use a safety pin to attach the end of their lanyard to fixed clothing. [8] A similar policy is in place at Cardiff University, where students must wear lanyards but ensure that they are not “dangling” freely when carrying out examinations and procedures. [9] So how harmful could the humble, dangling lanyard really be?

How harmful could the lanyard be?

Each year there are around 200,000 healthcare-associated infections in Australian acute healthcare facilities. Nosocomial infections are the most common complication affecting patients in hospital. These potentially preventable adverse events cause unnecessary pain and suffering for patients and their families, prolong hospital stays and are costly to the health care system. [10]

Improving hand hygiene among healthcare workers is currently the single most effective intervention to reduce the risk of nosocomial infections in Australian hospitals. [11] The World Health Organisation guidelines on Hand Hygiene in Health Care indicate five moments when the hands must be washed. Two of these are before and after contact with a patient. [12]

Between these two crucial hand washes, several objects are frequently touched by health care staff. Objects such as doctors’ neckties [13-17], stethoscopes [18-20] and pens [21,22] have all been shown to carry pathogenic bacteria. The bacteria isolated include methicillin-resistant Staphylococcus aureus (MRSA), found on doctors’ ties [14,16] and stethoscopes. [19] Contact with these objects during an examination can result in indirect transmission: the transfer of infectious agents to a susceptible host via an intermediate object, termed a fomite.

The infectious agents must be transferred from fomites to the hands of health care practitioners before they can be spread to patients. A study published in 2013 tested the efficiency with which a pathogen on a surface transfers to a practitioner’s hand after a single contact. It isolated five known nosocomial pathogens and placed them on non-porous surfaces; after 10 seconds of contact between a finger and the surface under a known pressure, the microorganisms transferred to the finger were examined. Under high relative humidity, non-porous surfaces had a transfer efficiency of up to 79.5%. [23] This indicates that a single contact with a contaminated fomite transfers a significant number of microorganisms to the hands, which can then be transferred to patients.

Furthermore, if no regular preventative disinfection is performed, the most common nosocomial pathogens may survive or persist on inanimate surfaces for months and can therefore be a continuous source of transmission. [24] One study conducted in the United Kingdom in 2008 randomly approached 100 hospital staff and asked them to state the frequency and method by which their lanyards were washed or decontaminated. Only 27% had ever washed their lanyards, and 35% of lanyards appeared noticeably soiled. [25] This suggests that lanyards, which doctors carry with them daily, could harbour acquired infectious agents for extended periods of time.

Two recent studies have shown that lanyards do carry pathogenic bacteria. [25,26] An Australian study by Kotsanas et al. tested lanyards and identification cards for pathogenic bacteria and found that 38% of lanyards harboured them. Nearly 10% of lanyards grew MRSA, and other pathogens found included methicillin-sensitive Staphylococcus aureus, enterococci and Gram-negative bacilli. The bacterial load on lanyards was ten times greater per unit surface area than on the identification cards themselves. [26]

It has been suggested that contaminated fomites are a result of poor hand hygiene, and thus that with good hand hygiene practices wearing these objects is acceptable. It has been widely reported that nurses have far better hand hygiene habits than doctors. A recent Australian study conducted in 82 hospitals reports that nurses consistently have significantly higher levels of hand hygiene compliance. [27] If pathogenic carriage on fomites depended on hand hygiene, one might expect lanyards worn by nurses to have lower pathogenic carriage. However, Kotsanas et al. showed that although there was a difference in organism composition, there was no significant difference between total median bacterial counts isolated from nurses’ and doctors’ lanyards. [26] This suggests that the carriage of pathogens on lanyards is not solely dependent on compliance with hand hygiene protocols.

Lanyards have thus been shown to carry bacteria, which may remain on them for months regardless of hand hygiene practices, and to have high rates of transfer to the hands of practitioners. However, no studies have directly shown that their use results in increased transmission of bacteria. There are, however, studies showing bacterial transfer from neckties to patients. Lanyards are similar to neckties in that they have been shown to carry pathogenic bacteria, are made of a textile material that is rarely laundered, hang at the waistline, tend to swing and inadvertently touch patients or the practitioner’s cleansed hands, and play no direct role in patient care. [13-17]

A study in Pakistan found that bacteria collected from the lower part of neckties worn by physicians correlated with bacteria isolated from their patients’ wounds after surgical review, [17] suggesting that bacterial transmission occurred. More convincingly, a recent study by Weber et al. tested the transmission of bacteria to dummies from doctors wearing different combinations of clothing inoculated with levels of bacteria comparable to those previously reported. After a brief 2.5-minute history and examination, cultures were obtained from the dummies at three sites. The number of contaminated mock patients was six times higher, and the total colony units cultured 26 times higher, when the examiner was wearing an unsecured necktie. [28] This showed that unsecured neckties do result in greater transmission of bacteria from doctors to patients. The ties may swing to transmit bacteria directly to the patient, or to the cleansed hands of the doctor and thence to the patient. Lanyards would likely pose a similar risk.

In my clinical experience, unlike ties, lanyards are often inadvertently touched and fiddled with by medical students and doctors during the clinical examination of a patient. This can recontaminate hands with pathogens even after hand-washing procedures have been followed. Thus, because of this additional contact, lanyards potentially have a higher rate of bacterial transmission than neckties.

What did my pilot study show?

To test this theory I conducted a small observational study in which 20 James Cook University fourth-year medical students were observed during the focused examination of a volunteer posing as a patient in a simulated hospital bed setting. Twelve students conducted a focused head and neck examination whilst eight conducted an abdominal examination. The students were unaware of the nature of the study. All students observed washed their hands prior to and at the end of each clinical examination. I observed the students from when they washed their hands prior to the physical examination until their last physical contact with the patient; the mean duration was 12 minutes. During this period two things were noted: the number of times their hands made contact with their lanyards, and the number of times the lanyard made contact with the patient. Seventy per cent of the students’ lanyards touched the patient during the examination at least once (mean 2.65 times, SD = 2.99), and 95% of students touched their lanyards during the examination (mean 7.35 times, SD = 5.28).
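The summary figures reported above (means, sample standard deviations and the proportion of students with at least one contact) can be reproduced for any set of observed counts. The sketch below uses Python's statistics module; the per-student counts shown are hypothetical placeholders for illustration only, as the pilot's raw data are not published here.

```python
import statistics

# Hypothetical per-student counts of hand-to-lanyard contacts during one
# examination (illustrative only; not the pilot study's actual raw data).
hand_contacts = [3, 7, 12, 5, 0, 9, 14, 6, 8, 2]

mean = statistics.mean(hand_contacts)   # arithmetic mean of the counts
sd = statistics.stdev(hand_contacts)    # sample standard deviation (n - 1 denominator)

# Proportion of students who touched their lanyard at least once.
touched_any = sum(1 for c in hand_contacts if c > 0) / len(hand_contacts)

print(f"mean = {mean:.2f}, SD = {sd:.2f}, touched at least once = {touched_any:.0%}")
```

The same calculation applied to the lanyard-to-patient counts yields the second pair of figures quoted in the text.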

Many made contact with their lanyard as part of their introduction to the patient, holding it up to “show” that they are in fact a medical student. Some held the lanyard to their abdomen with one hand whilst examining the patient with the other to prevent it making contact with the patient. Others fiddled with the lanyard whilst talking to the patient. During hand gestures the lanyards often collided with the students’ hands, and the stethoscopes prominently displayed around their necks were often entangled with their lanyards. Contact was so frequent that some students’ default position was standing with their hands holding their lanyards; after each forced hand movement their hands returned to holding them.

It is also interesting to note that several students had attached objects such as pens, USB drives and keypads to their lanyards. The attachment of additional objects was associated with a slight increase in the number of times their hands made contact with the lanyard, but nearly doubled the number of times the lanyard made contact with the patient (2.65 vs. 4.67).

One student had a lanyard clip which fastened the end of his lanyard to his shirt. This student did not touch his lanyard once during the examination, and his lanyard did not make contact with the patient. There may thus be some benefit in following the lead of University College London and Cardiff University in enforcing the use of lanyard clips or safety pins to prevent students’ lanyards from dangling. [8,9]

This observational study adds another dimension to the argument against wearing lanyards. Like neckties, lanyards have been shown to carry pathogenic bacteria, swing to make contact with the patient, are rarely laundered, and have no direct part in patient care. This small observational study confirmed my assumption that lanyards also come into contact with examiners’ hands a significant number of times during an examination.

Role models

At some medical schools it remains standard policy that students wear a hanging lanyard, even though a growing body of evidence indicates that hanging lanyards should not be worn. These students can only dream of the day when their blue medical student lanyards are replaced with lanyards printed repeatedly with “DOCTOR”. Our role models are wearing improved, larger, better lanyards. It has been proposed that presenting up-to-date evidence-based information, with an emphasis on role modelling, should be made an educational priority to improve hand hygiene rates. [29] Research has indicated that targeting medical students may be an effective approach to raising doctors’ low compliance rates with hand hygiene procedures. [29] Clearly, teaching the role that fomites like lanyards play in the spread of nosocomial infections has not been made an educational priority, and this may be part of the reason why compliance with current hand hygiene policies regarding their use is low.

It seems contradictory that if I do not wash my hands at the start of a clinical examination I will fail, yet I could, as one student in the observational study did, touch an object shown to carry pathogenic bacteria – an object I am required to wear – 23 times and still pass. Making contact with an object shown to carry pathogenic bacteria more than once per minute of clinical examination is alarming and arguably diminishes the purpose of rigorous hand washing procedures.

Conclusion

Lanyards are an easy way to carry identification cards that identify who’s who in a fast-paced environment. However, there is a growing body of evidence indicating that they may be a harbour for the indirect transmission of infectious agents to patients. Several hand hygiene policies have been updated to encourage health professionals not to wear lanyards during direct patient care. Some medical schools have not followed these guidelines and still require students to wear lanyards. While there is no definitive link showing the transmission of an acquired infection from the tip of a medical student’s lanyard, there is very reasonable circumstantial evidence indicating that this could easily happen. Obeying current state infection prevention guidelines and swapping hanging lanyards for retractable identification card holders, or simply preventing lanyards from dangling, may help reduce nosocomial infections in Australia. It is about time that the lanyard was retired as a symbolic but potentially harmful relic of the past.

Acknowledgements

James Cook University clinical staff and fourth year medical students for allowing me to observe their clinical skills assessment.

Conflict of interest

None declared.

Correspondence

E de Jager: elzerie.dejager@my.jcu.edu.au

References

[1] Cheong K. SGH staff roll up their sleeves – under new dress code for better hygiene. The Straits Times [Internet]. 2014 May 16 [cited 2014 Jun 25]. Available from: www.straitstimes.com/news/singapore/health/story/sgh-staff-roll-their-sleeves-under-new-dress-code-better-hygiene-2014050

[2] NHS: National Health Service. CG1 Standard infection prevention and control guidelines [Internet]. 2013 Mar [cited 2014 Jun 25]. Available from: http://www.nhsprofessionals.nhs.uk/download/comms/cg1%20standard%20infection%20prevention%20and%20control%20guidelines%20v4%20march%202013.pdf

[3] Queensland Government Department of Health. Bare below the elbows [Internet]. 2013 Sep [cited 2014 Jun 24]. Available from: http://www.health.qld.gov.au/chrisp/hand_hygiene/fsheet_BBE.pdf

[4] Tasmanian Government Department of Health and Human Services. Hand hygiene policy [Internet]. 2013 Apr 1 [cited 2014 Jun 24]. Available from: http://www.dhhs.tas.gov.au/data/assets/pdf_file/0006/72393/Hand_Hygiene_Policy_2010.pdf

[5] Australian Capital Territory Government Health. Standard operating procedure 1, hand hygiene [Internet]. 2014 March [cited 2014 June 24]. Available from: http://health.act.gov.au/c/health?a=dlpubpoldoc&document=2723

[6] James Cook University School of Medicine. Medicine student lanyards [Internet]. 2014 [cited 2014 Jun 24]. Available from: https://learnjcu.jcu.edu.au/

[7] The University of Queensland Sunshine Coast Clinical School. 2013 Medical student guide [Internet]. 2013 [cited 2014 July 5]. Available from: https://my.som.uq.edu.au/mc/media/25223/sccs%20medical%20student%20guide%202013.pdf

[8] University College London Medical School. Policies and regulations, identity cards and name badges [Internet]. 2014 [cited 2014 July 5]. Available from: http://www.ucl.ac.uk/medicalschool/staff-students/general-information/a-z/#dress

[9] Cardiff University School of Medicine. Personal presentation [Internet]. 2013 Aug 22 [cited 2014 Jul 5]. Available from: http://medicine.cf.ac.uk/media/filer_public/2013/08/22/perspres.pdf

[10] Australian Government National Health and Medical Research Council. Australian guidelines for the prevention and control of infection in healthcare [Internet]. 2010 [cited 2014 Jun 24]. Available from: http://www.nhmrc.gov.au/book/australian-guidelines-prevention-and-control-infection-healthcare-2010/introduction

[11] Australian Commission on Safety and Quality in Healthcare Hand Hygiene Australia. 5 Moments for hand hygiene [Internet]. 2009 [cited 2014 July 7]. Available from: http://www.hha.org.au/UserFiles/file/Manual/ManualJuly2009v2(Nov09).pdf

[12]  World  Health  Organisation.  WHO  guidelines  on  hand  hygiene  in  health  care [Internet]. Geneva. 2009 [cited 2014 July 7]. Available from: http://whqlibdoc.who.int/publications/2009/9789241597906_eng.pdf

[13] Dixon M. Neck ties as vectors for nosocomial infection. ICM. 2000;26(2):250.

[14] Ditchburn I. Should doctors wear ties? J Hosp Infect. 2006;63(2):227-8.

[15] Lopez PJ, Ron O, Parthasarathy P, Soothill J, Spitz L. Bacterial counts from hospital doctors’ ties are higher than those from shirts. Am J Infect Control. 2009; 37(1):79-80.

[16] Bhattacharya S. Doctors’ ties harbour disease-causing germs. NewScientist.com [Internet]. 2004 May 24 [cited 2014 Jun 20]. Available from: http://www.newscientist.com/article/dn5029-doctors-ties-harbour-diseasecausing-germs.html

[17] Shabbir M, Ahmed I, Iqbal A, Najam M. Incidence of necktie as a vector in nosocomial infection. Pak J Surg. 2013;29(3):224-5.

[18]  Marinella  MA,  Pierson  C,  Chenoweth  C.  The  stethoscope.  A  potential  source  of nosocomial infection? Arch Intern Med. 1997;157(7):786-90.

[19]  Madar  R,  Novakova  E,  Baska  T.  The  role  of  non-critical health-care  tools  in  the transmission of nosocomial infections. Bratisl Med J. 2005;106(11):348-50.

[20] Lokkur PP, Nagaraj S. The prevalence of bacterial contamination of stethoscope diaphragms: a cross sectional study, among health care workers of a tertiary care hospital. Indian J Med Microbiol. 2014;32(2):201-2.

[21] French G, Rayner D, Branson M, Walsh M. Contamination of doctors’ and nurses’ pens with nosocomial pathogens. Lancet. 1998;351(9097):213.

[22] Datz C, Jungwirth A, Dusch H, Galvan G, Weiger T. What’s on doctors’ ball point pens? Lancet. 1997;350(9094):1824.

[23] Lopez GU, Gerba CP, Tamimi AH, Kitajima M, Maxwell SL, Rose JB. Transfer efficiency of bacteria and viruses from porous and nonporous fomites to fingers under different relative humidity conditions. Appl Environ Microbiol. 2013;79(18):5728-34.

[24] Kramer A, Schwebke I, Kampf G. How long do nosocomial pathogens persist on inanimate surfaces? A systematic review. BMC Infect Dis. 2006;6:130.

[25] Alexander R, Volpe NG, Catchpole C, Allen R, Cope S. Are lanyards a risk for nosocomial transmission of potentially pathogenic bacteria? J Hosp Infect. 2008;70(1):92-3.

[26] Kotsanas D, Scott C, Gillespie EE, Korman TM, Stuart RL. What’s hanging around your neck? Pathogenic bacteria on identity badges and lanyards. Med J Aust. 2008;188(1):5-8.

[27] Azim S, McLaws ML. Doctor, do you have a moment? National Hand Hygiene Initiative compliance in Australian hospitals. Med J Aust. 2014;200(9):534-7.

[28] Weber RL, Khan PD, Fader RC, Weber RA. Prospective study on the effect of shirt sleeves and ties on the transmission of bacteria to patients. J Hosp Infect. 2012;80(3):252-4.

[29] Hall L, Keane L, Mayoh S, Olesen D. Changing learning to improve practice – hand hygiene education in Queensland medical schools. Healthcare Infection. 2010;15(4):126-9.


The blind spot on Australia’s PBS: review of anti-VEGF therapy for neovascular age-related macular degeneration


Case scenario

A 72-year-old male with a two-day history of sudden blurred vision in his left eye was referred to an ophthalmologist in a regional Australian setting. On best corrected visual acuity (BCVA) testing his left eye had reduced vision (6/12-1) with metamorphopsia. Fundoscopy showed an area of swelling around the left macula, and optical coherence tomography and fundus fluorescein angiography later confirmed pigment epithelial detachment of the left macula and subfoveal choroidal neovascularisation. He was given a diagnosis of wet macular degeneration and commenced on monthly ranibizumab (Lucentis®) injections – a drug that costs the Australian health care system approximately AUD $1430 per injection and requires lifelong treatment. Recent debate has arisen regarding the optimum frequency of dosing and the necessity of this expensive drug, given the availability of a cheaper alternative.

Introduction

Age-related macular degeneration (AMD) is the leading cause of blindness in Australia. [1] It predominantly affects people aged over 50 years and impairs central vision. In Australia the cumulative incidence for those aged over 49 years is 14.1% for early AMD and 3.7% for late AMD. [1] Macular degeneration occurs in two forms. Dry (non-neovascular) macular disease comprises 90% of AMD and has a slow progression characterised by drusen deposition underneath the retinal pigment epithelium. [2] Currently there is no agreed treatment for advanced dry AMD, which is managed only by diet and lifestyle measures. [3,4] Late stages of dry macular degeneration can result in “geographic atrophy”, with progressive atrophy of the retinal pigment epithelium, choriocapillaris and photoreceptors. [2]

Wet (neovascular) macular degeneration is less common, affecting 10% of AMD patients, but causes rapid visual loss. [2] It is characterised by choroidal neovascularisation (CNV) secondary to the effects of vascular endothelial growth factor (VEGF), which causes blood vessels to grow from the choroid towards the retina. Leakage from these vessels leads to retinal oedema, haemorrhage and fibrous scarring. When the central and paracentral areas are affected it can result in loss of central vision. [2,5] Untreated, this condition can result in one to three lines of visual acuity lost on the LogMAR chart at three months and three to four lines by one year. [6] Hence visual impairment from late AMD leads to significant loss of vision and quality of life.

Currently there are three main anti-VEGF drugs available for wet macular degeneration: ranibizumab (Lucentis®), bevacizumab (Avastin®) and aflibercept (Eylea®). This feature article attempts to summarise developments in the treatment of wet macular degeneration and highlights the current controversies regarding the optimal drug and frequency of dosing in the context of cost to the Australian Pharmaceutical Benefits Scheme (PBS).

Earlier treatments for wet AMD

Neovascular (wet) AMD was largely untreatable a decade ago, but its management has been transformed over this period. [2] Initially laser photocoagulation was used in the treatment of wet AMD, with the aim of destroying the choroidal neovascular membrane by coagulation. During the 1980s the Macular Photocoagulation Study reported favourable outcomes for direct laser photocoagulation of small classic extrafoveal and juxtafoveal choroidal neovascularisation (CNV). However, the outcomes for subfoveal lesions were poor, and laser photocoagulation was limited by a lack of stabilisation of vision, recurrence rates of 50%, a 41% risk of immediate moderate visual loss, and laser-induced permanent central scotomata in subfoveal lesions. [2,7]

During the 1990s photodynamic therapy (PDT) with verteporfin was introduced. It involved a two-stage process: an intravenous infusion of verteporfin, which preferentially accumulated in the neovascular membranes, followed by activation with infrared light, which generated free radicals promoting closure of blood vessels. The TAP trial reported that the visual acuity benefits of verteporfin therapy in predominantly classic subfoveal CNV lesions were safely sustained for five years. [8] However, the mean visual change was still a 13-letter average loss for PDT compared with a 19-letter average loss for untreated controls. [2,9] In addition, photosensitivity, headaches, back pain, chorioretinal atrophy and acute visual loss were observed as adverse effects in 4% of patients. [2]

Anti-VEGF therapies

A breakthrough in treatment came during the mid-2000s with the identification of VEGF as the pathophysiological driver of choroidal neovascularisation and its associated oedema. This led to the development of the first anti-VEGF drug, pegaptanib sodium, an RNA aptamer that specifically targets VEGF-165. [10] In the VISION trial, involving 1186 patients with subfoveal AMD receiving pegaptanib injections every six weeks, 70% of patients had stabilised vision (less than three lines of vision loss) compared to 55% of sham controls; yet still only a minority of patients actually gained vision. [10]

A second anti-VEGF agent, bevacizumab (Avastin®), soon came into off-label use. Bevacizumab was initially developed by the pharmaceutical company Genentech® to inhibit tumour angiogenesis in colorectal cancer, but its mechanism of action as a full-length antibody that binds all VEGF isoforms proved to have multiple applications. Despite a lack of clinical trials to support its use in wet AMD, anecdotal evidence led ophthalmologists to use it off-label to inhibit the angiogenesis associated with wet macular degeneration. [11,12]

In 2006, however, Genentech® gained Food and Drug Administration (FDA) approval for the anti-VEGF drug ranibizumab, a fragment derived from the same molecule as bevacizumab but with a smaller molecular size to theoretically aid retinal penetration. [13] Landmark clinical trials established that ranibizumab not only prevented vision loss but also led to a significant gain in vision in almost one-third of patients. [14,15] The ANCHOR trial, involving 423 patients, compared ranibizumab dosed at 0.3 mg and 0.5 mg given monthly over two years with PDT and verteporfin given as required. This trial found that 90% of ranibizumab-treated patients achieved visual stabilisation, with a loss of <15 letters, compared to 65.7% of PDT patients. Furthermore, up to 41% of the ranibizumab-treated group actually gained >15 letters, compared to 6.3% of the PDT group. [15]

Further trials, including the MARINA, [14] PRONTO, [16] SUSTAIN [17] and PIER [18] studies, confirmed the effectiveness of ranibizumab. Despite these results and the purpose-built nature of ranibizumab for the eye, in the US and other countries where patients and health insurance companies bear the cost burden of treatment, bevacizumab (Avastin®) is more frequently used, constituting nearly 60% of injections in the US. [19] This is explained by the large cost difference between ranibizumab (USD $1593) and bevacizumab (USD $42) in the context of apparently similar efficacy. [19] The cost difference arises because one vial of bevacizumab can be fractioned by a compounding pharmacy into numerous unit doses for the eye. [20]

Given the popular off-label use of bevacizumab, the CATT trial was conducted by the US National Eye Institute to establish its efficacy. The CATT trial was a large US multicentre study involving 1208 patients randomised to receive either bevacizumab 1.25 mg or ranibizumab 0.5 mg (monthly or as needed). Its results demonstrated that monthly bevacizumab was equivalent to monthly ranibizumab (mean gain of 8.0 vs 8.5 letters on the ETDRS visual acuity chart at one year). [21] The IVAN trial, a UK multicentre randomised controlled trial (RCT) involving 628 patients, showed similar results, with a statistically insignificant mean difference in BCVA of 1.37 letters between the two drugs. [22]

Hence debate has mounted regarding the substantial cost difference in the face of apparently similar efficacy. [23] On the backdrop of this costly dilemma are three major pharmaceutical companies: Genentech®, Roche® and Novartis®. Although bevacizumab was developed in 2004 by Genentech®, the company was taken over in 2009 by the Swiss pharmaceutical giant Roche®, which is one-third owned by another pharmaceutical company, Novartis®. [24] Given that both ranibizumab and bevacizumab are produced essentially by the same pharmaceutical companies (Genentech/Roche/Novartis), there is no financial incentive for the company to seek FDA or Therapeutic Goods Administration (TGA) approval for the cheaper alternative, bevacizumab. [13,24]

Another major concern emphasised in the literature is the potentially increased rate of systemic adverse effects reported with bevacizumab. [22] The systemic half-life of bevacizumab is six days compared to 0.5 days for ranibizumab, and in theory systemic inhibition of VEGF could cause more systemic vascular events. [2] The CATT trial reported similar rates of adverse reactions (myocardial infarction, stroke and death) in both the bevacizumab and ranibizumab groups. [21] However, a meta-analysis of the CATT and IVAN data showed an increased risk of serious systemic side effects requiring hospitalisation in the bevacizumab group (24.9% vs 19.0%). Yet this finding is controversial, as most events reported were not identified in the original cancer trials involving patients receiving intravenous doses of bevacizumab (500 times the intravitreal dose). [21,22] Hence it has been questioned whether this is more attributable to chance or to imbalance in the baseline health status of participants. [2,22] An analysis of US Medicare claims demonstrated that patients treated with bevacizumab had significantly higher stroke and mortality rates than those treated with ranibizumab. [25] However, this data is inherently prone to confounding, considering that the elderly at risk of macular degeneration are likely to have risk factors for systemic vascular disease. When corrected for comorbidities there were no significant differences in outcomes between ranibizumab and bevacizumab. [23,25] It has been argued that trials to date have been underpowered to investigate adverse events with bevacizumab. Hence, until further evidence is available, the risk of systemic adverse effects favouring the use of ranibizumab over bevacizumab remains unclear. [22]

Adding to the debate regarding the optimum drug choice for AMD is the newest anti-VEGF agent, aflibercept (Eylea®), which attained FDA approval in late 2011. Aflibercept was created by the pharmaceutical companies Regeneron/Bayer® and is a novel recombinant fusion protein designed to bind all isoforms of VEGF-A, VEGF-B and placental growth factor. [20] Aflibercept has the same dispensed price as ranibizumab, at AUD $1430 per injection on the PBS. [26] The binding affinity of aflibercept for VEGF is greater than that of ranibizumab and bevacizumab, which allows a longer duration of action and hence extended dosing intervals. [27]

The VIEW 1 study, a North American multicentre RCT with 1217 patients, and the VIEW 2 study, with 1240 patients enrolled across Europe, the Middle East, Asia-Pacific and Latin America, assigned patients to one of four groups: 1) 0.5 mg aflibercept monthly, 2) 2 mg aflibercept monthly, 3) 2 mg aflibercept at two-monthly intervals after an initial 2 mg monthly for three months, or 4) ranibizumab 0.5 mg monthly. The VIEW 1 trial demonstrated that vision was maintained (defined as losing fewer than 15 ETDRS letters) in 96% of patients on 0.5 mg aflibercept monthly, 95% of patients receiving 2 mg monthly, 95% of patients on 2 mg every two months and 94% of patients on ranibizumab 0.5 mg monthly. [28] Safety profiles in both the VIEW 1 and VIEW 2 trials showed no difference between aflibercept and ranibizumab in terms of severe systemic side effects. Hence aflibercept has been regarded as equivalent in efficacy to ranibizumab, with potentially less frequent dosing.

Frequency of injections

In addition to the optimal drug of choice for AMD, the optimal frequency of injection has come into question. Given the treatment burden of regular intravitreal injections and the risk of endophthalmitis with each injection, extending treatment using “as-required” dosing is often used in clinical practice. Evidence from the integrated analysis of the VIEW trials is encouraging: aflibercept given every two months after an initial loading phase of monthly injections for three months was non-inferior to ranibizumab given monthly in stabilising visual outcomes. [28] Although the cost per injection is similar to ranibizumab, the reduced number of injections may represent significant cost savings.
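The scale of the potential saving from less frequent dosing can be illustrated with some back-of-the-envelope arithmetic. This is a sketch only: it assumes the AUD $1430 PBS price quoted above applies to both drugs, and a hypothetical first-year schedule of twelve monthly ranibizumab injections versus three monthly aflibercept loading doses followed by roughly four two-monthly injections.

```python
# Illustrative first-year drug-cost comparison per patient.
# Assumptions (ours, not trial data): AUD $1430 per injection for both drugs,
# ranibizumab given monthly, aflibercept given monthly for 3 months then
# every 2 months (about 4 further injections in the remaining 9 months).
PRICE_AUD = 1430

ranibizumab_injections = 12      # one injection per month
aflibercept_injections = 3 + 4   # 3 loading doses + ~4 two-monthly doses

ranibizumab_cost = ranibizumab_injections * PRICE_AUD  # 17160
aflibercept_cost = aflibercept_injections * PRICE_AUD  # 10010

print(ranibizumab_cost - aflibercept_cost)  # prints 7150 (AUD saved in year one)
```

Under these assumed schedules the per-injection prices are identical, yet the extended dosing interval alone accounts for a saving of roughly AUD $7000 per patient in the first year.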

A meta-analysis of the IVAN and CATT trials showed that continuous monthly treatment with ranibizumab or bevacizumab gives better visual function than discontinuous treatment, with a mean difference in BCVA at two years of -2.23 letters. [22] The pooled estimates of macular exudation as determined by optical coherence tomography (OCT) also favoured a continuous monthly regimen. However, there was an increased risk of developing new geographic atrophy of the retinal pigment epithelium (RPE) with monthly treatment compared with as-needed therapy, so the visual benefits of monthly treatment may not be maintained long-term. [22] It is unclear whether the atrophy of the RPE represents a drug effect or the natural history of AMD. Interestingly, mortality at two years was lower in the continuous group than in the discontinuous group. In relation to systemic side effects, the pooled results slightly favoured continuous therapy, although this was not statistically significant. This appears to contradict the usual dose-response framework; however, it is hypothesised that immunological sensitisation with intermittent dosing may account for it. [22]

Hence it appears that continuous therapy with bevacizumab or ranibizumab may be favourable in terms of visual outcome. In clinical practice, however, given the treatment burden for patients and their carers, the risk of rare sight-threatening endophthalmitis and the possible sustained rise in intraocular pressure with each injection, [29] the frequency of injections is often individualised based on maintenance of visual acuity and anatomic parameters of macular thickness on OCT.

Currently the “inject and extend” model is recommended, whereby after three monthly injections the treatment interval is extended to five or six weeks if the OCT shows no fluid. Depending on signs of exudation and BCVA, the interval may then be reduced or extended by one or two weeks per visit, to a maximum of ten weeks. Although there are no large prospective studies to support this, smaller studies have reported encouraging results, offering another cost-saving strategy. [30] However, given the use of the more expensive ranibizumab, it remains a costly endeavour in Australia.
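The interval logic of the “inject and extend” model can be sketched as a simple step function. This is illustrative pseudologic only, not a clinical protocol: the two-week step, four-week floor and ten-week ceiling are assumptions loosely consistent with the description above, and real decisions also weigh BCVA and clinical judgement.

```python
def next_interval_weeks(current, macula_dry, step=2, floor=4, ceiling=10):
    """One visit of an 'inject and extend' schedule (illustrative only).

    If the OCT shows no fluid, the interval to the next injection is
    extended by `step` weeks, capped at `ceiling`; if exudation recurs,
    it is shortened by `step` weeks, down to `floor` (monthly dosing).
    """
    if macula_dry:
        return min(current + step, ceiling)
    return max(current - step, floor)
```

For example, a patient who is dry at a six-week interval would be extended to eight weeks, while recurrent fluid at eight weeks would shorten the next interval back to six.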

Current Australian situation

Other practical issues play a role in the choice of anti-VEGF therapy in Australia. For instance, the subsidised cost of ranibizumab to the patient is lower than the unsubsidised full cost of bevacizumab. [13] Patients must pay between AUD $80 and $159 out-of-pocket per injection for bevacizumab, whilst ranibizumab costs the government AUD $1430 and the maximum out-of-pocket cost for the patient is around AUD $36. [26] Among ophthalmologists there is a preference for ranibizumab because of its purpose-built status for the eye, [13] and the quantity and quality of evidence for ranibizumab is greater than for bevacizumab. [29] As bevacizumab is used off-label, its use is not monitored and there is no formal surveillance; this lack of surveillance has been argued as a case for favouring the FDA-approved ranibizumab. Essentially, the dilemma faced by ophthalmologists is summarised in the statement: “I would personally be reluctant to say to my patients, ‘The best available evidence supports the use of this treatment which is funded, but are you interested in changing to an unapproved treatment [Avastin] for the sake of saving the community some money?’” [31]

Another issue in Australia is the need for bevacizumab to be altered and divided by a compounding pharmacist into a product that is suitable and safe for ocular injection. A recent cluster of infectious endophthalmitis resulting in vision loss occurred in the US due to non-compliance with recognised standards. [32] The CATT and IVAN studies had stringent quality and safety controls, with the bevacizumab repackaged in glass vials using aseptic methods. In these trials the risk of sight-threatening endophthalmitis was rare for both ranibizumab (0.04%) and bevacizumab (0.07%) injections. [21] In clinical practice, however, many compounding pharmacies may not be as tightly regulated as those in the clinical trials, so comparable inferences about safety cannot necessarily be drawn.

Conclusion

Prior to the development of anti-VEGF therapies, patients with wet macular degeneration faced a progressive and permanent decline in vision. Today the available treatments not only stabilise vision but also lead to an improvement in vision in a significant proportion of patients. Currently there are no published “head-to-head” trials comparing all three available drugs – bevacizumab, ranibizumab and aflibercept – and such a trial is warranted. In addition, further analyses of the safety concerns surrounding bevacizumab are required. Current research is focusing on improving anti-VEGF protocols to reduce injection burden and on combination therapies with photodynamic therapy or corticosteroids. [3] Topical therapies currently in the pipeline, such as pazopanib, a tyrosine kinase inhibitor that targets VEGF receptors, may offer a non-invasive option in the future. [2,33]

At present, the evidence and expert opinion are not unanimous enough to allow health policy makers to rationalise the substitution of bevacizumab for ranibizumab or aflibercept. Practical concerns in terms of FDA or TGA approval, surveillance, compounding pharmacy and safety remain major issues. In 2013, ranibizumab was the third-highest costing drug on the PBS at AUD $286.9 million, and aflibercept prescriptions cost the Australian government AUD $60.5 million per annum. [26] From a public health policy perspective, Australia has an ageing population and, with the eye health burden set only to increase, there is a need to prioritise resources. The cost–benefit analysis is not limited to AMD but applies to other indications for anti-VEGF therapy, such as diabetic macular oedema and retinal vein occlusion. Substitution of first-line treatment with bevacizumab, which has occurred elsewhere in the world, has the potential to save the PBS billions of taxpayer dollars over a few years, and its review should be considered a high priority in current health policy.

Acknowledgements

Thanks to Dr. Jack Tan (JCU adjunct lecturer (ophthalmology), MMED (OphthalSci)) for reviewing and editing this submission.

Conflict of interest

None declared.

Correspondence

M R Seneviratne: ridmee.seneviratne@my.jcu.edu.au

References

[1]  Wang  JJ,  Rochtchina  E,  Lee  AJ,  Chia  EM,  Smith  W,  Cumming  RG,  et  al.  Ten-year incidence and progression of age-related maculopathy: the blue Mountains Eye Study. Ophthalmology. 2007;114(1):92-8.

[2] Lim LS, Mitchell P, Seddon JM, Holz FG, Wong T. Age-related macular degeneration. Lancet. 2012;379(9827):1728-38.

[3] Cunnusamy K, Ufret-Vincenty R, Wang S. Next generation therapeutic solutions for age-related macular degeneration. Pharmaceutical patent analyst. 2012;1(2):193-206.

[4] Meleth AD, Wong WT, Chew EY. Treatment for atrophic macular degeneration. Current Opinion in Ophthalmology. 2011;22(3):190-3.

[5] Spilsbury K, Garrett KL, Shen WY, Constable IJ, Rakoczy PE. Overexpression of vascular endothelial growth factor (VEGF) in the retinal pigment epithelium leads to the development of choroidal neovascularization. The American Journal of Pathology. 2000;157(1):135-44.

[6] Wong TY, Chakravarthy U, Klein R, Mitchell P, Zlateva G, Buggage R, et al. The natural history  and  prognosis  of  neovascular  age-related  macular  degeneration:  a  systematic review of the literature and meta-analysis. Ophthalmology. 2008;115(1):116-26.

[7] Macular Photocoagulation Study Group. Argon laser photocoagulation for neovascular maculopathy. Five-year results from randomized clinical trials. Archives of Ophthalmology. 1991;109(8):1109-14.

[8] Kaiser PK. Verteporfin therapy of subfoveal choroidal neovascularization in age-related macular degeneration: 5-year results of two randomized clinical trials with an open-label extension: TAP report no. 8. Graefe’s Archive for Clinical and Experimental Ophthalmology. 2006;244(9):1132-42.

[9] Bressler NM. Photodynamic therapy of subfoveal choroidal neovascularization in age-related macular degeneration with verteporfin: two year results of 2 randomized clinical trials. Archives of Ophthalmology. 2001;119(2):198-207.

[10] Gragoudas ES, Adamis AP, Cunningham ET, Feinsod M, Guyer DR. Pegaptanib for neovascular age-related macular degeneration. The New England Journal of Medicine. 2004;351(27):2805-16.

[11] Madhusudhana KC, Hannan SR, Williams CP, Goverdhan SV, Rennie C, Lotery AJ, et al. Intravitreal bevacizumab (Avastin) for the treatment of choroidal neovascularization in  age-related  macular  degeneration: results  from  118  cases.  The  British  Journal  of Ophthalmology. 2007;91(12):1716-7.

[12] Rosenfeld PJ, Moshfeghi AA, Puliafito CA. Optical coherence tomography findings after  an  intravitreal  injection  of  bevacizumab (avastin)  for  neovascular  age-related macular degeneration. Ophthalmic surgery, lasers & imaging : The Official Journal of the International Society for Imaging in the Eye. 2005;36(4):331-5.

[13] Chen S. Lucentis vs Avastin: A local viewpoint, INSIGHT. 2011. Available from: http://www.visioneyeinstitute.com.au/wp-content/uploads/2013/05/Avastin-vs-Lucentis-Insight-article-Nov-2011.pdf.

[14] Rosenfeld PJ, Brown DM, Heier JS, Boyer DS, Kaiser PK, Chung CY, et al. Ranibizumab for neovascular age-related macular degeneration. The New England Journal of Medicine. 2006;355(14):1419-31.

[15] Brown DM, Michels M, Kaiser PK, Heier JS, Sy JP, Ianchulev T. Ranibizumab versus verteporfin photodynamic  therapy  for  neovascular  age-related  macular  degeneration: Two-year results of the ANCHOR study. Ophthalmology. 2009;116(1):57-65.

[16] Lalwani GA, Rosenfeld PJ, Fung AE, Dubovy SR, Michels S, Feuer W, et al. A variable-dosing  regimen  with  intravitreal  ranibizumab  for  neovascular  age-related  macular degeneration:  year  2  of  the  PrONTO  Study.  American  Journal  of  Ophthalmology. 2009;148(1):43-58.

[17] Holz FG, Amoaku W, Donate J, Guymer RH, Kellner U, Schlingemann RO, et al. Safety and efficacy of a flexible dosing regimen of ranibizumab in neovascular age-related macular degeneration: the SUSTAIN study. Ophthalmology. 2011;118(4):663-71.

[18]  Abraham  P,  Yue  H,  Wilson  L.  Randomized,  double-masked,  sham-controlled  trial of ranibizumab  for  neovascular age-related macular degeneration: PIER study year 2. American Journal of Ophthalmology. 2010;150(3):315-24.

[19] Brechner RJ, Rosenfeld PJ, Babish JD, Caplan S. Pharmacotherapy for neovascular age-related macular degeneration: an analysis of the 100% 2008 medicare fee-for-service part B claims file. American Journal of Ophthalmology. 2011;151(5):887-95.

[20] Ohr M, Kaiser PK. Aflibercept in wet age-related macular degeneration: a perspective review. Therapeutic Advances in Chronic Disease. 2012;3(4):153-61.

[21] Martin DF, Maguire MG, Ying GS, Grunwald JE, Fine SL, Jaffe GJ. Ranibizumab and bevacizumab for neovascular age-related macular degeneration. The New England Journal of Medicine. 2011;364(20):1897-908.

[22] Chakravarthy U, Harding SP, Rogers CA, Downes SM, Lotery AJ, Culliford LA, et al.Alternative treatments to inhibit VEGF in age-related choroidal neovascularisation: 2-year findings of the IVAN randomised controlled trial. Lancet. 2013;382(9900):1258-67.

[23] Aujla JS. Replacing ranibizumab with bevacizumab on the Pharmaceutical Benefits Scheme: where does the current evidence leave us? Clinical & Experimental Optometry: Journal of the Australian Optometrical Association. 2012;95(5):538-40.

[24] Seccombe M. Australia’s Billion Dollar Blind Spot. The Global Mail. 2013 June 4:5.

[25] Curtis LH, Hammill BG, Schulman KA, Cousins SW. Risks of mortality, myocardial infarction,  bleeding,  and  stroke  associated  with  therapies  for  age-related  macular degeneration. Archives of Ophthalmology. 2010;128(10):1273-9.

[26] PBS. Expenditure and prescriptions twelve months to 30 June 2013. Canberra: Pharmaceutical Policy Branch; 2012-2013 [cited 2014 Jun 4]. Available from: http://www.pbs.gov.au/statistics/2012-2013-files/expenditure-and-prescriptions-12-months-to-30-06-2013.pdf.

[27] Stewart MW, Rosenfeld PJ. Predicted biological activity of intravitreal VEGF Trap. The British Journal of Ophthalmology. 2008;92(5):667-8.

[28] Heier JS, Brown DM, Chong V, Korobelnik JF, Kaiser PK, Nguyen QD, et al. Intravitre-al aflibercept (VEGF trap-eye) in wet age-related macular degeneration. Ophthalmology. 2012;119(12):2537-48.

[29] Tseng JJ, Vance SK, Della Torre KE, Mendonca LS, Cooney MJ, Klancnik JM, et al. Sustained increased intraocular pressure related to intravitreal antivascular endothelial growth  factor  therapy  for  neovascular  age-related  macular  degeneration.  Journal  of Glaucoma. 2012;21(4):241-7.

[30] Engelbert M, Zweifel SA, Freund KB. Long-term follow-up for type 1 (subretinal pigment epithelium) neovascularization using a modified “treat and extend” dosing regimen of intravitreal antivascular endothelial growth factor therapy. Retina (Philadelphia, Pa). 2010;30(9):1368-75.

[31] McNamara S. Expensive AMD drug remains favourite. MJA InSight. 2011. Available from: https://www.mja.com.au/insight/2011/16/expensive-amd-drug-remains-favourite?0=ip_login_no_cache%3D22034246524ebb55d312462db14c89f0.

[32] Gonzalez S, Rosenfeld PJ, Stewart MW, Brown J, Murphy SP. Avastin doesn’t blind people, people blind people. American Journal of Ophthalmology. 2012;153(2):196-203.

[33] Danis R, McLaughlin MM, Tolentino M, Staurenghi G, Ye L, Xu CF, et al. Pazopanib eye drops: a randomised trial in neovascular age-related macular degeneration. The British Journal of Ophthalmology. 2014;98(2):172-8.

Categories
Feature Articles

Personal reflection: how much do we really know?

v6_i1_a23

“Hurry up with that blood pressure and pulse,” blurts the ED registrar. “And make sure to do it on both arms this time.” Before I can ask him what’s going on, he’s teleported to the next bed. Great. I’m alone again. But I don’t blame him; it’s a Saturday night, and a volunteer medical student is the least of his worries.

I fumble for what seems like an eternity with the blood pressure cuff, but eventually get it on, much to the amusement of a charge nurse eyeballing me from the nurses’ station. Recording the right arm was textbook, so now it’s just the left arm to do. I listen hard for the Korotkoff sounds, but hear nothing. I shut my eyes in a squeamish hope that it might heighten my hearing, but again nothing. I can feel the charge nurse staring; I fluster and break a cold sweat. I feel for the left radial pulse, but it repeatedly flutters away the moment I find it. I remember thinking: Gosh. Am I really that incompetent? Embarrassed, I eventually concede defeat and ask for a nurse, who tells me she’ll be there “in a minute.”

Amidst all this confusion, was John—my patient. I’d gotten so caught up with ‘Operation Blood Pressure’ that I completely forgot that he was lying there with a kind of graceful patience. I quickly apologised and introduced myself as one of the students on the team.

“It’s all right. You’re young; you’ll eventually get the hang of it… Have to start somewhere, right?” His voice had a raspy crispness to it, which was quite calming to actually hear against the dull rapture of a chaotic emergency room.

John was one of those lovely elderly people whom you immediately come to admire and respect for their warm resilience; you don’t meet too many gentlemen like John anymore. Despite his discomfort, he gave me a kind smile and reached out with his right hand to reassuringly touch mine. It was a beautifully ironic moment: there he lay in bed, and there I stood by his bedside. And for a moment, I was the patient in distress, and he was the physician offering me the reassurance I so desperately needed.

Patients teach us to be doctors. Whether it is a lesson in humility or a rare diagnostic finding, patients are the cornerstone of our ongoing clinical expertise and development; they are why we exist. The more we see, the more we learn. The more we learn, the better doctors we become. Sir William Osler was perhaps the first one to formally adopt this into modern medical education. After all, the three-year hospital residency program for training junior medicos was his idea, and is now a curriculum so widely adopted that it’s almost a rite of passage all doctors make.

But how much clinical exposure are we really getting nowadays? With the betterment of societal health, the prevalence and incidence of rarer diseases has fallen. Epidemiologically this is undoubtedly a good thing, but it sadly reduces learning opportunities for upcoming generations of doctors. Our clinical skill accrues from seeing and doing, through experiences that shape our clinical approach. Earlier this year, an African child presented with mild gut disturbances and some paralysis of his lower limbs. The case baffled three residents and a registrar, but after a quick glance from a consultant, the child was immediately diagnosed with polio (later confirmed by one of the myriad of tests the panicking residents had ordered earlier). We had all read about polio, but whether through lack of clinical exposure or the careless assumption that polio had been all but cured, we were quick to overlook it. We can only diagnose if we know what we are looking for.

It’s not surprising that preceding generations of senior doctors (and those before them) have such perceived superior clinical intellect, not just in the breadth of their clinical knowledge but in their almost Sherlock Holmes-like acuity in formulating diagnoses based primarily on history-taking and physical examination. Traditionally, textbooks assert that 90% of diagnoses should be made from the history and examination alone. Nowadays, with the advent of improving diagnostic technologies in radiology and pathology, it isn’t surprising that a number of us have replaced this fundamental skill with an apparent dependence on expensive invasive tests. In a recent study, physicians at their respective levels were assessed on their ability to identify heart murmurs and associate them with the correct cardiac problem. Out of the twelve murmurs, interns correctly identified five, senior residents six, registrars eight and consultants nine. It makes you wonder how long ago it was that physicians could identify all twelve. I remember an ambitious surgical resident saying: why bother diagnosing murmurs when you can just order an echocardiogram? And I remember the humbling answer a grandfather consultant had for him: because I’m a real doctor and I can.

As for poor John, I was still stuck getting a blood pressure for his left arm. Two hours earlier, I had responded with the ambulance to John at his home, a conscious and breathing 68 year old complaining of severe headaches and back pain. John was a war veteran who lived independently and sadly had no remaining family to care for him. He had a month’s history of worsening headaches and lumbar back pain, with associated sensory loss particularly in his lower limbs that had recently been affecting his walking. Physical exam confirmed his story and he was slightly hypotensive at 100/65 mmHg, but otherwise his ECG and vitals were generally unremarkable. He looked to be a healthy 68 year old with no significant past medical history. Funnily enough, he’d been sent home from ED earlier that same day for the same complaint, with a script for celecoxib and Nurofen and instructions to follow up with his GP. As far as we could tell, he was just another old guy with a bad headache, back pain, and possibly sciatica.

I’ll remember from this moment onwards that when a nurse says she’ll be a minute, it’s actually a metaphor for an ice age. I eventually decide to fess up to the registrar that I couldn’t do the blood pressure properly. He gives me a disappointed look, but I conclude that honesty is usually the best option in healthcare — well, at least, over pride. I remembered reading a case earlier that week about a medical student who failed to admit that he was unable to palpate the femoral and left radial pulses in a neonate, and subsequently missed an early diagnosis of a serious aortic coarctation, which was only discovered the following morning after the baby had become significantly cyanosed overnight.

Much to my relief, the registrar couldn’t find the blood pressure either and ruled it as pathologic. He disappeared to have a word with his consultant, with both of them quickly returning to the bedside to take a brief history from the patient. By that point, the nurse had finally arrived along with a couple more students and an intern. John had an audience. It was bedside teaching time.

“So apparently you’re God?” John asked the consultant, breaking the seriousness of the moment. We all simultaneously swivel our heads to face the consultant like starved seagulls, only we weren’t looking for a fried chip but craving a smart response to scribble in our notebooks.

“To them,” the consultant looks at us, “I am. As for you, I’m not sure.”

“I survived getting shot you know, during the war…it just nicked some major artery in my chest…clean shot, in the front and out the back… army docs made some stitches, and I healed up just fine by the end of the month. I’ve been fit as a fiddle since—well, at least, up until these last few months.”

The rest of the history was similar to what I’d found out earlier, but I was slightly annoyed and almost felt betrayed that he’d failed to mention this to me earlier.

The fictional TV Dr Gregory House has a saying that “everybody lies.” It’s true to an extent, but I don’t think patients do it deliberately. They generally might discount or overlook facts that are actually an essential part of the diagnostic process; they are human after all (and so are we). There are the psychiatric exceptions, but for the most part, patients do have the good faith of wanting to help us to help them get better. While sending a team of residents to break into a patient’s house is not usually the preferable choice (unless you’re Dr House), we usually try and pick up these extra clues by knowing what questions to ask and through the comfortable rapport we build with our patients as we come to understand them as a person. The trick is to do all of this in a 10 to 15 minute consult.

 

The consultant quickly did a physical exam on John. He closed his eyes as he listened to his chest. And then, a very faint smile briefly came across his face — the epiphany of a pitifully murmuring heart.

“We’re probably going to run some tests to confirm this,” he informs John before turning to us, “but I suspect we might have a case of a dissecting aorta.” Of course; why didn’t I think of that? Hindsight’s always 20-20, but I continue to kick myself for missing that murmur, and not making the (now obvious) connection.

The consultant continues to command his lackeys to request an alphabet of tests. Soon enough the CT images return and it’s evident that blood has tracked into a false lumen of the descending aorta (likely to have torn at the site where his gunshot injury had traumatised the vascular tissue decades ago). Urgent surgery was booked, a range of cardiac medications commenced, and by the time I returned from documenting the notes, there were tubes sticking out of him.

The next time I see John is after his surgery and before he was transferred to the rehabilitation unit. I treasure our final meeting.

“So I beat the odds,” John threw a beaming smile towards me. He’s a trooper — I’ll definitely give him that. Assuming his initial dissection occurred when he reported the onset of his headaches and lower back pain, he’d survived a dissecting aortic aneurysm for at least one whole month, not to mention a war before that. (Untreated, the mortality of aortic dissection is roughly 25% in the first 24 hours, 50% in the first 48 hours, 75% in the first week and 90% in the first month.)

“Yes, you definitely beat the odds.” I smile back at him with a certain amount of gained confidence. Our eyes meet briefly, and beneath the toughened exterior of this brave man is the all-too-familiar softened reservoir of unannounced fear. Finally, I extend my hand to shake his and gently squeeze it; it is the blessing of trust and reassurance he first showed me as a patient that I am now returning to him as a physician.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

E Teo: eteo@outlook.com

Categories
Original Research Articles

Adequacy of anticoagulation according to CHADS2 criteria in patients with atrial fibrillation in general practice – a retrospective cohort study

Background: Atrial fibrillation (AF) is a common arrhythmia associated with an increased risk of stroke. Strategies to reduce stroke incidence involve identification of at-risk patients using scoring systems such as the CHADS2 score (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes, prior Stroke or transient ischaemic attack) to guide pharmacological prophylaxis. Aim: The aim of this research project was to determine the prevalence and management of AF patients within the general practice (GP) setting and to assess the adequacy of anticoagulation or antiplatelet prophylaxis according to the CHADS2 score. Methods: This study was a retrospective cohort study of 100 AF patients ≥50 years conducted at a South Coast NSW medical centre over a 3-year period. Data were obtained from existing medical records. CHADS2 scores were determined at baseline, 12 months and 3 years and were compared with medications to assess whether patients were undertreated, adequately treated or over-treated according to their CHADS2 score. Results: The prevalence of AF in patients >50 years was 5.8%. At baseline, 65% of patients (n=100) were at high risk of stroke (CHADS2 score ≥2). This increased to 75.3% of patients at 12 months (n=89) and 78.4% of patients at 3 years (n=60). Adequate treatment occurred in 79.0% of patients at baseline and 83.1% and 76.7% at 12 months and 3 years, respectively. There were three instances of stroke or transient ischaemic attack during the study period. Conclusion: GPs play a critical role in the prevention of stroke in patients with AF. Adequate pharmacological intervention occurred in the majority of cases; however, identification and treatment of at-risk patients could be further improved.

v6_i1_a22a

Introduction

Atrial fibrillation (AF) is the most common cardiac arrhythmia in Australia, affecting 8% of the population over the age of 80 years. [1,2]  The morbidity and mortality associated with AF is primarily due to an increased risk of thromboembolic events such as stroke, with studies reporting up to a five-fold increase in the annual risk of stroke among patients with AF who have not received prophylaxis with either anticoagulant or antiplatelet therapies. [3,4]

It has been demonstrated that the incidence of stroke in patients with AF can be significantly reduced with the use of pharmacological agents, such as anticoagulant and antiplatelet medications including warfarin and aspirin, respectively. [5] More recently, the development of new oral anticoagulant (NOAC) medications such as dabigatran and rivaroxaban have also been approved for use in patients with AF. [6] However, several studies indicate that the use of anticoagulants and antiplatelets for the prevention of thromboembolic events is often underutilised. [7,8]  It is estimated that up to 51% of patients eligible for anticoagulant therapy do not receive it. [9]   Furthermore, an estimated 86% of patients who suffer from AF and have a subsequent stroke were not receiving adequate anticoagulation therapy following their AF diagnosis. [10]

In contrast, pharmacological treatments for stroke prophylaxis have been associated with an increased risk of intracerebral haemorrhage, particularly amongst the elderly. [11]  A study of 170 patients with AF over the age of 85 years demonstrated that the rate of haemorrhagic stroke was 2.5 times higher in those receiving anticoagulant therapy compared to controls (OR=2.5, 95% CI: 1.3-2.7). [12]  Therefore, the need to optimise the management of patients with AF in the general practice (GP) setting is of high importance for stroke prevention and requires an individualised pharmacological approach in order to achieve a balance between stroke reduction and bleeding side effects.

Consequently, validated risk stratification tools such as the CHADS2 score (Congestive heart failure, Hypertension, Age ≥75 years, Diabetes, previous Stroke or Transient Ischaemic Attack (TIA)) have been developed. By assessing co-morbidities and additional risk factors, these tools enable more accurate identification of AF patients at increased risk of stroke and guide the appropriateness of anticoagulant or antiplatelet prophylaxis to reduce the risk of thromboembolic events. [13]

The aim of this research project was to determine the prevalence of AF among patients within a GP cohort and to assess the adequacy of pharmacological stroke prophylaxis according to the CHADS2  criteria. The results of this study will enable GPs to determine whether the current management of patients with AF is adequate and whether closer follow-up of these patients needs to occur in order to minimise associated bleeding and stroke complications.

Methods

Study design and ethics

This study was a retrospective cohort study of the prevalence, patient characteristics and adequacy of anticoagulation according to the CHADS2 score in GP patients with AF over a 3-year period. The study was approved by the University of Wollongong Human Research Ethics Committee (Appendix 1, HREC 13/031).

Participants

Participants were identified by searching the practice database (Best Practice, Version 1.8.3.602, Pyefinch Software Pty Ltd) at a South Coast NSW medical centre. Search criteria included any patient (recorded as alive or deceased) aged ≥50 years who attended the practice with a recorded diagnosis of AF over a 3-year period (November 2010 – November 2013). This included both patients with long-term AF diagnosed before the study period and those newly diagnosed with AF during the study period. The total number of patients aged ≥50 years who attended the practice at least once during the same period was recorded to determine the prevalence of AF at the practice.

Exclusion Criteria

Exclusion criteria included patients <50 years of age, patients with incomplete medical records, and patients diagnosed with AF who subsequently moved from the practice during the study period.


CHADS2  score

The CHADS2 score was chosen for this study as it is a validated risk-stratification tool for patients with AF. [13-15] The scoring system assigns one point each for the presence of Congestive Heart Failure, Hypertension, Age ≥75 years or Diabetes, and two points for a history of previous Stroke or TIA. AF patients with a CHADS2 score of 0 are considered to be at low risk of a thromboembolic event (0.5-1.7% per year stroke rate); a score of 1 indicates intermediate risk (2.0% per year stroke rate); and a score ≥2 indicates high risk (4.0% per year stroke rate). [16]
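The scoring rules above are mechanical enough to express directly. The following is an illustrative sketch only (the study computed scores manually from patient records), showing the point assignments and risk bands as described:

```python
# Illustrative sketch of CHADS2 scoring as described above.
# Not the study's code; the study scored patients manually.

def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """One point each for CHF, hypertension, age >=75 and diabetes;
    two points for a previous stroke or TIA."""
    score = sum([chf, hypertension, age >= 75, diabetes])
    if prior_stroke_or_tia:
        score += 2
    return score

def risk_category(score: int) -> str:
    """Risk bands used in the study: 0 = low, 1 = intermediate, >=2 = high."""
    if score == 0:
        return "low"
    if score == 1:
        return "intermediate"
    return "high"

# Example: a 78-year-old with hypertension and diabetes,
# no CHF and no prior stroke/TIA
score = chads2_score(chf=False, hypertension=True, age=78,
                     diabetes=True, prior_stroke_or_tia=False)
print(score, risk_category(score))  # 3 high
```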

Data Search and Extraction

Patient data were manually extracted from individual patient records, coded and recorded in a spreadsheet (Microsoft Excel, 2007). Basic data including date of birth and sex were recorded. Date of AF diagnosis (assessed as the first documented episode of AF within the patient record) and co-morbidities including hypertension, congestive heart failure, diabetes, stroke or TIA were included if documented within the patient medical record. Correspondence from specialists and hospital discharge summaries were also reviewed for any diagnosis made outside the medical centre and not subsequently recorded in the medical record.

Lifestyle factors were recorded from the practice database, including alcohol use (light/moderate/heavy or none) and smoking status (non-smoker, ex-smoker or current smoker). Complications arising from pharmacological prophylaxis (including any documented bleeding or side-effects) and discontinuation of treatments were included. Individual patient visits were analysed for any documented non-compliance with medications. Where possible, cause of death was also recorded.

Adequacy of Anticoagulation

Individual CHADS2 scores were determined for each patient at baseline, 12 months and 3 years. At each of these time points, CHADS2 scores were compared to each patient's medication regime (i.e. no medication use, an anticoagulant agent or an antiplatelet agent). The use of other medications for the treatment of AF (for example, agents for rate or rhythm control) was not assessed. Patients were then classified as undertreated, adequately treated or over-treated according to the CHADS2 score obtained at baseline, 12 months and 3 years, as per the current therapeutic guidelines (Figure 1). [17]

v6_i1_a22c

Adequate treatment was defined as receiving treatment in accordance with the therapeutic guidelines. [17] Undertreated patients included those who received no treatment when an oral anticoagulant was indicated (CHADS2 score ≥2). Over-treated patients included those treated with an oral anticoagulant where it was not indicated according to the current guidelines (CHADS2 score = 0).

Statistical Analysis

Results are presented as mean ± standard deviation. A p-value of <0.05 was considered statistically significant. One-way ANOVA was used to assess between-group differences in CHADS2 scores at each time point (baseline, 12 months and 3 years). Descriptive data are presented where relevant. The prevalence of AF at the practice was calculated as: (number of patients with AF aged ≥50 years ÷ total number of patients aged ≥50 years at the practice) × 100.
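The prevalence formula reduces to simple arithmetic. In the sketch below the practice denominator is hypothetical, since the paper reports only the numerator (346) and the resulting 5.8%:

```python
# Sketch of the prevalence calculation from the Methods.
# The denominator of 6000 is hypothetical (not reported in the paper).

def af_prevalence(n_af: int, n_total: int) -> float:
    """Percentage of patients aged >=50 with AF among all patients
    aged >=50 who attended the practice in the study period."""
    return n_af / n_total * 100

# e.g. 346 AF patients out of a hypothetical 6000 attendees aged >=50
print(round(af_prevalence(346, 6000), 1))  # 5.8
```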


Results

A total of 346 patients with AF aged ≥50 years were identified. Of these, 246 were excluded (n=213 due to insufficient data within their medical records and n=33 who left the practice during the study period), leaving 100 patients for inclusion in the analysis (Figure 2). Due to the nature of the search strategy (which identified any patient with AF between November 2010 and November 2013), both newly diagnosed patients and patients with long-term AF were included in the analysis. Long-term data were therefore available for n=89 participants at 12 months and n=60 participants at 3 years. There were no statistically significant differences in age (p=0.91) or sex (p=0.86) between included and excluded participants.

v6_i1_a22d

Including all patients initially identified with AF (n=346), the overall prevalence of AF at the practice was 5.8%. Participant characteristics are presented in Table 1. The mean age at diagnosis was 74.9 ± 10.0 years, and more males (60%) than females (40%) had AF. Over half of patients had a history of smoking (57%), and hypertension was the most common co-morbidity (74%). Thirteen percent of participants were listed within the practice database as deceased.

v6_i1_a22e


At baseline, 65.0% of patients were classified as at high risk of stroke (CHADS2 score ≥2). This increased to 75.3% at 12 months and 78.4% at 3 years (Graph 1). No patients had a CHADS2 score of 6 at any of the study time points. Analysis of participants with 3-year follow-up data available (n=60) demonstrated a statistically significant increase in average CHADS2 scores between baseline and 12 months (p<0.05) and between baseline and 3 years (p<0.01). There was no statistically significant difference in CHADS2 scores between 12 months and 3 years (p=0.54).

v6_i1_a22f

Graph 2 demonstrates changes in treatment adequacy over time based on patients' initial treatment group allocation at baseline. Patients identified as undertreated at baseline trended toward adequate treatment by 3 years. Patients identified as over-treated at baseline trended toward adequate treatment more rapidly (on average by 12 months), although this difference was not statistically significant.

v6_i1_a22b

Patient pharmacological  treatments and adequacy of treatment at baseline, 12 months and 3 years are shown in Table 2.

v6_i1_a22g


There were several reported side-effects and documented instances of cessation of anticoagulant and antiplatelet therapy. A total of eight patients were non-compliant and ceased warfarin during the study period, and a further eight had warfarin ceased by their treating doctor (reason for cessation unknown). Another eight patients ceased warfarin due to side-effects: intracranial haemorrhage (n=1), gastrointestinal bleeding (n=3), haematuria (n=1) and unknown bleeding (n=3). One patient ceased aspirin due to oesophageal irritation. No other pharmacological therapies were ceased due to side-effects. Warfarin was ceased in one case for an elective surgical procedure.

A total of two patients suffered an embolic or haemorrhagic stroke and a further two patients suffered a TIA during the study period. Prior to their thromboembolic event, one patient was undertreated with aspirin (CHADS2 score = 2), one was adequately treated with clopidogrel (CHADS2 score = 1) and one was undertreated on aspirin (CHADS2 score = 3). Cause of death was unknown in six patients. No patients had stroke or TIA listed as the cause of death in their medical record.

Discussion

It has been suggested that Australian patients with AF may not be receiving optimal prophylactic anticoagulant and antiplatelet medications for the prevention of thromboembolic events. [7,8] The aims of this retrospective cohort study were to assess stroke risk and the adequacy of anticoagulation in 100 AF patients aged ≥50 years over a 3-year period in a GP setting.

Results from the current study indicate that, overall, the use of anticoagulant and antiplatelet strategies for stroke prophylaxis was appropriate in the majority of cases and consistent with published therapeutic guidelines. [17] The prevalence of AF at the practice (5.8%) was similar to other studies, which report a prevalence of AF in the GP setting of between 4-8%. [18,19] In the current study there were more males with AF than females, a trend also reported in several other studies. [15,18]

CHADS2 scores increased between baseline and 12 months and between baseline and 3 years. This increase was expected, as patients are likely to accrue additional risk factors as they age. The majority of patients at all time points were at high risk of stroke (CHADS2 score ≥2), for which warfarin or similar anticoagulation therapy is indicated.

Overall, treatment adequacy increased between baseline and 12 months (79% versus 83.1%), then decreased by 3 years (83.1% versus 76.7%). This trend likely represents aggressive management of AF at initial diagnosis, followed by a decline in optimal stroke prophylaxis as patients age, develop side-effects or become at increased risk of falls. Additionally, older patients (those >70 years) were more likely to be undertreated. This may be due to several factors, including patient non-compliance with warfarin therapy, doctor reluctance to prescribe warfarin to patients at risk of falls, and the incidence of side-effects such as bleeding. Similar causes of undertreatment of elderly patients with AF have been outlined in other studies. [20,21] In younger patients, there was a trend towards over-treatment at the time of diagnosis.

In the current study, one patient suffered an embolic stroke during the study period and two patients had a TIA. Appropriately, all three of these patients were subsequently changed to warfarin. One patient who was adequately treated on warfarin with a CHADS2 score of 1 was changed to aspirin following an intracranial haemorrhage (and consequently remained classified as adequately treated). Although these were isolated cases within the study, the life-long morbidity of stroke for these individuals is significant.

Strengths of the current study include the large number of patients and the comprehensive assessment of individual medical records for the main study outcomes of CHADS2 scores and anticoagulant or antiplatelet therapies.

There are some limitations to the current study. As data were extracted from an existing database of patient medical records (which was not kept for the purpose of conducting research), there were some instances of missing or incomplete data. However, the missing data generally related to patients' social history (such as smoking and alcohol use), which was not central to the main research aims and is unlikely to have influenced the results.

A thorough assessment of medication regimes was possible because all medication changes are automatically recorded by the Best Practice program at each visit, so the author is confident that this aspect of the data is accurate. However, some patients may have been taking over-the-counter aspirin not recorded on their medication list, and consequently may have been misclassified as 'undertreated'. An additional consideration relates to warfarin use and whether patients prescribed warfarin were within the therapeutic range; however, the assessment of multiple INR readings for each patient over a 3-year period was considered beyond the scope of this study. Only two patients at the practice had been prescribed NOACs (dabigatran) for anticoagulation, so analysis of this medication was limited.

CHADS2 scores were able to be calculated for all patients. Although most co-morbidities were well documented, there may have been some limitations in the identification of co-morbidities such as hypertension, diabetes and congestive heart failure. For example, some patients did not have a recorded diagnosis of hypertension, but a review of blood pressure readings demonstrated several high systolic readings that could have been diagnostic for hypertension. Where this occurred, patients were not considered to have hypertension or congestive heart failure and were not assigned an additional CHADS2 point.

The CHADS2 score was chosen for this study due to its simplicity and its validation for identifying patients at risk of stroke. [13-15] More recently, refinements to the CHADS2 score have led to the development of the CHA2DS2-VASc score, which assigns additional points for older age groups, female sex and vascular disease. [22] The CHA2DS2-VASc score provides a more comprehensive overview of stroke risk factors in an individual and has also been validated for determining the need for pharmacological stroke prophylaxis. Studies have shown that the CHA2DS2-VASc score is most useful for clarifying the stratification of patients in the low-intermediate stroke risk categories (i.e. determining which patients with CHADS2 scores of 0-1 are truly at low risk and do not require aspirin). [23] Because the aims of the current study were to identify patients at high risk of stroke and determine the appropriateness of their treatment, the CHA2DS2-VASc score was not utilised. However, it may provide additional clarification in the assessment of patients with low-intermediate CHADS2 scores.
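For comparison, the CHA2DS2-VASc refinement discussed above can be sketched in the same style as the CHADS2 items, adding points for vascular disease, age 65-74 and female sex (illustrative only; this score was not used in the study):

```python
# Illustrative sketch of CHA2DS2-VASc scoring: the CHADS2 items plus
# vascular disease (1), age 65-74 (1, or 2 if >=75) and female sex (1).
# Not used in the study; shown only for comparison with CHADS2.

def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 prior_stroke_or_tia: bool, vascular_disease: bool,
                 female: bool) -> int:
    score = sum([chf, hypertension, diabetes, vascular_disease, female])
    if age >= 75:
        score += 2          # age >=75 scores two points
    elif age >= 65:
        score += 1          # age 65-74 scores one point
    if prior_stroke_or_tia:
        score += 2
    return score

# A 68-year-old woman with hypertension only: CHADS2 would give 1,
# but CHA2DS2-VASc adds points for age 65-74 and female sex.
print(cha2ds2_vasc(False, True, 68, False, False, False, True))  # 3
```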

An additional consideration in this study relates to the nature of the AF suffered by patients. Although patients were included if they had a known diagnosis of AF, it is almost impossible to determine how long patients had been suffering from AF prior to their diagnosis. In addition, it was not possible to determine whether patients had paroxysmal or sustained/chronic AF. However, it has been demonstrated that there may be little difference in outcomes for patients with paroxysmal versus persistent AF, [24,25] with a large cohort study reporting no significant difference in stroke rates between paroxysmal and sustained AF (3.2% versus 3.3%, respectively). [24] It is therefore unlikely that determining paroxysmal versus sustained AF patterns would have influenced the results of the current study.

Conclusion

The results obtained from this study will allow GPs to optimise the management of patients with AF in the community setting. Although the management of patients with AF at the practice was consistent with current guidelines in the majority of cases, further improvements can be made to minimise the risk of stroke among patients with AF, especially by targeting undertreated patients. Additionally, the current study may raise awareness of the incidence of AF within the practice and the need to assess stroke risk and treat patients accordingly, especially as CHADS2 scores were rarely recorded formally at the time of diagnosis. GPs are well placed to optimise the treatment of AF and prevent strokes through treatment of co-morbidities and implementation of lifestyle interventions, such as encouraging smoking cessation and minimising alcohol use, which may further reduce the incidence of stroke and TIA in patients with AF.

Acknowledgements

The author would like to acknowledge Dr Darryl McAndrew, Dr Brett Thomson, Prof Peter McLennan, Dr Judy Mullan and Dr Sal Sanzone for their contribution to this research project.

Conflict of interest

None declared.

Correspondence

S Macleod: dm953@uowmail.edu.au

References

[1] Furberg, C, Psaty, B, Manolio, T, Gardin, J, Smith, V, Rautaharju, P. Prevalence of atrial fibrillation in elderly subjects (The Cardiovascular Health Study). Am J Cardiol. 1994; 74 (3): 236-241.

[2] Wong, C, Brooks, A, Leong, D, Roberts, K, Sanders, P. The increasing burden of atrial fibrillation compared with heart failure and myocardial infarction: A 15-year study of all hospitalizations in Australia. Arch Int Med. 2012; 172 (9): 739-741.

[3] Lip, G, Boos, C. Antithrombotic treatment in atrial fibrillation. Heart. 2006; 92 (2): 155-161.

[4] Medi, C, Hankey, G, Freedman, S. Stroke risk and antithrombotic strategies in atrial fibrillation. Stroke. 2010; 41: 2705-2713.

[5] Gould, P, Power, J, Broughton, A, Kaye, D. Review of the current management of atrial fibrillation. Exp Opin Pharmacother. 2003; 4 (11): 1889-1899.

[6] Brieger, D, Curnow, J. Anticoagulation: A GP primer on the new anticoagulants. Aust Fam Physician. 2014; 43 (5): 254-259.

[7] Gladstone, D, Bui, E, Fang, J, Laupacis, A, Lindsay, P, Tu, J, et. al. Potentially preventable strokes in high-risk patients with atrial fibrillation who are not adequately anticoagulated. Stroke. 2009; 40: 235-240.

[8] Ogilvie, I, Newton, N, Welner, S, Cowell, W, Lip, G. Underuse of oral anticoagulants in atrial fibrillation: A systematic review. Am J Med. 2010; 123 (7): 638-645.

[9] Pisters, R, Van Oostenbrugge, R, Knottnerus, I. The likelihood of decreasing strokes in atrial fibrillation patients by strict application of guidelines. Europace. 2010; 12: 779-784.

[10] Leyden, J, Kleinig, T, Newbury, J, Castles, S, Cranefield, J, Anderson, C, et al. Adelaide Stroke Incidence Study: Declining stroke rates but many preventable cardioembolic strokes. Stroke. 2013; 44: 1226-1231.

[11] Vitry, A, Roughead, E, Ramsay, E, Preiss, A, Ryan, P, Pilbert, A, et al. Major bleeding risk associated with warfarin and co-medications in the elderly population. Pharmacoepidemiol Drug Saf. 2011; 20 (10): 1057-1063.

[12] Fang, M, Chang, Y, Hylek, E, Rosand, J, Greenberg, S, Go, A, et al. Advanced age, anticoagulation intensity, and risk for intracranial hemorrhage among patients taking warfarin for atrial fibrillation. Ann Int Med. 2004; 141: 745-752.

[13] Gage B, Waterman, A, Shannon, W. Validation of clinical classification schemes for predicting stroke: Results from the National Registry of Atrial Fibrillation. JAMA. 2001; 285: 2864-2870.

[14] Khoo, C, Lip, G. Initiation and persistence on warfarin or aspirin as thromboprophylaxis in chronic atrial fibrillation in general practice. Thromb Haemost. 2008; 6: 1622-1624.

[15] Rietbrock, S, Heeley, E, Plumb, J, Van Staa, T. Chronic atrial fibrillation: Incidence, prevalence, and prediction of stroke using the congestive heart failure, hypertension, age >75, diabetes mellitus, and prior stroke or transient ischemic attack (CHADS2) risk stratification scheme. Am Heart J. 2008; 156: 57-64.

[16]  UpToDate.  Antithrombotic  therapy  to  prevent  embolization  in  atrial  fibrillation. [Internet]. 2013 [Cited 2014 Mar 9]. Available from: http://www.uptodate.com/contents/antithrombotic-therapy-to-prevent-embolization-in-atrial-fibrillation

[17] e-Therapeutic Guidelines. Prophylaxis of stroke in patients with atrial fibrillation.[Internet]. 2012 [Cited 2014 Mar 9]. Available from: http://etg.hcn.com.au/desktop/index.htm?acc=36422

[18] Fahridin, S, Charles, J, Miller, G. Atrial fibrillation in Australian general practice. Aust Fam Physician. 2007; 36.

[19] Lowres, N, Freedman, S, Redfern, J, McLachlan, A, Krass, I, Bennet, A, et al. Screening education and recognition in community pharmacies of atrial fibrillation to prevent stroke in an ambulant population aged ≥65 years (SEARCH-AF Stroke Prevention Study): A cross-sectional study protocol. BMJ. 2012; 2 (Online).


[20] Hobbs, R, Leach, I. Challenges of stroke prevention in patients with atrial fibrillation in clinical practice. Q J Med. 2011; 104: 739-746.

[21] Hickey, K. Anticoagulation management in clinical practice: Preventing stroke in patients with atrial fibrillation. Heart Lung. 2012; 41: 146-156.

[22] Van Staa, T, Setakis, E, Ditanna, G, Lane, D, Lip, G. A comparison of risk stratification schemes for stroke in 79,884 atrial fibrillation patients in general practice. Thromb Haemost. 2011; 9: 39-48.

[23] Lip, G. Atrial fibrillation and stroke prevention: Brief observations on the last decade. Expert Rev Cardiovasc Ther. 2014; 12 (4): 403-406.

[24] Hart, R, Pearce, L, Rothbart, R, McAnulty, J, Asinger, R, Halperin, J. Stroke with intermittent atrial fibrillation: Incidence and predictors during aspirin therapy. J Am Coll Cardiol. 2000; 35: 183-187.

[25] Nattel, S, Opie, L. Controversies in atrial fibrillation. Lancet. 2006; 367: 262-272.

Categories
Original Research Articles

General practitioner awareness of pharmacogenomic testing and drug metabolism activity status amongst the Black-African population in the Greater Western Sydney region

Background: Individuals of black-African background have high variability in drug metabolising enzyme polymorphisms. Consequently, unless these patients are tested for these polymorphisms, it is difficult to predict which patients may have a sub-therapeutic response to medications (such as antidepressants) or experience an adverse drug reaction. Given the increasing population of black-Africans in Australia, GPs are on the front line of this issue, especially in Greater Western Sydney (GWS), one of the country's most rapidly growing regions due to migration. Aim: To ascertain the awareness of GPs in the GWS community regarding drug metabolising enzyme polymorphisms in the black-African population and pharmacogenomic testing. Methods: A descriptive, cross-sectional study was conducted in GWS by analysing GP responses to a questionnaire consisting of closed and open-ended questions. Results: A total of 46 GPs completed the questionnaire. 79.1% of respondents were unaware of the high variability in drug metabolism enzyme activity in the black-African population, and 79.5% were unaware of pharmacogenomic testing. No respondents had ever utilised pharmacogenomic testing. Only a small proportion of GPs "always" considered a patient's genetic factors (13.9%) and enzyme metaboliser status (11.1%) in clinical practice. Preferred education media for further information included written material, direct information from other health professionals (such as pharmacists) and verbal teaching sessions. Conclusion: There was a low level of awareness of enzyme metaboliser status and pharmacogenomic testing amongst GPs in GWS. Further education through the variety of media noted in the study is recommended.

v6_i1_a21a

Introduction

Depression accounts for 13% of Australia’s total disease burden, making it an important health issue in the current context. [1] General Practitioners (GPs) are usually the first point of contact for patients seeking help for depression. [2,3] Antidepressant prescription is the most common treatment form for depression in Australia with GPs prescribing an antidepressant to treat up to 40% of all psychological problems. [2] This makes GP awareness of possible treatment resistance or adverse drug reactions (ADRs) to these medications vital.

Binder et al. [4] described pharmacogenomics as “the use of genome- wide approaches to elucidate individual differences in the outcome of drug therapy”. Detecting clinically relevant polymorphisms in genetic expression can potentially be used to identify susceptibility to ADRs. [4] This would foster the application of personalised medicine by  encouraging  an  inter-individual  approach  to  medication  and dose prescriptions based on an individual’s predicted response to medications. [4,5]

Human DNA contains genes coding for 57 cytochrome P450 (CYP) isoenzymes; these are a clinically important family of hepatic and gastrointestinal isoenzymes responsible for the metabolism of over 70% of clinically prescribed drugs. [5-10] The CYP family of enzymes is susceptible to polymorphisms as a result of genetic variations, influenced by factors such as ethnicity. [5,6,10] Research has shown that polymorphisms in certain CYP drug metabolising enzymes can result in phenotypes that class individuals as "ultrarapid metabolisers (UMs), extensive metabolisers (EMs), intermediate metabolisers (IMs) and poor metabolisers (PMs)". [6,10] These categories are clinically important as they determine whether a drug stays within the therapeutic range. Individuals with PM status may be susceptible to ADRs as a result of toxicity and, conversely, those with UM status may not receive a therapeutic effect. [5,6,10,11]

The highly polymorphic CYP enzymes CYP2C19 and CYP2D6 are known to be involved in the metabolism of antidepressants. [5,10,12] A study by Xie et al. [13] showed that for the CYP2D6 enzyme alone, allelic variations induce polymorphisms resulting in a PM phenotype in "~1%" of Asian populations, "0-5%" of Caucasians and "0-19%" of black-African populations. This large disparity in polymorphism phenotypes was reproduced in a recent study, which also showed that the variation is not exclusive to the CYP2D6 enzyme. [6] It has been reported that the incidence of ADRs among PMs treated with drugs such as antidepressants is 44%, compared to 21% in other patients. [5,14] Consequently, increased costs have been associated with the management of UM or PM patients. [5]

The black-African population in Australia, and specifically in Sydney (where GWS is one of the fastest growing regions), continues to rise through migration and humanitarian programs. [15-18] Almost 30% of Africans settling in Australia in the decade leading to 2007 did so under humanitarian programs, including under refugee status. [15-17] As refugees are at higher risk of mental health problems, including depression, due to their traumatic histories and post-migratory difficulties, GPs in GWS face increased clinical interactions with black-Africans at risk of depression. [19,20] Considering the high variability of enzyme polymorphisms in this population, pharmacogenomic testing may play a role in the primary care of these patients. We therefore conducted a study to assess GP awareness of pharmacogenomic testing and the differences in enzyme metaboliser status (drug metabolism phenotypes). We also investigated GP preferences regarding media for future education on these topics.

Methodology

Study Design and Setting

This is a descriptive, cross-sectional study. Ethics approval was granted by the Human Research Ethics Committee.

As GWS is the fastest growing region in Sydney, we focussed on particular suburbs within it (the Blacktown, Parramatta and Holroyd Local Government Areas). [17-20] Using geographical cluster sampling, a list of GP practices was identified with the aim of recruiting 50 participants.

Study tool

Data was collected using a questionnaire validated by university supervisors and designed to elicit the level of understanding and awareness among GPs. The main themes of the questionnaire involved: questions regarding basic demographic information; questions aimed at determining the level of GP awareness regarding differences in drug metabolising phenotypes and pharmacogenomic testing; and open- ended questions eliciting the preferred methods of education with respect to pharmacogenomic testing.

Data Collection

We invited 194 GPs between April and May 2014 to participate in the study. The questionnaire and participant information sheet were either given to the practice managers or to the GPs in person. Questionnaires were collected in person within the following two weeks.

Data Analysis

Data were analysed using SPSS (version 22, IBM Australia). Descriptive statistics were used to summarise findings, with p-values calculated using chi-square analysis (with Yates' correction) to compare groups. A p-value of <0.05 indicated statistical significance.
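As an illustration of the test described above (the study itself used SPSS), the Yates-corrected chi-square statistic for a 2×2 table can be computed directly; the counts below are hypothetical, not the study's data:

```python
# Pure-Python sketch of a 2x2 chi-square test with Yates' continuity
# correction, as used in the analysis above. The study used SPSS;
# the example counts here are hypothetical.

def yates_chi_square(a: int, b: int, c: int, d: int) -> float:
    """Yates-corrected chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Subtract n/2 from |ad - bc| (the continuity correction), floored at 0
    num = n * max(0.0, abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical 2x2 table: awareness (yes/no) by GP group
chi2 = yates_chi_square(19, 15, 2, 10)
print(round(chi2, 2))  # 4.03
```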

Results

The overall response rate was 23.7% (46/194). Our respondents included 27 females and 19 males. The mean number of years of experience in general practice was 13.9, and most GPs (93.4%, 43/46) had received some form of training in antidepressant prescription in the last 5 years. The number of patients of black-African background seen in the last 6 months ranged from 0 to greater than 100. Only 26.1% (12/46) of GPs reported no consultations with a patient of black-African background within this timeframe. Of the 73.9% (34/46) of GPs who had seen at least one patient from this cohort, 55.9% (19/34) had treated at least one patient for depression with antidepressants.

GPs experience of ADRs in patients of black-African background treated for depression

Of the 46 participants, 19 had treated a patient of black-African background with antidepressants; 18 of these 19 reported having identified at least one ADR (Figure 1).

v6_i1_a21d

GP awareness and consideration of drug metabolism activity status and genetic factors

Awareness amongst GPs of the different drug metabolism activity phenotypes in black-Africans was low, with 79.1% (34/43) unaware. Patients' genetic factors and enzyme metaboliser status were "always" considered by only 13.9% (5/36) and 11.1% (4/36) of GPs, respectively. There was no statistically significant difference in awareness between GPs who had treated black-African patients and those who had not (21.1% vs 13.3%, p=0.89).

GP awareness and use of pharmacogenomic testing

Awareness of methods for testing a patient's key drug metabolising enzymes (pharmacogenomic testing) was extremely low, with 79.5% (35/44) of GPs unaware of the testing methods available in Australia. Of the 20.5% (9/44) who were aware, none had utilised pharmacogenomic testing for their black-African patients. These nine GPs nominated factors that would influence their utilisation of pharmacogenomic testing in these individuals, from which three main categories of influence emerged (Table 1). When asked whether they would be more inclined to utilise pharmacogenomic testing in black-African patients who had previously experienced ADRs, 88.9% (8/9) of GPs stated that they would.

v6_i1_a21b

Preferred education media

GPs who were aware of pharmacogenomic testing were asked, through an open-ended question, how they obtained information regarding these methods. Three main categories were identified based on their responses (Table 2). All GPs were then asked to note their preferred medium of education for pharmacogenomic testing (Table 3). Multiple responses were allowed.

v6_i1_a21c

Discussion

This study showed that there is a low level of awareness regarding pharmacogenomic testing and the differences in drug metabolism phenotypes among GPs. Additionally, we identified the preferred education media for providing information to GPs (Table 3). Awareness of pharmacogenomic testing and of the differences in drug enzyme metaboliser status (phenotype) could be valuable in the clinical setting. Improved patient outcomes have been noted when doctors are able to personalise management based on information from pharmacogenomic testing,[21] with Hall-Flavin et al. [21] noting significantly improved baseline depression scores amongst patients with depression whose doctors were provided with information on pharmacogenomics.

A previous study reported that a high proportion (97.6%) of physicians agreed that differences in genetic factors play a major role in drug responses. [22] While a general awareness that genetic factors play a role in drug response may be near universal, our study specifically focussed on knowledge of differences in enzyme metaboliser status. We found that 79.1% of GPs (34/43) were unaware, with only a small number of GPs “always” considering enzyme metaboliser status (11.1%) in their management. Given the aforementioned importance of genetic factors and the potential to reduce ADRs using personalised medicine, this is an area for improvement.

When considering pharmacogenomic testing, we found 79.5% (35/44) of GPs to be unaware of testing methods. No GP had ever utilised pharmacogenomic testing; this low rate of utilisation has also been reported in several other studies. [22-24] A lack of utilisation and awareness arguably forms a barrier against the effective incorporation of personalised medicine in the primary care setting. These low figures represent a lack of education regarding pharmacogenomics and its clinical applications, an issue that has been recognised since the arrival of these testing methods. [25] McKinnon et al. [25] highlighted that this lack of education across healthcare professionals is significant enough to be considered a “barrier to the widespread uptake of pharmacogenomics”. To ameliorate the situation, the International Society of Pharmacogenomics issued recommendations in 2005 for pharmacogenomics to be incorporated into medical curricula. [26] Another contributing factor to the low utilisation of testing could be the lack of subsidised tests available through Medicare. Currently, pathology laboratories such as Douglas Hanley Moir and Healthscope do provide pharmacogenomic testing; however, this is largely at the patient’s own expense, as only two methods are subsidised by Medicare. [23,27,28]

Amongst those aware of pharmacogenomic testing, eight out of nine GPs answered that they would be more likely to utilise pharmacogenomic testing in black-African patients who had previously experienced ADRs; this is consistent with findings noted by van Puijenbroek et al. [29] Among these GPs, factors noted as potential influences on their utilisation of testing included patient factors, such as compliance and the reliability of the test, and factors affecting the clinical picture (as described in Table 1). This is consistent with studies that have also identified cost and a patient’s individual response to drugs as influential factors in a physician’s decision making. [29,30]

Considering that the majority of information regarding enzyme metabolism and pharmacogenomic testing was published in pharmacological journals, [6,8-14,30-32] much of this knowledge may not have reached GPs. In order to understand the preferred media of information for GPs, we posed open-ended questions and discovered that the majority of GPs who answered the question (32/39) would prefer information in written form (Table 3). This could be either online sources (such as guidelines, summaries, the National Prescribing Service or the Monthly Index of Medical Specialities) or peer-reviewed journal articles. Current literature also reflects this preference for GPs to gain education regarding pharmacogenomics through journal articles. [22] The other preferred medium of education was verbal teaching, peer discussions and presentations (Table 3), with specific interest in information disseminated by clinical pathology laboratories; this is also reflected in the literature. [22,29]

Strengths and limitations

A small sample size is a limitation of this study; possible contributing factors include the short period allowed for data collection and a low response rate due to GP time constraints. Strengths of the study include the use of a validated questionnaire catered to our target population and open-ended questions, which gave us further insight into GP preferences.

Implications and future research

Currently, anticoagulants provide an example of the clinical application of considering enzyme polymorphisms in patient management. [33,34] Warfarin is a particular example, where variability in INR has been associated with enzyme polymorphisms, leading to the utilisation of dosage algorithms to optimise clinical outcomes. [34] Similarly, when using antidepressants, pharmacogenomic testing could play a role in clinical decision making, with Samer et al. [5] suggesting dose reductions and serum monitoring for those with known PM status. However, as identified in our study, there is an overall lack of awareness regarding the differences in enzyme metaboliser status and the methods available for pharmacogenomic testing.

Future studies should focus on the clinical practicality of utilising these tests. Additionally, future studies should determine the effectiveness of the identified GP preferred modalities of education in raising awareness.

Conclusion

There is a low awareness among GPs regarding both the differences in enzyme metaboliser status in the black-African community, and the methods of pharmacogenomic testing.

To optimise clinical outcomes in black-African patients with depression, it  may  be  useful  to  inform  GPs  of  the  availability  and  application of pharmacogenomic testing. We have highlighted the preferred education modalities through which this may be possible.

Acknowledgements

We would like to acknowledge and thank Dr. Irina Piatkov for her support as a supervisor during this project.

Conflict of interest

None declared.

Correspondence

Y Joshi: 17239266@student.uws.edu.au

References

[1] Australian Institute of Health and Welfare. The burden of disease and injury in Australia 2003  [Internet].  2007  [cited  2014  April  25].  Available  from:  http://www.aihw.gov.au/ publication-detail/?id=6442467990

[2] Charles J, Britt H, Fahridin S, Miller G. Mental health in general practice. Aust Fam Physician. 2007;36(3):200-1.

[3] Pierce D, Gunn J. Depression in general practice: consultation duration and problem solving therapy. Aust Fam Physician. 2011;40(5):334-6.

[4]  Binder  EB,  Holsboer  F.  Pharmacogenomics  and  antidepressant  drugs.  Ann  Med. 2006;38(2):82-94.

[5] Samer CF, Lorenzini KI, Rollason V, Daali Y, Desmeules JA. Applications of CYP450 testing in the clinical setting. Mol Diagn Ther. 2013;17(3):165-84.

[6]  Alessandrini  M,  Asfaha  S,  Dodgen  MT,  Warnich  L,  Pepper  MS. Cytochrome  P450 pharmacogenetics in African populations. Drug Metab Rev. 2013;45(2):253-7.

[7] Yang X, Zhang B, Molony C, Chudin E, Hao K, Zhu J et al. Systematic genetic and genomic analysis of cytochrome P450 enzyme activities in human liver. Genome Res. 2010;20(8):1020-36.

[8] Zanger UM, Schwab M. Cytochrome P450 enzymes in drug metabolism: Regulation of gene expression, enzyme activities and impact of genetic variation. Pharmacol Therapeut. 2013;138(1):103-41.

[9]  Guengerich  FP.  Cytochrome  P450  and  chemical  toxicology.  Chem  Res  Toxicol. 2008;21(1):70-83.

[10] Ingelman-Sundberg M. Genetic polymorphisms of cytochrome P450 2D6 (CYP2D6): clinical consequences, evolutionary aspects and functional diversity. Pharmacogenomics J. 2005;5:6-13.

[11] Zhou S. Polymorphism of human cytochrome P450 2D6 and its clinical significance. Clin Pharmacokinet. 2009;48(11):689-723.

[12] Li-Wan-Po A, Girard T, Farndon P, Cooley C, Lithgow J. Pharmacogenetics of CYP2C19: functional and clinical implications of a new variant CYP2C19*17. Br J Clin Pharmacol. 2010;69(3):222-30.

[13] Xie HG, Kim RB, Wood AJJ, Stein CM. Molecular Basis of ethnic differences in drug disposition and response. Ann Rev Pharmacol Toxicol. 2001;41:815-50.

[14] Chen S, Chou WH, Blouin RA, Mao Z, Humphries LL, Meek QC et al. The cytochrome P450  2D6  (CYP2D6)  enzyme  polymorphism:  screening  costs  and  influence on  clinical outcomes in psychiatry. Clin Pharmacol Ther. 1996;60(5):522–34.

[15]  Hugo  G.  Migration  between  Africa  and  Australia:  a  demographic  perspective  – Background paper for African Australians: A review of human rights and social inclusion issues. Australian Human Rights Commission [Internet]. 2009 Dec [cited 2014 April 26]. Available  from:  https://www.humanrights.gov.au/sites/default/files/content/Africanaus/papers/Africanaus_paper_hugo.pdf

[16]  Joint  Standing  Committee  on  Foreign  Affairs,  Defence  and  Trade.  Inquiry  into Australia’s relationship with the countries of Africa [Internet]. 2011 [cited 2014 April 26]. Available  from:  http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=jfadt/africa%2009/report.htm

[17] Census 2006 – People born in Africa [Internet]. Australian Bureau of Statistics; 2008 August 20 [updated 2009 April 14; cited 2014 April 26]. Available from: http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/3416.0Main+Features32008

[18] Greater Western Sydney Economic Development Board. Some national transport and freight infrastructure priorities for Greater Western Sydney [Internet]. Infrastructure Australia; 2008 [cited 2014 April 25]. Available from: http://www.infrastructureaustralia.gov.au/public_submissions/published/files/368_greaterwesternsydneyeconomicdevelopmentboard_SUB.pdf

[19] Furler J, Kokanovic R, Dowrick C, Newton D, Gunn J, May C. Managing depression among ethnic communities: a qualitative study. Ann Fam Med. 2010;8:231-6.

[20] Robjant K, Hassan R, Katona C. Mental health implications of detaining asylum seekers: systematic review. Br J Psychiatry. 2009;194:306-12.

[21] Hall-Flavin DK, Winner JG, Allen JD, Carhart JM, Proctor B, Snyder KA et al. Utility of integrated pharmacogenomic testing to support the treatment of major depressive disorder in a psychiatric outpatient setting. Pharmacogenet Genomics. 2013;23(10):535-48.

[22] Stanek EJ, Sanders CL, Taber KA, Khalid M, Patel A, Verbrugge RR et al. Adoption of pharmacogenomics testing by US physicians: results of a nationwide survey. Clin Pharmacol Ther. 2012;91(3):450-8.

[23] Sheffield LJ, Phillimore HE. Clinical use of pharmacogenomics tests in 2009. Clin Biochem Rev. 2009;30(2):55-65.

[24] Corkindale D, Ward H, McKinnon R. Low adoption of pharmacogenetic testing: an exploration and explanation of the reasons in Australia. Pers Med. 2007;4(2):191-9.

[25]  McKinnon  R,  Ward  M,  Sorich  M.  A  critical  analysis  of  barriers  to  the  clinical implementation of pharmacogenomics. Ther Clin Risk Manag. 2007;3(5):751-9.

[26]  Gurwitz  D,  Lunshof  J,  Dedoussis  G,  Flordellis  C,  Fuhr  U,  Kirchheiner  J  et  al. Pharmacogenomics      education:      International      Society      of      Pharmacogenomics recommendations for medical, pharmaceutical, and health schools deans of education. Pharmacogenomics J. 2005;5(4):221-5.

[27] Pharmacogenomics [Internet]. Healthscope Pathology; 2014 [cited 2014 October 22]. Available from: http://www.healthscopepathology.com.au/index.php/advanced pathology/pharmacogenomics/

[28] Overview of pharmacogenomic testing [Internet]. Douglas Hanley Moir Pathology; 2013 [cited 2014 October 22]. Available from: http://www.dhm.com.au/media/21900626/pharmacogenomics_brochure_2013_web.pdf

[29] van Puijenbroek E, Conemans J, van Groostheest K. Spontaneous ADR reports as a trigger for pharmacogenetic research: a prospective observational study in the Netherlands. Drug Saf. 2009;32(3):225-64.

[30] Rogausch A, Prause D, Schallenberg A, Brockmoller J, Himmel W. Patients’ and physicians’ perspectives on pharmacogenetic testing. Pharmacogenomics. 2006;7(1):49-59.

[31] Aklillu E, Persson I, Bertilsson L, Johansson I, Rodrigues F, Ingelman-Sundberg M. Frequent distribution of ultrarapid metabolizers of debrisoquine in an Ethiopian population carrying duplicated and multiduplicated functional CYP2D6 alleles. J Pharmacol Exp Ther. 1996;278(1):441-6.

[32] Bradford LD. CYP2D6 allele frequency in European Caucasians, Asians, Africans and their descendants. Pharmacogenomics. 2002;3:229-43.

[33] Cresci S, Depta JP, Lenzini PA, Li AY, Lanfear DE, Province MA et al. Cytochrome P450 gene variants, race, and mortality among clopidogrel-treated patients after acute myocardial infarction. Circ Cardiovasc Genet. 2014;7(3):277-86.

[34] Becquemont L. Evidence for a pharmacogenetic adapted dose of oral anticoagulant in routine medical practice. Eur J Clin Pharmacol. 2008;64(10):953-60.

Categories
Original Research Articles

Health literacy and patient comprehension in the pre-anaesthetics consultation

Background: The concept of health literacy and patient comprehension is important, especially in the area of patient consent for surgical procedures. This extends to the pre-admissions anaesthetic consultation, where poor patient health literacy can have an impact on the patient’s comprehension of risks. Objectives: This exploratory study aims to investigate the level of health literacy and comprehension in a population of patients attending a pre-admissions anaesthetic clinic. Methods: A cross-sectional study design was used to survey adult participants (≥18 years old) attending a regionally based pre-anaesthetics clinic. Information gathered as part of the survey included demographic information, health literacy scores (via a previously validated tool), and questions pertaining to the comprehension of their consultation. Results: In total, 51 patients participated in the study. Patients were divided into two subgroups (inadequate/marginal vs. adequate), depending on their screened level of health literacy. Those with inadequate/marginal health literacy were significantly more at risk of having inadequate comprehension than those with adequate health literacy (p = 0.01). There was no statistically significant difference between health literacy levels and a variety of demographic indicators, including education level and employment status. Conclusion: Patients with inadequate or marginal screened health literacy scores were less likely to comprehend the information provided to them as part of their pre-admissions consultation. These results suggest that screening patients for their health literacy levels may be advantageous, in that information provided can be tailored to their individual needs. Further research is, however, required.

v6_i1_a20a

Introduction

Health literacy is broadly defined by the World Health Organisation (WHO) as the “cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand, and use information in ways which promote and maintain good health”. [1] By this definition, health literacy encompasses more than health education and communication – it also addresses the underlying environmental, political, and social factors that can determine health. It is important to note that health literacy does not just encompass the ability of a patient to understand a diagnosis or make an appointment, but is also critical for good patient engagement with the medical system. This is important in an Australian context, as research suggests that up to 59% of Australians have inadequate health literacy skills. [2]


Inadequate or poor health literacy has been linked with poor health outcomes. [3,4] These poor health outcomes result from a combination of factors which include, but are not limited to: poorer health-related knowledge and comprehension; [3] difficulty understanding diagnoses and treatment recommendations; [5] inappropriate use of resources, including decreased use of preventative health measures and an increase in emergency department presentations; [4] and poor medication compliance. [6] Poor health literacy can further negatively impact older adults, who are more likely to experience poorer overall health status [7] and higher mortality rates [8,9] compared to older adults with adequate health literacy.

Pre-admissions anaesthetic clinics are used to deliver important information to patients. Consultations within these clinics aim to ensure that the patient is optimally prepared for the operation or surgical procedure by providing them with relevant and essential information. [10] A comprehensive pre-admissions anaesthetic consultation and assessment is a valuable exercise because it can reduce in-patient length of stay following the procedure/surgery, [11] as well as case cancellations and/or further delays on the day of the procedure/surgery. [12] Two of the key elements communicated to patients during the pre-anaesthetics consultation are the risks involved with the procedure/surgery and the potential risks associated with receiving anaesthetic agents. This information is typically provided to patients using both verbal and written communication strategies, [13] which can be inadequately comprehended by the patient with poor health literacy skills. [14]

A recent study conducted by Kadakia et al. [15] identified that inadequate health literacy could potentially predict poor patient comprehension of their orthopaedic injury and surgical intervention, including understanding of the risks involved with the procedure. It could be argued, therefore, that there is a rationale for screening for health literacy levels, and identifying those at risk of poor comprehension, as part of routine pre-admissions anaesthetic clinic practice. By screening and identifying these patients, additional measures could be used by the physician to ensure the optimisation of patient understanding, including understanding of potential risks associated with the procedure. However, there appears to be a paucity of evidence regarding a patient’s understanding of the pre-admissions anaesthetic consultation, and the effect of health literacy in predicting comprehension of information provided during these consultations.

This exploratory pilot study aimed to assess the level of health literacy and comprehension of health information delivered to patients attending a regional pre-admissions anaesthetic clinic.

Methods

Following human ethics approval from the University of Wollongong Human Research Ethics Committee (Ethics No. GSM13/048), this study utilised a cross-sectional survey design, which included the self-completion of an anonymous questionnaire. Upon verbal consent being given, participants aged 18 years and above were provided with a questionnaire by the clinic nursing staff, to be completed at the end of the pre-anaesthetics consultation. The clinics, run either by an anaesthetic consultant or by a qualified GP anaesthetist, were set in a regional pre-admissions anaesthetics clinic in New South Wales.

Potential participants presented to the pre-admissions anaesthetics clinic for a wide range of elective surgical procedures, including ophthalmic, ear/nose/throat (ENT), orthopaedic, and general surgical procedures. The anonymous questionnaire comprised three components. The first component gathered demographic information. The second component included the following three validated health literacy questions, [16,17] which were rated on a 5-point Likert scale:

  • How often do you have someone help you read hospital materials? (5 = ‘Never’; 4 = ‘Occasionally’; 3 = ‘Sometimes’; 2 = ‘Often’; 1 = ‘Always’).
  • How confident are you filling out medical forms by yourself? (5 = ‘Extremely’; 4 = ‘Quite a Bit’; 3 = ‘Somewhat’; 2 = ‘A Little Bit’; 1 = ‘Not At All’).
  • How often do you have problems learning about your medical condition because of difficulty understanding written information? (5 = ‘Never’; 4 = ‘Occasionally’; 3 = ‘Sometimes’; 2 = ‘Often’; 1 = ‘Always’).

These three questions were chosen based on a previously validated system for stratifying health literacy in an efficient and rapid manner. [17] In order to analyse health literacy in this patient population, participants were stratified into either adequate or inadequate/marginal health literacy. Those participants with a response of ‘Somewhat’ or ‘Sometimes’ (correlating with a Likert score of 3) or below were deemed to have either inadequate or marginal health literacy. Deficiency in one or more of the three questions was deemed sufficient to classify patients as having overall inadequate or marginal health literacy. Those above this cut-off for all three questions were deemed to have adequate health literacy.
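The stratification rule described above can be sketched in a few lines of code (an illustrative sketch only; the function and variable names are not from the study):

```python
# Classify screened health literacy from the three validated questions.
# Each response is a Likert score (1-5) as described above; a score of 3
# ('Sometimes'/'Somewhat') or below on ANY of the three questions
# classifies the participant as having inadequate/marginal health literacy.

def classify_health_literacy(likert_scores):
    """likert_scores: the three responses, each an int from 1 to 5."""
    if any(score <= 3 for score in likert_scores):
        return "inadequate/marginal"
    return "adequate"

# Example: confident filling out forms (4) but often needs help reading (2)
print(classify_health_literacy([2, 4, 5]))   # inadequate/marginal
print(classify_health_literacy([4, 5, 4]))   # adequate
```

Note that a single deficient response is sufficient for the inadequate/marginal classification, regardless of the other two scores.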

The third component of the questionnaire included seven questions about the patient’s comprehension of information provided during the pre-anaesthetics consultation, which had not been previously validated. They included a range of questions about health information commonly discussed during pre-anaesthetic consultations. Responses to the following seven questions were categorised as yes, no, or unsure:

  • Do you know what operation or procedure you are having? (Yes, No or Unsure).
  • Do  you  understand  why  you  are  having  the  operation  or procedure? (Yes, No or Unsure).
  • Do you understand the potential complications of your operation or procedure? (Yes, No or Unsure).
  • Do you understand the potential complications of the anaesthesia? (Yes, No or Unsure).
  • Do you understand where you will be after your operation or procedure? (Yes, No or Unsure).
  • Do you know what to expect after you wake up? (Yes, No or Unsure).
  • Was there one or more times during your time with the doctor where you were not sure of what he was saying? (Yes or No).

To score patient comprehension, the results of this third component of the anonymous questionnaire were tabulated and a score out of seven given. A score of 1 was given for each affirmative response to the first six questions. For the final question, a score of 1 was given if the patient understood the anaesthetist throughout the entirety of the consultation. A patient with a total score of ≥6 was deemed to have adequate comprehension of the consultation, whereas any patient with a total score <6 was deemed to have inadequate comprehension. This measure of patient comprehension was devised for this study, and is not based on any previously validated tools. Consequently, this is an exploratory study and the scoring system for comprehension will need validation in the future. Descriptive statistics were used to analyse the data. Associations between variables were analysed using chi-square analysis. [18] The level of significance was set at p < 0.05.
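The scoring rule can be expressed as a short sketch (illustrative only; the function names are not from the study — note that the seventh question is reverse-scored, since a "yes" there indicates a lapse in understanding):

```python
# Score comprehension out of 7 from the questionnaire's third component.
# Questions 1-6: 1 point per 'yes' response. Question 7 is reverse-scored:
# 1 point only if the patient never reported being unsure of what the
# doctor was saying.

def comprehension_score(answers_q1_to_q6, unsure_of_doctor):
    """answers_q1_to_q6: six responses, each 'yes', 'no' or 'unsure';
    unsure_of_doctor: True if the patient was at any point unsure of
    what the doctor was saying (question 7)."""
    score = sum(1 for a in answers_q1_to_q6 if a == "yes")
    if not unsure_of_doctor:
        score += 1
    return score

def comprehension_category(score):
    # A total score of >= 6 was deemed adequate comprehension.
    return "adequate" if score >= 6 else "inadequate"

s = comprehension_score(["yes"] * 5 + ["unsure"], unsure_of_doctor=False)
print(s, comprehension_category(s))   # 6 adequate
```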

Results

Patient Characteristics

A total of 51 responses were received from study participants between February and April 2014, with all received questionnaires completed in a satisfactory manner. The mean age of the participants was 64.8 ± 13.6 years, with ages ranging from 18 to 84 years. In the majority of cases, participants had either completed high school or had not finished high school, and 61% of the participants were not in the labour force (Table 1).

v6_i1_a20b

Health Literacy and Patient Comprehension

Of the total participants, 76% (n = 39) were deemed to have adequate health literacy, as compared to 24% (n = 12) with inadequate/marginal health literacy. In addition, the majority of the participants (n = 43; 84%) had adequate comprehension scores of the consultation, rather than inadequate comprehension scores (n = 8; 16%). When the comprehension scores are viewed within each health literacy grouping, 42% (n = 5) of those with inadequate/marginal health literacy also had inadequate comprehension. The proportion of those with inadequate comprehension was lower amongst those with adequate health literacy (n = 3; 8%). These statistics are reflected in Figure 1.

v6_i1_a20c

Analysis

Chi-square analysis demonstrated a statistically significant difference between the two health literacy groupings in relation to their comprehension of the anaesthetics consultation (p < 0.01). Chi-square analysis was also performed with regard to employment status (employed vs. unemployed/not in the labour force) and education level attained (high school or lower vs. beyond high school). These groupings were used due to the low number of participants in some groups. There was no statistically significant difference between the health literacy groups with regard to either level of education attained (p = 0.356) or employment status (p = 0.494).
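The 2 × 2 comparison of health literacy group against comprehension can be reproduced from the counts reported in the Results (5 of 12 with inadequate comprehension in the inadequate/marginal group vs. 3 of 39 in the adequate group). The sketch below computes Pearson's chi-square statistic directly, without continuity correction, so the figure may differ slightly from the one produced by whatever statistical package the study used:

```python
# Pearson chi-square for the 2x2 table of health literacy group vs.
# comprehension, using the counts reported in the Results section.
#                       inadequate comp.   adequate comp.
# inadeq./marginal HL          5                 7        (n = 12)
# adequate HL                  3                36        (n = 39)

def chi_square_2x2(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

chi2 = chi_square_2x2([[5, 7], [3, 36]])
print(round(chi2, 2))   # ~8.01, well above 3.84, the 5% critical
                        # value for 1 degree of freedom
```

Note that one expected cell count here is below 5, which is why small-sample studies of this kind sometimes apply a continuity correction or an exact test instead.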

Discussion

The issue of patient comprehension in the delivery of information regarding the pre-admissions anaesthetic consultation and procedural risks cannot be overstated. Without the ability to correctly understand and interpret both verbal and written information, patients will be unable to provide accurate consent for their procedure, and they will also be at risk of poor health outcomes because they may have misunderstood important information regarding their procedures. If we could identify this at-risk patient group by using a quick and cheap assessment of health literacy, additional resources and techniques could be utilised to improve patient understanding that would otherwise be absent in the standard pre-admissions anaesthetic consultation. Upon analysis of the data, a significant difference was found between the health literacy groups in terms of comprehension of the pre-anaesthetics consultation.

The findings of patient comprehension in the pre-admissions anaesthetics consultation mirror those of a number of other studies. Similar findings by Kadakia [15] and Wallace [19] show that a lower level of health literacy can place patients at risk of poor comprehension. This can be, and has been, used as a predictor for patients at risk of misinterpreting health care information. For example, the study by Kadakia [15] examined comprehension and health literacy in an orthopaedic trauma patient population. They used the same questions by Chew et al. [16] to delineate patients into inadequate and adequate health literacy, and then tested patient comprehension and knowledge of their procedure. They found a significant link between poor health literacy and poor patient comprehension and retention of information about their procedure. However, they also found that patient comprehension depended on educational level, which was not replicated in this study; this may have been due to the larger sample size of the Kadakia study. Nevertheless, their suggestion of an increased focus on patient communication by medical staff can also be applied in the pre-anaesthetics consultation.

Predicting Patient Comprehension

Since this study demonstrated that health literacy can have an impact on overall patient comprehension, screening of health literacy could be a valuable addition to pre-anaesthetic clinic consultations. Doing so would help to identify those patients at greatest risk of poor comprehension and would allow for the delivery of information targeted toward individual patient needs. The anaesthetists in this study could then have improved patient comprehension by employing a variety of techniques. These could include using simple, easy-to-understand language and speaking slowly, [20] asking patients to repeat back basic information, [21] and having a longer consultation time. [22]

One component that this study did not explore was the effect that supplementary information can have on further improving patient comprehension. In theory, patient leaflets should be a very useful tool in assisting patients with comprehension of their medical procedure and management of their condition, before and after their procedure. However, many patient information leaflets are written at levels in excess of the mean patient literacy, [23] often including too much information, which may be irrelevant to the patient’s needs. [24] Information provided to patients during these education sessions should therefore be aimed at an appropriate level for the target audience. Furthermore, using culturally appropriate images linked to either spoken or written information can be an additional useful strategy to help improve patient retention and comprehension of health information provided during consultations. [25]

Limitations and Future Research

The nature of the current study resulted in a small sample size, with it unlikely that the entirety of patients presenting to the pre-anaesthetics clinic was sampled. This small sample size limits the statistical power of the study, and it is possible that some of the non-significant differences may trend towards significance with a larger sample size. Because the study was an anonymous survey, it would be speculative to estimate patient uptake of the surveys. In addition, those patients with poor health literacy and/or patient comprehension may not have attempted to complete the questionnaire. This introduces a level of selection bias towards those with higher levels of health literacy, something that could potentially be avoided if the survey were a compulsory part of the pre-anaesthetics consultation workup; assistance could also be provided to these patients in completing the survey after their consultation. The exploratory nature of the study, as well as the use of a scoring system for comprehension that has not been validated, also limits this study. In particular, validation of the scoring criteria for comprehension would be vital. Furthermore, measurements of both inter-rater and intra-rater reliability were not performed.

There were also a number of confounding factors which need to be considered as part of the current study. For instance, different anaesthetists were involved throughout the duration of the study; as a result, this study was unable to allow for the potential differences in information delivery from each of these health professionals. It is also feasible that word of mouth about the study may have led to the anaesthetists themselves changing their approach to information delivery. Additionally, due to the variety of surgical specialties and procedures included, it is possible that the complexity of the procedure would have influenced the patient’s comprehension. In light of this variety in the delivery of information, future inclusion of a patient’s overall satisfaction with the delivery of health information may be beneficial, as well as the anaesthetist’s overall impression of the patient’s level of health literacy and comprehension of the consultation.

Language could be seen as another confounding factor and barrier to comprehension of the anaesthetic process; in fact, it could also have led to some patients declining to enter the study itself. This could be addressed in future studies by either excluding patients from a non-English speaking background (NESB), or utilising this as additional demographic data for future analysis. The patient’s postcode and socio-economic status could also have a profound effect on health literacy and patient comprehension, and were not assessed in this study.


In terms of the patient understanding the anaesthetist, a qualitative component could be included in future studies. From this, we could further investigate barriers and facilitators that may have impacted upon the patient’s ability to understand their anaesthetist. Moreover, future studies could also assess patient recall of important information imparted as part of the consultation. An additional component that should be included in future analysis is the proportion of patients who returned surveys out of the entire population presenting to the pre-anaesthetics clinic.

The results of this study warrant further research addressing the limitations discussed above. This would include a larger sample size, a longer study period and, potentially, multiple sites. In addition, developing a validated score of comprehension would be beneficial in future analysis. By increasing the sample size and including a validated score of comprehension, stronger statistical analysis could be performed. A study of this kind could be replicated in a variety of areas of medicine where comprehension of risks and complications is needed.

Conclusion

The results of this study suggest that screening for at-risk patients prior to attending a pre-admissions anaesthetic clinic may be beneficial in identifying patients with poor health literacy. Such individuals could have information tailored to maximise comprehension of the pre- admission anaesthetic consultation. Further research in these areas is warranted.

Acknowledgements

The author would like to thank both Dr Judy Mullan and Dr Timothy Billington from the University of Wollongong Graduate School of Medicine for their support and advice for the duration of this project.

Conflict of interest

None declared.

Correspondence

M Russell: mr828@uowmail.edu.au

References

[1] World Health Organisation. Track 2: Health literacy and health behaviour [Internet]. 2009   [cited   2014   Apr   4].   Available   from:   http://www.who.int/healthpromotion/conferences/7gchp/track2/en/

[2] Australian Bureau of Statistics. Health literacy, Australia, 2006 [Internet]. [updated 2008 Jun 25; cited 2014 Apr 4]. Available from: http://www.abs.gov.au/AUSSTATS/abs@.nsf/Latestproducts/4233.0Main%20Features22006?opendocument&tabname=Summary&prodno=4233.0&issue=2006&num=&view=

[3] DeWalt DA, Berkman ND, Sheridan S, Lohr KN, Pignone MP. Literacy and health outcomes: a systematic review of the literature. J Gen Intern Med. 2004; 19:1228-39.

[4] Berkman ND, Sheridan SL, Donahue KE, Halpern DJ, Crotty K. Low health literacy and health outcomes: an updated systematic review. Ann Intern Med. 2011; 155:97-107.

[5] Makaryus AN, Friedman EA. Patients’ understanding of their treatment plans and diagnosis at discharge. Mayo Clin Proc. 2005; 80(8):991-4.

[6] Davis TC, Wolf MS, Bass III PF, Thompson JA, Tilson HH, Neuberger M, Parker RM. Literacy and misunderstanding prescription drug labels. Ann Intern Med. 2006; 145:887-94.

[7] Bennett IM, Chen J, Soroui JS, White S. The contribution of health literacy to disparities in self-rated health status and preventative health behaviors in older adults. Ann Fam Med. 2009; 7:204-11.

[8] Baker DW, Wolf MS, Feinglass J, Thompson JA, Gazmararian JA, Huang J. Health literacy and mortality among elderly persons. Arch Intern Med. 2007; 167(14): 1503-9.

[9] Baker DW, Wolf MS, Feinglass J, Thompson JA. Health literacy, cognitive abilities, and mortality among elderly persons. J Gen Intern Med. 2008; 26(6):723-6.

[10] NSW Department of Health. Pre-procedure preparation toolkit [Internet]. 2007 Nov 2 [updated 2012 Nov 2; cited 2014 Apr 4]. Available from: http://www0.health.nsw.gov.au/policies/gl/2007/GL2007_018.html

[11] O'Connor DB, Cotter M, Treacy O, Owens T, McShane A, Mehigan D, Sheehan SJ, Barry. An anaesthetic pre-operative assessment clinic reduces pre-operative inpatient stay in patients requiring vascular surgery. Ir J Med Sci. 2011; 180:649-53.

[12] Ferschl MB, Tung A, Sweitzer B, Huo D, Glick DB. Preoperative clinic visits reduce operating room cancellations and delays. Anesthesiology. 2005; 103:855-9.

[13] Stern C, Lockwood C. Knowledge retention from preoperative patient information. Int J Evid Based Healthc. 2005; 3:45-63.

[14] Puro H, Pakarinen P, Korttila K, Tallgren M. Verbal information about anesthesia before scheduled surgery – contents and patient satisfaction. Patient Educ Couns. 2013; 90:367-71.

[15] Kadakia RJ, Tsahakis JM, Issar NM, Archer KR, Jahangir AA, Sethi MK, Obremskey WT, Mir HR. Health literacy in an orthopaedic trauma patient population: a cross-sectional survey of patient comprehension. J Orthop Trauma. 2013; 27:467-71.

[16] Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004; 36(8):588-94.

[17] Chew LD, Griffin JM, Partin MR, Noorbaloochi S, Grill JP, Snyder A, Bradley KA, Nugent SM, Baines AD, VanRyn M. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008; 23(5): 561-6.

[18] Preacher, KJ. Calculation for the chi-square test: an interactive calculation tool for chi-square tests of goodness of fit and independence [Computer Software]. [updated 2001 Apr; cited 2014 Apr 4]. Available from: http://www.quantpsy.org

[19] Wallace LS, Cassada DC, Rogers ES, Freeman MB, Grandas OH, Stevens SL, Goldman. Can screening items identify surgery patients at risk of limited health literacy? J Surg Res. 2007; 140(2):208-13.

[20] Safeer RS, Keenan J. Health literacy: the gap between physicians and patients. Am Fam Physician. 2005; 72(3):463-8.

[21] Schenker Y, Fernandez A, Sudore R, Schillinger D. Interventions to improve patient comprehension in informed consent for medical and surgical procedures: a systematic review. Med Decis Making. 2011; 31:151-73.

[22] Fink AS, Prochazka AV, Henderson WG, Bartenfeld D, Nyirenda C, Webb A, Berger DH, Itani K, Whitehill T, Edwards J, Wilson M, Karsonovich C, Parmelee P. Predictors of comprehension during surgical informed consent. J Am Coll Surg. 2010; 210(6):919-26.

[23] Cronin M, O’Hanlon S, O’Connor M. Readability level of patient information leaflets for older people. Ir J Med Sci. 2011; 180:139-42.

[24] Rajasundaram R, Phillips S, Clay NR. Information leaflet used in out-patient clinics: A survey of attitude and understanding of the user. Int J Health Care Qual Assur. 2006; 19(7):575-9.

[25] Houts PS, Doak CC, Doak LG, Loscalzo MJ. The role of pictures in improving health communication: A review of research on attention, comprehension, recall and adherence. Patient Educ Couns 2006; 61:173-90.

Categories
Original Research Articles

How do the specialty choices and rural intentions of medical students from Bond University (a full-fee paying, undergraduate-level medical program) compare with other (Commonwealth Supported Places) Australian medical students?

Introduction: Australian medical schools are demonstrating an increased interest in full-fee paying education, which warrants assessment of possible ramifications on the profile of the Australian medical workforce. This study aims to identify differences in demographics, specialty preferences and rural intentions between domestic full-fee paying undergraduate medical students and all other (CSP) Australian medical students. Methods: The data of 19,827 medical students were accessed from the Medical Schools Outcomes Database from 2004-2011. These were then analysed using logistic regression and McNemar's test to identify differences in specialty choice and preferred location of practice. Results: Demographically, full-fee paying medical students of Bond University and other Australian medical students were similar in age and gender. However, Bond medical students were less likely to come from a rural background (10% versus 21.7%) and, even after logistic regression analysis, still showed a greater preference for future urban practice at both entry and exit of medical school than all other students (entry questionnaire OR = 3.3, p < 0.01, and exit questionnaire OR = 3.9, p < 0.05). There was no significant difference in preference for higher-paid medical specialties or those in short supply between Bond medical students and all other Australian medical students. Conclusion: Full-fee paying medical students of Bond University demonstrate similar future specialty preferences but are far more likely to come from an urban background and choose urban over rural practice than other medical students. Further research is necessary to better understand the implications of full-fee paying education on the medical workforce.


Introduction

Australian medical schools are demonstrating an increased interest in providing full-fee paying education; in 2004 there were 160 places for domestic full-fee paying Australian medical students (1.6% of all students), which increased to 932 (7%) by 2008 and 871 (5.1%) in 2013. [1,2] This trend warrants the assessment of possible ramifications on the profile of the Australian medical workforce in terms of specialty and geographical distribution. [3]

There are a number of medical student characteristics and experiences that are known to guide medical training and ultimately impact on the nature and location of their specialty choices. [4-8] This includes demographic characteristics such as gender, background (rural or urban origin), personal and family factors (whether a student has a partner or children), education, personality and interests. [9] Previous research has indicated a pattern of gender distribution amongst medical specialties, where women are more likely to choose general practice and men are more likely to enter other specialist careers (such as surgery, which remains a very male-dominated field). [9] Similarly, male doctors tend to place a higher emphasis on financial remuneration and women are generally more concerned about working hours and flexibility of practice. [8] The perceived prestige and lifestyle factors associated with certain specialties play a significant role in specialisation choice. [8,10-12] Clinical exposure to specialty fields is key in influencing some of these preconceived views. [10,13]

It is well documented that there is a significant shortage and maldistribution of doctors in remote and rural Australia, and an increasing awareness that inadequate healthcare in these communities needs to be addressed. [5,6,8-10,12,14-19] Only 23% of Australian doctors practise in places of significant workforce need, where the number of doctors per head of population is 54-65% of that in metropolitan areas. [16] Although programs, research and government incentives have been introduced over the past 20 years to address these problems, the Rural Doctors Association of Australia has reported that less than 5% of medical school graduates have taken up rural practice in the last 15 years and the majority of doctors working in rural areas are international medical graduates on restricted provider numbers. [12,19,20]

Among the many factors that influence medical students to take up rural practice after graduation, the strongest indicator is a rural background, closely followed by positive rural placements. [5,9,15,16,20,21] Rural-practicing doctors are two to four times more likely to be of rural background than those practising in urban areas. [5] However, between 34% and 67% of rural doctors originate from urban backgrounds, which is attributed in part to students' rural clinical exposure through scholarships and placements such as the John Flynn Placement Program and the Rural Undergraduate Support and Coordination (RUSC) funded rural experience. [5,22] Training opportunities such as Rural Clinical Schools are also effective in influencing students towards a rural career by allowing students to experience the benefits of rural life first-hand whilst providing effective and innovative medical education. [7,22,23] Many programs are now available for medical students that offer exposure to rural practice. [6,12,17,23,24]

Bond University was the first institution to offer a full-fee paying undergraduate medical course in Australia in 2005 [25], with no direct funding from the Australian Government. Several well-established medical schools followed suit by introducing up to 50% more full-fee paying places in their current medical programs to cater for the student surplus, including international students. [26] While fees vary amongst medical schools for both domestic and international full-fee paying places, they are generally between $30,000 and $60,000 per annum for a four to six year education. [27] Domestic full-fee paying students do have the option of accessing loans under the Government 'Fee-Help' program to cover a portion of their tuition fees; they are entitled to a lifetime maximum of $112,134 for a medical education (as of 2013) with 20% simple interest, repayable upon graduation and employment. [27]

The aim of this study was to determine whether full-fee paying Australian medical students differ significantly from other medical students in terms of intended future specialty and rural/urban location of practice. The financial burden of full-fee places has raised significant concerns about its implications for the accessibility of medical education and future workforce specialty distribution. We hypothesised that full-fee paying students would indicate an increased preference for pursuing future urban practice and higher-paying specialties.

Methods

Data was provided by the Medical Schools Outcomes Database (MSOD), a project of the Medical Deans Australia and New Zealand association that is funded by Health Workforce Australia as a means of evaluating rural medical education initiatives. [19] Commencement of Medical School Questionnaires (CMSQ) and Exit Questionnaires (EQ) are administered to all medical students on entry to and graduation from all Australian medical schools and at the end of the first postgraduate year.

Independent variables

The main independent binary variable of comparison represented whether the student attended Bond University's full-fee paying undergraduate medical program or not. Other independent variables included in each analysis were the student's sex, age when they began medical school, the year they began medical school, whether they were of rural background and their marital status.

Dependent variables

Preference for urban versus rural future medical practice was re-categorised into a binary variable from the original questionnaire categories: those who chose to practise in a small community, small town, regional city or town were considered rural candidates; those who chose to practise in a capital or major city centre were considered urban candidates.

Two variables were created to explore preferences for future medical specialty. The first is a binary variable that assesses the preference for choosing a higher-paying specialty. Students who chose surgery, obstetrics and gynaecology, radiology, intensive care medicine or emergency medicine (the top five specialties rated as highest paid in the 2010 'Medicine in Australia: Balancing Employment and Life' (MABEL) study) were considered in pursuit of a higher-paying specialty. [24] The second binary variable examined the preference for choosing a specialty in demand (not necessarily highest paid). Students who chose general practice, psychiatry, obstetrics and gynaecology, pathology, ophthalmology or radiology (predicted by Health Workforce Australia (HWA) to be the top six specialties in short supply by 2025) were considered in pursuit of a specialty in demand. [28]
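As an illustration of the recoding described above, the following Python sketch maps raw questionnaire responses onto the three binary outcome variables. The category labels are illustrative only, since the exact questionnaire wording is not reproduced here.

```python
# Hypothetical recoding of questionnaire responses into binary variables,
# following the groupings described in the Methods (labels are illustrative).
RURAL_LOCATIONS = {"small community", "small town", "regional city or town"}
URBAN_LOCATIONS = {"capital city", "major city centre"}

TOP5_PAID = {"surgery", "obstetrics and gynaecology", "radiology",
             "intensive care medicine", "emergency medicine"}
IN_DEMAND = {"general practice", "psychiatry", "obstetrics and gynaecology",
             "pathology", "ophthalmology", "radiology"}

def recode(location: str, specialty: str) -> dict:
    """Return the three binary outcome variables for one student."""
    loc = location.lower()
    spec = specialty.lower()
    return {
        "prefers_urban": loc in URBAN_LOCATIONS,
        "top5_paid": spec in TOP5_PAID,
        "in_demand": spec in IN_DEMAND,
    }

print(recode("Regional city or town", "Radiology"))
# → {'prefers_urban': False, 'top5_paid': True, 'in_demand': True}
```

Note that radiology appears in both specialty lists, so a single preference can contribute to both binary outcomes.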

Statistical analysis

Independent samples t-tests were used to compare differences between medical students of Bond University and all other Australian medical students on demographic background variables (Table 1). Logistic regression was used for comparisons between full-fee paying medical students of Bond University and all other Australian medical students in analyses of preferences for the three dependent variables listed above. Data on preferences for rural versus urban practice, for the top five paid, and six most in-need specialties were analysed at two data collection time points: on entry to medical school (CMSQ) and exit from medical school (EQ), resulting in six logistic regression models comparing full-fee paying undergraduate medical students with all other medical students. McNemar’s test was used to analyse changes in student rural future practice and specialty preferences between the time that they entered and exited medical school, and logistic regression to explore changes through time between cohorts in these preferences.
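McNemar's test, used above for the paired entry/exit preferences, depends only on the two discordant cells of the 2×2 table. A minimal pure-Python sketch of the continuity-corrected statistic follows; the counts are made up for illustration and are not the study's data.

```python
import math

def mcnemar(b: int, c: int) -> tuple[float, float]:
    """McNemar's chi-square (continuity-corrected) from the discordant cell
    counts: b = yes-at-entry/no-at-exit, c = no-at-entry/yes-at-exit."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # For 1 degree of freedom, the chi-square survival function reduces
    # to the complementary error function: P(X > x) = erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Illustrative counts only: 60 students dropped a rural preference
# between entry and exit, 25 gained one.
chi2, p = mcnemar(60, 25)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# → chi2 = 13.60, p = 0.0002
```

In practice a statistics package would be used, but the computation itself is no more than this.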


Ethics Approval

Approval was provided by all universities for the MSOD project, which applies to this paper. Permission was requested and approval given by MSOD to use their data for this research article (MR-2013-002).

Results

The results of the McNemar's tests showed no statistically significant change for full-fee paying medical students of Bond University who completed both entry and exit questionnaires (n = 94), but in all other medical students (n = 3760) there was a significant drop in intention to practise rurally, and an increase in preference for a top five paid specialty and for specialties predicted to be in short supply (p-values < 0.001). In addition, there was evidence of cohort effects in CMSQ preferences amongst all medical students: between the 2005 and 2011 cohorts entering medical school, later cohorts of students had a greater preference for urban future practice. The cohort effect odds ratio was 1.07 (p < 0.001; 95% CI: 1.04-1.10). Later cohorts were less likely to select a future specialty on the list of six most in-need (OR = 0.95, 95% CI: 0.94-0.98, p < 0.001) but were not more or less likely to prefer a top five paid specialty. No significant cohort effects were observed in the exit questionnaire analyses, although it should be noted that the exit data only included four cohorts (2008-11).
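The odds ratios and confidence intervals reported here follow the usual log-odds (Wald) construction. As a worked illustration with made-up cell counts (not the study's data), an OR and its 95% CI can be computed from a 2×2 table as follows; the final line shows how a per-cohort OR of 1.07 compounds across the six cohort steps from 2005 to 2011.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = exposed with/without outcome; c, d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative only: suppose 80 of 100 full-fee students versus
# 50 of 100 other students prefer urban practice.
or_, lo, hi = odds_ratio_ci(80, 20, 50, 50)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
# → OR = 4.0 (95% CI 2.1-7.5)

# A per-cohort OR of 1.07 compounded over six annual cohort steps:
print(f"1.07^6 = {1.07 ** 6:.2f}")
# → 1.07^6 = 1.50
```

With the study's actual cell counts the same formulas would reproduce the reported intervals; the counts above are invented for demonstration.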

Both commencement and exit of medical school surveys showed that full-fee paying medical students of Bond University had a significantly greater preference for future urban practice than other Australian medical students (Figure 1).


Age on entry to medical school, gender, marital status and whether the student was from a rural background were statistically controlled in all six analyses.

Analyses were performed using logistic regression; 95% CIs are reported in the Results text; p < 0.05.

OR > 1.0 indicates variable in favour of full-fee paying medical students of Bond University.

OR < 1.0 indicates variable in favour of other Australian medical students.


Analyses showed that full-fee paying medical students of Bond University were neither more likely to prefer the top five paid medical specialties, nor more likely to pursue the top six specialties predicted to be in need by 2025, when compared with all other medical students in Australia (Table 2).


Of the variables which were statistically controlled in the logistic regression analyses, a number were significant predictors of the three outcomes. These findings have been summarised in Table 3. At entry to medical school, older students and women were less likely to select a top five paid specialty and women remained less likely to select a top five paid specialty at exit. Married students were significantly less likely to choose any top five paid specialty as their first preference on exit. Specialties in short supply were selected at entry by those who were older, married, from a rural background or female and at exit by those who were married or female. Coming from an urban background was a strong predictor of not preferring future rural medical practice at both entry and exit from medical school and men were oriented towards urban practice. Younger students stated less preference for rural practice and unmarried students stated a lesser preference for future urban practice.


Discussion

Interestingly, in contrast to our hypothesis, we found that full-fee paying medical students of Bond University were not more or less likely to prefer the highest-paid medical specialties when compared with other Australian medical students. There were also no significant differences in preference for specialties predicted to be in short supply. This result implies that the full-fee paying nature of education is not a significant influence on future specialty preferences, supporting the idea that this choice may be guided by other demographic and experiential factors documented in the literature.

More students in general (that is, including students of Bond and all other medical schools) tended to select a top-paid specialty by the end of medical school compared with entry. The potential generalised 'commercialisation' of students' motivations during medical training therefore remains a point of concern, even though full-fee paying training itself appears an unlikely contributing factor. There are no papers in the current Australian literature specifically exploring the factors influencing medical students in this choice, so we can only theorise on the circumstances affecting the decision to pursue a higher-paid specialty. This trend may indicate that students begin medical school with more altruistic and rural intentions, but change their minds during training and come to place greater importance on financial return as they mature through their educational experience. Cohort effects may also play a role (that is, whether more recent student cohorts are more oriented towards future urban practice and career earnings). The trends of student specialty preference being affected by financial debt obtained during training and potential remuneration in higher paid specialties are being increasingly explored in American and Canadian literature. [29,30]

At exit from medical school, fewer Australian medical students, in general, planned to work in a rural area than at entry, despite the numerous incentives and rural programs to encourage rural medical practice. The decrease in preference for rural practice by graduation in all Australian medical students may reflect the small number of regional medical schools or limited opportunity for rural placements, and factors such as specialty choice and training in urban areas. It is nonetheless clear that full-fee paying Bond medical students are more likely to prefer urban practice when compared against other Australian medical students. This may suggest the need for further modification of medical school recruitment and admission processes at privately funded institutions to focus on students who demonstrate either a rural background or an interest in rural practice. There are current opportunities for students who are keen to undertake rural clinical clerkships at all medical schools through various in-curricular and extra-curricular activities. However, unlike their Commonwealth-supported counterparts, privately-funded medical schools are not mandated to require their students to undertake this rural exposure. Ensuring a rural clinical rotation could be a potential avenue for encouraging more students to pursue rural and remote practice. James Cook University (JCU) has designed its medical program specifically to recruit and prepare doctors to work in rural and remote locations. Their program is characterised by a selection process targeting students of regional and remote backgrounds, a rural community orientated curriculum, increased engagement with Aboriginal and Torres Strait Islander health issues and more frequent and extended rural clinical placements. [31] As a result, at graduation, 88% of JCU medical students intend to practise outside Australian capital cities, compared to 31% of graduates from other medical schools. [31]

The conclusions of this study are limited by the difference in sample size between full-fee paying undergraduate medical students and all other students (496 compared with 18,161). This restricted the potential for statistically significant subgroup analysis. A further qualitative study would be useful in clarifying student motivations and the factors influencing decisions about future specialty choice and location of practice. There is, as yet, no long-term data on how students' preferences translate to actuality, with the first students who contributed to the MSOD project still in their early postgraduate years. Ongoing follow-up of students may also shed further light on factors that influence doctors at all stages of training. There is an increasing emphasis on medical schools becoming 'socially accountable' in their training of physicians, in order to respond to current and future health needs and challenges in society, including the maldistribution of doctors. [32] The initiatives that medical schools undertake to fulfil the criteria presented by the World Health Organisation (WHO) for social accountability are designed to shape the training of medical students and may therefore be partially responsible for graduate specialty preferences. Further research is being conducted to clarify whether a full-fee paying medical education and its potential associated debts can influence specialty choice, particularly towards higher income specialties. [33]

Conclusion

Full-fee paying medical students of Bond University are more likely to come from an urban background and to prefer urban over rural practice at exit of medical school when compared with all other Australian medical students. This is a point of concern and may inform future modifications to medical school admission processes, as well as more opportunities for rural clinical exposure in the curriculum. Nonetheless, they remain similar to all other Australian medical students in terms of demographic characteristics and preference for higher-paying specialties and those in short supply. Future research should assess the long-term impact of full-fee paying medical education in Australia on medical workforce distribution and specialty choice.

Acknowledgements

The research on which this paper is based used data provided by the Medical Schools Outcomes Database (MSOD) Project, Medical Deans Australia and New Zealand. We are grateful to the Australian Government Department of Health and Ageing for funding the project from 2004 – 2011, to Health Workforce Australia for funding from 2011 onwards and to the medical students and graduated doctors who participated.

Conflict of interest

None declared.

Correspondence

E Teo: eteo@outlook.com

References

[1] Jolly R. Briefing report: Medical practitioners education and training in Australia. Parliamentary Library (Australia). 2009. Available from: http://apo.org.au/node/17809

[2] Australian Government Department of Health. Medical training review panel: Seventeenth report. 2014. Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/work-pubs-mtrp

[3] Australian Medical Students' Association. Concerning increase in private medical places [Internet]. 2010 [cited 2014 Aug 29]. Available from: https://www.amsa.org.au/press-release/20100607-concerning-increase-in-private-medical-places/

[4] Laurence C, Elliott T. When, what and how South Australian pre-registration junior medical officers’ career choices are made. Medical Education. 2007;41(5):467-75.

[5] Tolhurst HM, Adams J, Stewart SM. An exploration of when urban background medical students become interested in rural practice. Rural Remote Health. 2006;6:452.

[6] Eley DS, Synnott R, Baker PG. A decade of Australian rural clinical school graduates: Where are they and why? Rural Remote Health. 2012;12:1937.

[7] Pearson SA, Rolfe I, Clare R. A comparison of practice outcomes of graduates from traditional and non-traditional medical schools in Australia. Medical Education. 2002;36:985-91.

[8] Thistlethwaite JE, Leeder SR, Kidd MR. Addressing general practice workforce shortages: Policy options. Med J Aust. 2008;189(2):118-21.

[9] Ward AM, Kamien M, Lopez DG. Medical career choice and practice location: Early factors  predicting  course  completion,  career  choice  and practice  location.  Medical Education. 2004;38:239-48.

[10] Creed PA, Searle J, Rogers ME. Medical specialty prestige and lifestyle preferences for medical students. Social Science and Medicine. 2010;71:1084-8.

[11] Tolhurst H, Stewart M. Becoming a GP: a qualitative study of the career interests of medical students. Aust Fam Physician. 2005;34(3):204-6.

[12] Krahe LM, McColl AR, Pallant JF. A multi-university study of which factors medical students consider when deciding to attend a rural clinical school in Australia. Rural Remote Health. 2010;10:1477.

[13] Spencer RJ, Cardin AJ, Ranmuthugala G, Somers GT, Solarsh B. Influences of medical students’ decisions to study at a rural clinical school. Aust J Rural Health. 2008;16(5):262-8.

[14] Eley D, Young L, Przybeck TR. Exploring temperament and character traits in medical students: A new approach to increase a rural workforce. Med Teach. 2009;31:79-84.

[15] Adams ME, Dollard J, Hollins J. Development of a questionnaire measuring student attitudes to working and living in rural areas. Rural Remote Health. 2005;5:327.

[16] Critchley J, DeWitt DE, Khan MA. A required rural health module increases students' interest in rural health careers. Rural Remote Health. 2007;7:688.

[17] Denz-Penhey H, Murdoch JC. Reported reasons of medical students for choosing a clinical longitudinal integrated clerkship in an Australian rural clinical school. Rural Remote Health. 2009;9:1093.


[18] Humphreys JS, Prideaux D, Beilby JJ. From medical school to medical practice: A national tracking system to underpin planning for a sustainable medical workforce in Australasia. Med J Aust. 2009;191(5):244-5.

[19] Jones M, Humphreys J, Prideaux D. Predicting medical students’ intentions to take up rural practice after graduation. Medical Education. 2009;43:1001-9.

[20] Jones M, Humphreys JS, McGrail MR. Why does a rural background make medical students more likely to intend to work in rural areas and how consistent is the effect? A study of the rural background effect. Aust J Rural Health. 2012;20:29-34.

[21] Playford DE, Evans SF, Atkinson DN. Impact of the Rural Clinical School of Western Australia on work location of medical graduates. Med J Aust. 2014;200:104-7.

[22] Australian Government Department of Health. Rural Clinical Schools Program. 2014. Available from: http://www.health.gov.au/clinicalschools

[23] Couper I, Worley PS, Strasser R. Rural longitudinal integrated clerkships: lessons from two programs on different continents. Rural Remote Health. 2011;11:1665.

[24] Cheng TC. What factors influence the earnings of general practitioners and medical specialists? Evidence from the medicine in Australia: Balancing employment and life survey. Health Econ. 2012;21(11):1.

[25] Prideaux D. Medical education in Australia: Much has changed but what remains? Medical Teacher. 2009;31(2):96-100.

[26] Hubraq H. CMSQ National Data Report. Canberra, ACT: Medical Deans Australia and New Zealand, Inc; 2011.

[27] Information for Commonwealth Supported Students and HECS-HELP. In: EaWR DoE, editor. Canberra, ACT: Commonwealth of Australia; 2012.

[28] Australian Government. Health Workforce Australia (HWA) Report 2025 for Doctors, Nurses and Midwives. Health Workforce Australia. 2012 [cited 2014 Aug 29]; Available from: https://www.hwa.gov.au/sites/uploads/HW2025_V3_FinalReport20121109.pdf.

[29] Morra DJ, Regehr G, Ginsburg S. Medical students, money, and career selection: Students’ perception of financial factors and remuneration in family medicine. Fam Med. 2009;41(2):105-10.

[30] Rosenblatt R, Andrilla C. The impact of U.S. medical students' debt on their choice of primary care careers: An analysis of data from the 2002 medical school graduation questionnaire. Acad Med. 2005;80(9):815-19.

[31] Sen Gupta T, Murray R, Hays R, Woolley T. James Cook University MBBS graduate intentions and intern destinations: A comparative study with other Queensland and Australian medical schools. Rural Remote Health. 2013;13:2313.

[32] Larkins SL, Preston R, Matte MC, Lindemann IC, Samson R, Tandinco FD. Measuring social accountability in health professional education: Development and international pilot testing of an evaluation framework. Med Teach. 2013;35(1):32-45.

[33] Hays R, Lockhart K, Teo E, Smith J, Waynforth D. Full medical program fees and medical student career intention. Med J Aust. Pending Publication.

Categories
Case Reports

Impact of socioeconomic status on the provision of surgical care

In Australia, there is an association between low socioeconomic status (SES) and poor health outcomes. Surgical conditions account for a large portion of a population’s disease burden. The aim was to determine the difference in provision of surgical care and patient satisfaction between low and high SES communities in Sydney, Australia. A cross-sectional analytical study was conducted using questionnaire-based data. Patients were recruited from five general practice centres across low and high SES areas. Participants were eligible for this study if they had undergone surgery under general anaesthesia within the last five years. Analysis was performed to determine whether waiting times for surgery and surgical consultations differed between low and high SES groups, and whether private health insurance impacted on waiting times. A total of 107 patient responses were used in the final data analysis. Waiting times for elective surgery were longer in the low SES group (p=0.002). The high SES group were more likely to have private health insurance (p<0.001) and were 28.6 times more likely to have their surgery in a private hospital. Private health insurance reduced waiting times for elective surgical procedures (p=0.004); however, there was no difference in waiting times for initial surgical consultations (p=0.449). Subjective patient satisfaction was similar between the two groups. In conclusion, our study demonstrates that SES does not impact on access to a surgical consultation, but a low SES is associated with longer waiting times for elective surgeries. Despite this, patients in both groups remained generally satisfied with their surgical care.


Introduction

In Australia, low socioeconomic status (SES) has been linked to poor health outcomes [1] with a 1.3 times greater mortality risk in low SES areas when compared to the highest SES areas. [2-3] Individuals living in more disadvantaged areas are more likely to engage in unhealthy behaviours, and their poorer health is reflected in more frequent utilisation of health care services. [4] Greater Western Sydney represents one of the lowest SES areas in Sydney, Australia [5] and, according to the Socio-Economic Indexes for Areas (SEIFA), contains eight of the ten most disadvantaged areas in Sydney. [5-6] For general elective procedures, average waiting times in Greater Western Sydney hospitals varied from 23 to 93 days, compared with 4 to 36 days in other areas of Sydney. [6] Thus, timely and easily accessible provision of surgical services is a growing necessity for the expanding population of Greater Western Sydney.

Methods

The research was approved by the University of Western Sydney Human Research Ethics Committee (H9067). The SEIFA [7] score was used to determine the areas chosen for data collection. A total of five Sydney General Practices, three located in low SES areas and two in high SES areas, were chosen randomly for patient recruitment.

The data collection tool employed was a survey which included questions relating to SES factors, health fund status, comorbidities, details of the surgical procedures undertaken, waiting times for operations, follow-up consultations, post-operative complications and patient satisfaction. The survey and written consent were offered to all General Practice waiting room patients over a period of two weeks by the authors. Patients were eligible to participate if they had undergone a surgical procedure in Sydney, performed under general anaesthesia within the last five years. The survey was anonymous with no personally identifying information recorded.

Data were analysed using Microsoft Excel 2010 and SPSS software version 22.0. Logarithmic values were calculated for all data sets and t-tests performed for analysis. Chi-squared analyses were conducted to assess the effect of private health insurance on hospital choice.
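To illustrate the kind of chi-squared analysis described above, the sketch below computes the Pearson chi-squared statistic for a 2×2 table of insurance status against hospital type. The counts are hypothetical, invented purely for demonstration; they are not the study's data, and in practice a statistics package (the authors used SPSS) would also return the p-value.

```python
# Illustrative sketch only: Pearson chi-squared statistic for a 2x2
# contingency table (insurance status x hospital type). The counts
# below are hypothetical and are NOT the study's data.

def chi_squared_2x2(table):
    """Return the Pearson chi-squared statistic for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for observed, row, col in ((a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows: insured / uninsured; columns: private / public hospital.
table = [[40, 10], [5, 52]]
stat = chi_squared_2x2(table)
print(round(stat, 2))
```

With one degree of freedom, a statistic above the critical value of 3.84 corresponds to a significant association at the 0.05 level.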

Results

A total of 107 surveys were eligible for analysis after excluding dental procedures, colonoscopies, procedures performed outside Sydney, emergency procedures, caesarean sections and respondents under 18 years of age.

Table 1 illustrates the characteristics of the sample studied. Notable differences between responses from high and low SES areas include level of education and private health insurance status. The median ages were 56 for low SES and 66 for high SES (p=0.02). Table 2 displays the types of surgical procedures that were included in the study.

[Table 1: Characteristics of the study sample]

[Table 2: Surgical procedures included in the study]

Waiting times

The average waiting time for consultation with a surgeon was 2.5 weeks in the low SES group and two weeks in the high SES group (p=0.449). Private health insurance status did not influence this waiting time. Waiting times for elective surgery were on average six weeks in the low SES group and 2.5 weeks in the high SES group (p=0.002). Possession of private health insurance was associated with a decreased waiting time (p=0.004).

Private health insurance and choice of hospital

Respondents with private health insurance were 28.6 times (p<0.001) more likely to have surgery performed at a private hospital.

Patient satisfaction

Table 3 demonstrates rates of patient satisfaction between the low and high SES groups. There was an overall trend for patients in the lower SES groups to be dissatisfied with waiting times but be generally satisfied with other aspects of surgery.

[Table 3: Patient satisfaction in the low and high SES groups]

Discussion

The study found that patients from lower SES groups had less private health insurance and longer wait times for surgery. Despite this, a high level of satisfaction was expressed across both SES groups regarding surgical outcomes and overall medical care during hospital admission.

These findings were anticipated and are consistent with previous research, which has shown that patients in the public system experienced longer waiting times and were 60-95% less likely to undergo surgery than private patients. Furthermore, privately insured patients were also found to have greater access to surgical care, shorter overall length of stay and lower mortality rates. [8] This suggests that increasing access to private care would relieve the burden on the public system and reduce waiting times. However, the converse has been shown to be the case, with an increase in waiting times for surgery when access to private hospitals is increased. [9] The trend towards generally high levels of satisfaction is counter-intuitive but consistent with the literature. [10-11]

The implications of longer waiting times in Western Sydney are of concern because the region’s population is expected to grow by 50% over the next 20 years, an increase of one million people [12], and the availability of health care services will have to expand to accommodate this growing population. There are increasing numbers of additions to public hospital elective surgery waiting lists every year. [13] Availability and staffing of beds in public hospitals are lower in the Western Sydney region, and there is a relative lack of private hospitals compared to the wider Sydney metropolitan area. [6] Compounding the issue of access to healthcare are lower rates of private health insurance membership and the generally poorer health of low SES populations. [6] It becomes apparent that there is a relative lack of services available in low SES areas of Sydney. It is estimated that the cost of funding enough public hospital beds to accommodate a populace of this size would be a minimum of $1.29 billion a year. This poses the risk of escalating inequality in access to health services between the low SES areas of Western Sydney and the wider metropolitan area. [6] The NSW government has invested $1.3 billion from the recent health budget to upgrade existing hospitals [14]; however, ongoing funding of these hospitals will need to increase to accommodate the growing demand. [6]

Data were collected from a small number of locations across only three SES regions in Sydney, providing a limited sample size for analysis. Recall bias would also have had an impact on the accuracy of responses, despite the five-year cut-off criterion. Future research would benefit from increasing data collection across a larger number of SES sites to reduce any possible sample bias. Furthermore, expanding data sources to include hospital databases would minimise recall bias, allowing for more objective and accurate data regarding the length of time spent on surgical waiting lists and utilisation of private health cover.

Conclusion

It is well established that a low SES is associated with poorer health. This study found that patients from low SES areas experienced longer waiting times for elective surgery. A contributing factor to these longer waiting times was the lower rate of private health insurance in this group. Patients from low SES areas felt that they waited too long for their surgery; however, overall satisfaction ratings were generally high across both SES groups. The interplay between SES and the public and private health systems has created a disparity in access to timely elective surgery.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

Z El-Hamawi: z.elhamawi@hotmail.com

References

[1] Armstrong BK, Gillespie JA, Leeder SR, Rubin GL, Russell LM. Challenges in health and health care for Australia. Med J Aust. 2007;187(9):485-489.

[2] Korda RJ, Clements MS, Kelman CW. Universal health care no guarantee of equity: Comparison of socioeconomic inequalities in the receipt of coronary procedures in patients with acute myocardial infarction and angina. BMC Public Health. 2009;9:460.

[3] Clarke P, Leigh A. Death, dollars and degrees: Socio-economic status and longevity in Australia. Economic Papers. 2011;30(3):348-55.

[4] Australian Bureau of Statistics. Health status: Health & socioeconomic disadvantage of area. Canberra. 2006 May. Cat. No. 4102.0.

[5] Australian Bureau of Statistics. ABS releases measures of socio-economic advantage and disadvantage. Canberra. 2008 March. Cat. No. 2033.0.55.001.

[6] Critical Condition: A comparative study of health services in Western Sydney [Internet]; Australia: Western Sydney Regional Organisation of Councils. August 2012. [cited 2013 Feb]

[7] Australian Bureau of Statistics. Census of population and housing: Socio-economic index for areas, Australia, 2011. Canberra. 2013 March. Cat. No. 2033.0.55.001

[8] Brameld K, Holman D, Moorin R. Possession of health insurance in Australia – how does it affect hospital use and outcomes? J Health Serv Res Policy. 2006;11(2):94-100.

[9] Duckett SJ. Private care and public waiting. Aust Health Rev. 2005;29(1):87-93.

[10] Myles PS, Williams DL, Hendrata M, Anderson H, Weeks AM. Patient satisfaction after anaesthesia & surgery: Results of a prospective survey of 10811 patients. Br J Anaesth. 2000;84(1):6-10

[11] Mira JJ, Tomás O, Virtudes-Pérez M, Nebot C, Rodríguez-Marín J. Predictors of patient satisfaction in surgery. Surgery. 2009;145(5):536-541.

[12] New South Wales in the future: Preliminary 2013 population projections [Internet]. Australia: NSW Government Department of Planning and Infrastructure;2013 [cited 2014 Sept]

[13] Australian Institute of Health and Welfare. Australian hospital statistics 2011-12: Elective surgery waiting times – Summary. 2012 Oct.

[14] $1.3 billion building boom for NSW hospitals [Internet]. Media Release. Australia: NSW Government Budget 2014-2015; 2014. [cited 2014 Sept]

Categories
Original Research Articles

English-speaking background and its relationship with length of stay in elderly patients admitted to a subacute setting: a retrospective study

Introduction: Despite the resource implications of extended inpatient stays, the impact of a non-English speaking background (NESB) on length of stay (LOS) has not been studied in the subacute geriatric population. We investigated the relationship between language background and LOS in elderly subacute inpatients. Method: A retrospective file audit of subacute inpatients (aged ≥75) was conducted. LOS, language background, interpreter requirement, comorbidities, functional status (Functional Independence Measure (FIM)), history of dementia/delirium, and discharge destination were noted. Results: 121 records were audited. 45 (37%) were identified as NESB with a median LOS of 21 days [IQR 13.0, 41.0] compared to 19 days for patients with an ESB [IQR 8.8, 35.8]. The median LOS for NESB patients who required an interpreter (n=24) was 27.5 [IQR 14.4, 44.8] compared to 17.0 [IQR 10.0, 40.0] for those who did not (n=21). There were no statistically significant differences in LOS found between ESB patients and NESB patients who required an interpreter (p=0.272), or NESB patients who required and did not require an interpreter (p=0.232). When short LOS patients (<22 days) were compared to long LOS patients (≥22 days), we found a significant association between a longer LOS and history of dementia/delirium (p=0.038), lower admission FIM score (p<0.001) and discharge destination. Those with short LOS were significantly more likely to be discharged to acute care, and those with long LOS to home or residential care (p=0.003). Conclusion: We did not find a statistically significant difference in LOS between ESB and NESB in subacute patients aged over 75. However, an association between longer LOS and a history of dementia, delirium or cognitive impairment; lower admission functional status; and discharge to home or residential care was found.


Introduction

Australia is a multicultural country. Persons from a Non-English Speaking Background (NESB) now comprise a large and growing proportion of Australia’s ageing population. [1] The 2011 census revealed that greater than thirty-two percent of households in Greater Melbourne reported speaking two or more languages at home. [2] Language barriers have the potential to impact multiple aspects of health care delivery for older people, including effects on diagnosis, prevention of complications, engagement in treatment decisions, and timely discharge planning. [3] These factors have been shown to increase hospital length of stay (LOS), which is undesirable for both patients and the health care system. [3]

Associations between NESB and LOS have been mostly investigated in acute care settings and younger populations, with mixed results. [3-8] A retrospective study in a Canadian Paediatric Emergency Department (ED) revealed longer LOS for families that did not speak English. [4] A prospective cohort study in another Paediatric ED in Chicago (USA) found a 20 minute longer stay for patients who spoke a different language to the clinician. [5] A retrospective study by John-Baptiste et al. in a heterogeneous inpatient population in Canada showed a 0.5 day longer LOS for patients with limited English proficiency. [3] Conversely, studies in a psychogeriatric setting in Western Melbourne and medical inpatient settings in Californian hospitals found no significant difference in LOS between English speaking background (ESB) and NESB patients. [6-8]

Elderly patients admitted to subacute care such as a rehabilitation ward or Geriatric Evaluation and Management (GEM) unit typically have a longer LOS to address complex needs. [9] A number of factors influencing LOS in subacute care have been identified, including pre-existing disability, cognitive impairment, recurrent falls, urinary incontinence and lack of supportive living arrangements. [10-12] However, the impact of a NESB in elderly patients on LOS in a subacute setting has not been investigated to our knowledge, and is of particular relevance in the context of an ageing and diverse population due to the resource implications of an extended LOS. [13]

Our primary aim was to investigate the relationship between language background (ESB vs. NESB) and LOS in older patients in a subacute setting in metropolitan Melbourne. Our secondary aim was to explore other factors associated with an increased LOS in this setting.

Methods

Study setting and participants

Monash Health caters for the South Eastern catchment area of Melbourne, which is the largest in Victoria in terms of population. The 2011 census found that 44.4% of people living within the City of Monash reported speaking a language other than English at home, compared to 29.1% in Greater Melbourne. [14] Monash Health services a large NESB population and as such, it is ideally placed to research the impact of linguistic diversity on health care delivery.

Study participants were older patients (aged 75+) admitted to any subacute medical ward within Monash Health. Subacute care in Monash Health is located across the South-Eastern Region in three centres. The term ‘subacute’ encompasses two inpatient streams: GEM and Rehabilitation (Rehab). GEM encompasses the subacute care of chronic or complex conditions associated with ageing, cognitive dysfunction, chronic illness or disability. It is conducted by a geriatrician and a multi-disciplinary team for a defined episode of care. [15] Rehabilitation aims to maximise independence and quality of life for people living with a disabling medical condition. Multidisciplinary care is provided in an inpatient setting with an aim to minimise long-term care needs and community support to bring about considerable cost savings both in acute health care, and in long-term social security. [15]

Study design

This project was a retrospective file audit of consecutive discharges from subacute wards between February 2012 and February 2013. Inclusion and exclusion criteria are detailed in Table 1.

[Table 1: Inclusion and exclusion criteria]

The lower age limit for our study was selected as being 75 because this age range captures the ‘old’ and ‘oldest-old’ categories, while excluding the ‘young-old’ (65-74 years) category who are likely to have less complex care needs.

Ascertainment of English speaking background

Language background status was ascertained from the patient admission cover sheet. Assessment of whether an interpreter was ‘required’ was made via allied health admission notes. The standardised Monash Health admission forms require the health care provider to indicate in a checkbox item whether an interpreter is required. The language status and requirement of an interpreter were corroborated with medical, nursing and allied health progress notes.

Length of stay data

Length of stay was calculated from the admission date and discharge date in the discharge summary of each participant record.
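The LOS calculation is a simple difference between the two recorded dates; a minimal sketch of that derivation is shown below. The dates are hypothetical and do not come from any patient record.

```python
# Illustrative sketch only: length of stay (LOS) derived from the
# admission and discharge dates recorded in a discharge summary.
# The dates below are hypothetical.
from datetime import date

def length_of_stay(admitted: date, discharged: date) -> int:
    """LOS in whole days between admission and discharge."""
    return (discharged - admitted).days

los = length_of_stay(date(2012, 3, 1), date(2012, 3, 22))
print(los)  # 21
```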

Other variables collected

Based on our literature review, the other variables collected were patient demographics (age, gender and primary language spoken) and clinical characteristics: admission type (GEM or Rehab), discharge destination, diagnosis on admission to subacute care, functional status (Functional Independence Measure (FIM) [16]), comorbidities (used to calculate the Charlson Comorbidity Score [17]), and a history of dementia, delirium, or cognitive impairment.

Statistical analysis

Extracted data were analysed descriptively in Excel and the following statistical tests were applied in SPSS (version 22) to assess differences between groups: Mann-Whitney U test for non-normally distributed continuous data, Chi-square test for categorical data, and independent-samples t-test for normally distributed continuous data.

Three comparative analyses were conducted. Comparison 1 sought associations between language background (ESB vs. NESB patients who required an interpreter) and LOS. Comparison 2 involved a subset analysis of NESB patients, where difference in LOS was investigated for NESB patients who did not require an interpreter compared with NESB patients who required an interpreter. Comparison 3 sought associations between prolonged LOS and all clinical/demographic variables collected, where long LOS was defined as ≥22 days (based on an average length of stay of 20.8 days for patients with orthopaedic impairments in New South Wales rehabilitation units in 2010). [18]
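The Mann-Whitney U statistic used for these LOS comparisons can be illustrated with a small pure-Python sketch. The LOS values below are hypothetical, not the study's data; a statistics package such as SPSS would convert U to a p-value.

```python
# Illustrative sketch only: the Mann-Whitney U statistic, the test the
# study applied to non-normally distributed LOS data. The values below
# are hypothetical and are NOT the study's data.

def mann_whitney_u(x, y):
    """U statistic for x vs y: count of pairs where x > y, plus half-ties."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

esb_los = [19, 8, 35, 12, 22]    # hypothetical LOS values (days)
nesb_los = [27, 14, 44, 21, 30]
u = mann_whitney_u(esb_los, nesb_los)
print(u)  # 7.0
```

The two U statistics always sum to the total number of pairs (here 5 × 5 = 25), which is a quick consistency check on the computation.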

Ethics

The project was approved as a Low/No Risk research activity by the Monash Health Ethics Committee (Ref: 13048L).

Results

Participant characteristics

There were 201 discharges from subacute settings within Monash Health between February 2012 and February 2013, of which 121 met the eligibility criteria. The average age of the patients was 83.2 years (SD=5.2). Male patients represented 46% of the sample. The languages most commonly spoken by NESB patients were Greek (16%) and Italian (13%).

The three most common primary diagnoses at admission were fractures from any cause (17%), stroke (ischaemic or haemorrhagic) (12%) and intracranial haemorrhage (8%). The median FIM score for ESB and NESB patients was 68.0 [46.3, 81.8] and 65.6 [42.0, 79.3] respectively (FIM score possible range: 18 to 126). 32% of ESB patients had a history of dementia, delirium or cognitive impairment compared to 29% of the NESB group.

As shown in Table 2, mean age, mean Charlson co-morbidity score, and distribution of admission care types were very similar for ESB and NESB groups. There was a small difference in gender distribution for ESB and NESB groups, but this was not statistically significant (χ²[2, n = 121] = 3.82, p = 0.15).

In terms of discharge destination, more patients from the NESB group were discharged home with supports (33%) compared to the ESB group (21%). A large proportion of both ESB and NESB patients were discharged to an acute inpatient unit; 55% and 53% respectively.

[Table 2: Participant characteristics by language background]

Length of stay

Comparison 1 revealed a median difference in LOS of 8.5 days between ESB patients and NESB patients who required an interpreter, although this was not statistically significant (Table 2, Figure 1). Comparison 2 sought differences in LOS between NESB patients who required an interpreter and those who did not (Table 2). The difference in LOS was not statistically significant (Figure 1).

[Figure 1: Length of stay comparisons]

The purpose of Comparison 3 was to identify other variables associated with LOS status (long LOS ≥ 22 days, short LOS < 22 days). The long LOS group was associated with a history of dementia/delirium/cognitive impairment, lower functional status (FIM score) at admission, and discharge to home or residential care (Table 3). No relationship was found between age or interpreter requirement and LOS status (Table 3).

[Table 3: Variables associated with short vs. long LOS]

Discussion

The study confirmed that a large number of Monash Health’s elderly subacute patients are from a Non-English Speaking Background (NESB), consistent with the proportion reported by 2011 Census data. [2]

NESB patients in Australia may find it difficult to communicate with the doctor and navigate the health care system. [19] This may be especially the case for elderly Non-English Speaking patients, as they have a high burden of chronic disease, disability and impairments, and have complex medical, functional and social needs. [20] A survey of Aged Care Assessment Service clinicians in Victoria cited the availability and quality of interpreters as a significant challenge in assessing culturally and linguistically diverse clients. [21] Communication difficulties have been found to make the assessment of cognitively impaired patients more challenging. [21]

Both Rehabilitation and GEM require high levels of patient cooperation and understanding in order to engage with multidisciplinary care, including physiotherapy, occupational therapy, and social work. [9] We hypothesised that NESB status would be associated with a longer LOS for elderly patients in subacute care due to language barriers faced during complex multi-disciplinary care. However, we did not find a significant difference in LOS between NESB and ESB patients. Comparison 1 found a trend towards a longer median LOS for NESB patients who required an interpreter compared with ESB patients, although this was not statistically significant.

Our finding of a similar LOS for ESB and NESB is not unique. A study of a psychogeriatric unit in Melbourne [6] and American studies of inpatients in acute medical settings found no difference in LOS between ESB and NESB patients. [7,8]

These findings are interesting because they are counterintuitive, and the reasons for the lack of relationship deserve further investigation. It is possible that health services that provide care for a large NESB population, such as Monash Health, already have effective procedures in place to support communication between staff and NESB patients and their families. Alternatively, it may be that NESB patients are consulted less often due to language barriers, and decision-making is conducted without involvement of the patient. While such practice could reduce LOS, it may result in lower satisfaction with care and quality of care. For example, research conducted in North America has shown language discordance in the physician-patient relationship may result in reduced satisfaction and poorer health outcomes. [22,23] Meanwhile, a study of NESB patients in a Queensland Emergency Department reported increased patient satisfaction when an interpreter was used compared with patients who did not utilise an interpreter. [24] Future studies of the subacute population could focus on satisfaction and health outcomes of NESB patients.

Although NESB was not associated with increased LOS in our study, we did identify a number of other factors associated with a longer LOS. When short LOS patients (<22 days) [18] were compared to long LOS patients, we found a significant association between a longer LOS and a history of dementia/delirium, lower admission FIM score and discharge destination. Those with a short LOS were more likely to be transferred to acute care and those with a long LOS to home or residential care.

One of the limitations of this study is that it was retrospective. Retrospective analyses suffer from the fact that the data being analysed were not originally collected for the purpose of the study. The accuracy of the data depends upon diligently prepared medical records by medical, nursing and allied health professionals. There were instances in this study where the language background of the patient was not clearly recorded in the patient’s medical record, which led to the file being excluded from the study. This study also revealed that allied health and nursing staff noted the language and social background of the patient in their notes more often than medical staff. In addition, the generalisability of this study is limited in that it was conducted within one health service and in one state of Australia. The power of the study is limited by the fact that only 121 patient records were studied.

A final limitation of the study is the high number of patients who were transferred back to an acute inpatient unit during their subacute stay (Tables 1 and 2). The LOS for some of these patients may have been underestimated as the majority actually returned to subacute and continued their rehabilitation following the resolution of their acute medical problem. Future work could include repetition of this analysis with LOS calculated using combined admission times and/or exclusion of cases where patients were transferred unexpectedly and did not return to subacute care.

Conclusion

Our study did not reveal a statistically significant difference in LOS between subacute inpatients aged over 75 of English speaking and non-English speaking backgrounds. Variables that were related to longer LOS were a history of dementia, delirium or cognitive impairment, lower admission functional status, and discharge to home or residential care.

Acknowledgements

To my co-authors and the Monash Health Medical Academic Scholarship Committee: This project would not have been possible without your encouragement and guidance. Thank you.

Scholarship: Ankit Gupta received a Monash Health Medical Academic Scholarship to complete this research project.

Conflict of interest

None declared.

Correspondence

A Gupta: ankitg420@gmail.com

References

[1] The Department of Health, Australian Government. National ageing and aged care strategy from Culturally and Linguistically Diverse (CALD) backgrounds [Internet]. 2012 [updated 2012 Dec 19; cited 2014 July 9]. Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/ageing-cald-national-aged-care-strategy-html

[2] Australian Bureau of Statistics. Greater Melbourne (Greater capital city statistical area), People – cultural & language diversity: Language [Internet]. 2011 [updated 2013 Mar 28; cited 2014 April 28]. Available from: http://www.censusdata.abs.gov.au/census_services/getproduct/census/2011/quickstat/2GMEL

[3] John-Baptiste A, Naglie G, Tomlinson G, Alibhai SM, Etchells E, Cheung A et al. The effect of English language proficiency on length of stay and in-hospital mortality. J Gen Intern Med 2004;19(3):221-8

[4] Goldman RD, Amin P, Macpherson A. Language and length of stay in the Pediatric Emergency Department. Pediatr Emerg Care. 2006;22(9):640-3

[5] Hampers LC, Cha S, Gutglass DJ, Binns HJ, Krug SE. Language barriers and resource utilization in a pediatric emergency department. Pediatrics. 1999;103(6):1253-6

[6] Hassett A, George K, Harrigan S. Admissions of elderly patients from English-speaking and non-English-speaking backgrounds to an inpatient psychogeriatric unit. Aust N Z J Psychiatry. 1999;33(4):576-82

[7] Grubbs V, Bibbins-Domingo K, Fernandez A, Chattopadhyay A, Bindman AB. Acute myocardial infarction length of stay and hospital mortality are not associated with language preference. J Gen Intern Med. 2007;23(2):190-4

[8] Karliner LS, Kim SE, Metlzer DO, Aurbach AD. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med. 2010;5(5):276-282

[9] Ward SA, Workman B. Multidisciplinary teamwork. In: Caplan G, editor. Geriatric Medicine: An introduction. 1st ed. IP Communications; 2014. p.30-46

[10] Anpalahan M, Gibson SJ. Geriatric syndromes as predictors of adverse outcomes of hospitalisation. Intern Med J. 2008;38(1):16-23

[11] Carpenter I, Bobby J, Kulinskaya E, Seymour G. People admitted to hospital with physical disability have increased length of stay: implications for diagnosis related group re-imbursement in England. Age Ageing. 2007;36(1):73-8

[12] Lang P, Heitz D, Hedelin G, Drame M, Jovenin N, Ankri J et al. Early markers of prolonged hospital stays in older people: A prospective, multicenter study of 908 inpatients in French acute hospitals. J Am Geriatr Soc. 2006;54(7):1031-9

[13] Campbell SE, Seymour DG, Primrose WR. A systematic literature review of factors affecting outcome in older medical patients admitted to hospital. Age Ageing. 2004;33(2):110-5

[14] Australian Bureau of Statistics. City of Monash: Language spoken at home [Internet]. 2011 [cited 2014 May 1] Available from: http://profile.id.com.au/monash/language

[15] State Government of Victoria, Department of Health. Sub-acute care services [Internet]. 2014 [updated 2014 Jan 23; cited 2014 May 1] Available from: http://health.vic.gov.au/subacute/overview.htm

[16] Tinetti ME. Performance-oriented assessment of mobility problems in elderly patients. J Am Geriatr Soc. 1986;34(2):119-26

[17] Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-83

[18] NSW Agency for Clinical Innovation, New South Wales Health. Rehabilitation redesign project: Final report – Model of care [Internet]. 2011 [updated 2011 Feb 21; cited 2014 Apr  17]  Available  from: http://www.archi.net.au/documents/resources/models/rehab_redesign/NSW_Health_Rehabilitation_Redesign_Final_Report_1.4.pdf

[19] Goldstein D, Bell ML, Butow P, Sze M, Vaccaro L, Dong S et al. Immigrants’ perception of the quality of their cancer care – An Australian comparative study, identifying potentially modifiable factors. Ann Oncol. 2014;25(8):1643-9

[20] Ward SA, Parikh S, Workman B. Health Perspectives: international epidemiology of ageing. Best Pract Res Clin Anaesthesiol. 2011;25(3):305-17

[21] Vrantsidis F, Logiudice D, Rayner V, Dow B, Antonopoulous S, Runci S et al. Aged care assessment service practitioners: a review of current practice for assessment of cognition of older people of culturally and linguistically diverse backgrounds in Victoria. Australas J Ageing. 2014;33(1):1-6

[22] Sarver K, Baker DW. Effect of language barriers on follow-up appointments after an emergency department visit. J Gen Intern Med. 2000;15(4):256-64

[23] Fox SA, Stein JA. The effect of physician-patient communication on mammography utilization by different ethnic groups. Med Care. 1991;29(11):1065-82

[24] Mahmoud I, Hou XY, Chu K, Clark M, Eley R. Satisfaction with emergency department service   among   non-English-speaking  background   patients.   Emerg   Med   Australas. 2014;26(3):256-61

Categories
Case Reports

Adolescent-onset metabolic syndrome

Obesity is a common cause of insulin resistance and the metabolic syndrome in adults; in recent years, however, these conditions have extended into much younger age groups. Associated conditions, including dyslipidaemia, type 2 diabetes mellitus, and cardiovascular complications, are major components of the metabolic syndrome. This case report describes a sixteen-year-old with features typical of adult-onset metabolic syndrome. The patient did not receive adequate treatment for three years after her initial diagnosis, which highlights the challenges of engaging with and managing this age group. This report discusses the use of a biopsychosocial approach to managing metabolic syndrome in the adolescent population.


Case

DT is a sixteen-year-old female who was referred to the emergency department by her general practitioner (GP) after she was found to have a blood glucose level of 14.1mmol/L. She was commenced on intravenous saline and short-acting insulin, and transferred to the paediatric ward.

DT had been diagnosed with a cluster of health problems collectively known as the metabolic syndrome at 13 years of age, but subsequently ceased prescribed medication and failed to attend follow-up appointments. Her co-morbidities at the time included type 2 diabetes mellitus (T2DM), dyslipidaemia, obesity, and non-alcoholic fatty liver disease. She was also found to have obstructive sleep apnoea and polycystic ovarian syndrome.

She reported that most of her adult relatives were overweight, but denied any family history of T2DM or other hereditary conditions. She had never smoked, consumed alcohol, or used recreational drugs.

Having emigrated from Samoa at age 11, DT said she had few friends, although she socialised within her church community. DT dealt with domestic violence in her immediate family, parental separation, and was responsible for the care of her seven siblings.

On examination, DT was severely obese with a body mass index (BMI) of 41.8kg/m2. Her vital signs were all within the normal ranges and she had no signs of diabetic ketoacidosis. Of significance was the presence of acanthosis nigricans on her neck, elbow creases, and axillae, indicating longstanding insulin resistance. She had a deep voice, but no other signs of hyperandrogenism.
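For reference, BMI is body weight in kilograms divided by the square of height in metres. The sketch below is illustrative only: the weight and height values are hypothetical, chosen to reproduce the reported BMI of 41.8 kg/m2, and the categories shown are the standard WHO adult cut-offs (adolescent classification normally uses age- and sex-specific BMI percentiles instead).

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_adult_class(bmi_value: float) -> str:
    """WHO adult BMI categories; adolescents are usually classified
    by age- and sex-specific percentiles rather than these cut-offs."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "obese class I"
    if bmi_value < 40:
        return "obese class II"
    return "obese class III (severe)"

# Hypothetical anthropometry chosen to reproduce the reported BMI
print(round(bmi(110.0, 1.622), 1))   # 41.8
print(who_adult_class(41.8))         # obese class III (severe)
```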

DT’s investigations revealed an HbA1c of 12.2% (reference range [RR] <6.5%), a fasting glucose level of 14.1 mmol/L (RR 4.0-6.0), alanine aminotransferase of 56 U/L (RR <30), aspartate aminotransferase of 44 U/L (RR <30), and gamma-glutamyl transferase of 64 U/L (RR <30). Ketones, fasting lipid profile, and thyroid function tests were all within the normal ranges, and no insulin autoantibodies were present. Urinalysis demonstrated glycosuria but not ketonuria.
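The pattern of screening results against reference ranges can be sketched as follows. The values and ranges are those quoted above (enzyme results assumed to be in U/L, the conventional unit), and the HbA1c conversion is the standard published NGSP-to-IFCC formula; the dictionary names are illustrative, not a laboratory standard.

```python
# Reference ranges quoted in the text: (lower, upper); None = no bound.
REFERENCE_RANGES = {
    "HbA1c (%)":              (None, 6.5),
    "Fasting glucose mmol/L": (4.0, 6.0),
    "ALT U/L":                (None, 30),
    "AST U/L":                (None, 30),
    "GGT U/L":                (None, 30),
}

RESULTS = {"HbA1c (%)": 12.2, "Fasting glucose mmol/L": 14.1,
           "ALT U/L": 56, "AST U/L": 44, "GGT U/L": 64}

def flag(name: str, value: float) -> str:
    """Flag a result as LOW, HIGH, or normal against its range."""
    low, high = REFERENCE_RANGES[name]
    if low is not None and value < low:
        return "LOW"
    if high is not None and value > high:
        return "HIGH"
    return "normal"

def ngsp_to_ifcc(hba1c_percent: float) -> float:
    """Convert HbA1c from NGSP (%) to IFCC (mmol/mol)."""
    return round((hba1c_percent - 2.15) * 10.929, 1)

for name, value in RESULTS.items():
    print(f"{name}: {value} -> {flag(name, value)}")
print("HbA1c:", ngsp_to_ifcc(12.2), "mmol/mol")  # 109.8 mmol/mol
```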

These results confirmed the previous diagnosis of T2DM. However, due to the resolution of dyslipidaemia and her normal blood pressure, DT no longer met the International Diabetes Federation criteria for metabolic syndrome. The deranged liver function tests were consistent with her previous diagnosis of non-alcoholic fatty liver disease.

DT was managed in a multidisciplinary setting involving a paediatrician, endocrinologist, diabetes educator, dietician, and a social worker. She received ongoing care from a local GP and the paediatric endocrinology hospital outpatient service. The GP initially checked her blood glucose level weekly and adjusted the metformin dosage (1 x 850 mg mane, 2 x 850 mg nocte) [1] as required. The allied health team provided her with a lifestyle plan to reduce her dietary energy intake, to include incidental exercise as part of a regular exercise regimen, and distraction strategies to address overeating. DT was also booked for appointments to monitor diabetes-related complications (ophthalmology, renal, and podiatry clinics). Due to difficulty locating an appropriate interpreter, DT’s mother was not actively involved in discussions regarding her ongoing management. This made it very difficult for the treating team to include DT’s family in the management plan, despite family involvement being a crucial component of adolescent care.

DT was involved in many discussions around her extensive management plan, yet upon discharge she asked, “What if I can’t?” Her self-doubt reflects a normal adolescent response to an overwhelming challenge and was compounded by the lack of family involvement in her care. It remains uncertain whether DT will attend any follow-up appointments.

Discussion

Adolescent obesity

With obesity rates in Australia remaining high, public attention has long been focused on the health impacts of the modern lifestyle. The 2007 Australian National Children’s Nutrition and Physical Activity Survey found that 17% of Australian children were overweight and 6% were obese. [2] Despite these figures, only a minority of Australian GPs routinely perform measurements such as height, weight, and calculation of body mass index in children, relying instead on visual inspection alone to assess weight. In addition, many GPs find it difficult to raise the issue of weight management with children and their families, resulting in delayed or absent dietary control and lifestyle modification. [3]

Adolescence is a time when the capacity to learn increases and new habits are adopted, yet the ability to self-regulate is not fully developed. [4] Overweight adolescents may desire the improved body image and self-esteem that weight loss might bring, but lack an understanding of the practical steps required to achieve that goal. [5]

The metabolic syndrome in the paediatric population

The metabolic syndrome is a term used to describe the co-occurrence of a range of metabolic risk factors including abdominal obesity, hyperglycaemia, dyslipidaemia, and hypertension. [6] While the overt disease is rare in the paediatric population, adult cardiovascular disease is more common in those who exhibited metabolic syndrome traits as children compared to those who did not. [7]

The International Diabetes Federation requires the presence of central obesity as well as two other metabolic abnormalities to reach a diagnosis of metabolic syndrome (Table 1). [8] While DT did not meet the full diagnostic criteria for metabolic syndrome on her current presentation, she previously fulfilled these criteria and has extensive metabolic derangements consistent with this syndrome, which is associated with complications including cardiovascular disease, non-alcoholic fatty liver disease, chronic kidney disease, and diabetic retinopathy. [6]

A New Zealand study of adolescents of Pacific Island ethnicity (including Samoan) found that although rates of overweight and obesity were high (40% and 36% respectively), only a small proportion had aberrant glucose metabolism. This is thought to reflect better insulin secretory reserves in these populations; the fact that DT has T2DM is therefore of particular concern, as she sits at the extreme end of an already at-risk population. [9] Early-onset T2DM is closely associated with hereditary risk factors such as increased BMI, a lower threshold for insulin resistance, and dyslipidaemia. [6] Given the established heritability of these conditions, it would be appropriate to test DT’s immediate family members for T2DM and dyslipidaemia.
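The IDF rule described above (central obesity as a mandatory component, plus at least two of the remaining four abnormalities) can be sketched as a simple check. This is a sketch of the decision logic only: the component names follow the IDF definition, but each boolean is assumed to have already been judged against the age- and ethnicity-specific cut-offs, which are not reproduced here.

```python
def meets_idf_criteria(central_obesity: bool,
                       raised_triglycerides: bool,
                       reduced_hdl: bool,
                       raised_blood_pressure: bool,
                       raised_fasting_glucose: bool) -> bool:
    """IDF metabolic syndrome: central obesity is mandatory, plus at
    least two of the four remaining components. Each flag is assumed to
    reflect the appropriate age/ethnicity-specific cut-off."""
    if not central_obesity:
        return False  # central obesity is a prerequisite
    others = [raised_triglycerides, reduced_hdl,
              raised_blood_pressure, raised_fasting_glucose]
    return sum(others) >= 2

# DT's current presentation: central obesity and raised glucose, but
# normal blood pressure and resolved dyslipidaemia -> criteria not met.
print(meets_idf_criteria(True, False, False, False, True))  # False
```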

[Table 1. International Diabetes Federation criteria for the metabolic syndrome – image not reproduced]

Managing metabolic risk factors

Optimal management of co-morbidities reduces both the occurrence and severity of complications. Regular monitoring should be undertaken, including assessment of blood pressure, waist circumference, fasting lipid profile, fasting blood glucose, urinalysis and renal function, HbA1c, visual acuity, and pedal sensation. [6] First-line management in individuals with obesity and obesity-associated complications includes weight loss through lifestyle interventions such as dietary and exercise modification, together with glycaemic control and optimisation of the lipid profile. Such monitoring may burden both the patient and healthcare providers, but it is an important secondary prevention strategy to reduce the risk of major long-term complications.

The identification of risk factors for metabolic complications is crucial in adolescents for two reasons: 1) many risk factors can be modified to reduce future disease burden; [6] 2) adolescents are more likely to misjudge their weight status and thus feel either overwhelmed or unable to recognise the need to make lifestyle adjustments. [10] Clinicians play an important role in providing support and initiating lifestyle changes.

Adolescent attitudes to chronic disease management

For DT, the prospect of dietary restriction, an exercise regime, daily medication, and multiple appointments may have appeared overwhelming. The transition from childhood to adolescence is marked by heightened social awareness and often a struggle to form an individual identity. [11] A study of adolescent females found that deviation from the BMI norm is associated with greater social anxiety, depression, and lower self-worth, all of which affect not only the mental health of the individual but also their engagement with healthcare professionals. [11] In DT, these factors may also impact on the day-to-day management of her health.

Another study investigating the experience of adolescents with T2DM found that three main factors influence the maintenance of health and eventual health outcomes: concept of illness, adjustment to diagnosis, and motivation to maintain good health. [12] The study suggests that an adolescent’s beliefs about both the cause of the condition and their ability to adhere to advice are shaped by motivation stemming from immediate and future consequences. If adolescents cannot yet fully understand the consequences, their motivation is sourced from family, health professionals, and their own perceptions of their health status. [10,12] In DT’s case, family dysfunction and a lack of continuity of care due to emigration may have contributed to her apparent lack of motivation to comply with health recommendations.

What went wrong in DT’s care?

Although several of DT’s health concerns were identified when she was thirteen years old, a combination of factors, including emigration and family dysfunction, meant that DT did not have adequate support. These issues might overwhelm an adult, and are further amplified in an adolescent who does not yet have the understanding and motivation to adhere to treatment. She might have been prevented from ‘falling through the gaps’ if a treating team in Australia had been established by her New Zealand doctor before she emigrated. With a comprehensive handover, DT may have been better supported by a team who at least had some information about her history. The central problem, however, is the family dysfunction, which has left her parents with very little insight into her medical issues. Given that she has seven siblings and her parents are estranged, her health concerns are also less likely to be managed outside of the hospital environment. This complex set of issues is difficult to address and may require support from a social worker and GP. Cultural issues including language, home life, and diet may be best evaluated with a home visit by a community nurse and the assistance of an interpreter. Cultural sensitivity is imperative to establishing rapport, so input from a Pacific Islander social worker may be beneficial.

The biopsychosocial approach

When addressing chronic disease, the biopsychosocial approach is appropriate for individuals of any age. It involves consideration of the medical aspects, which for DT include medication and specialist reviews, as well as the psychological and social factors that influence attitudes and behaviours. Traditionally, the focus has been on addressing lifestyle factors in the individual, although better long-term outcomes may be achieved by also addressing wider societal issues. [13] Family-centred models are the current mainstay of treatment and, in DT’s case, will require consideration of culturally appropriate ways to engage with her family, such as through social workers, interpreters, ethnic health workers, and members of her church community.

By addressing her individual concerns, which may include self-esteem and self-confidence, and by improving communication with her healthcare providers, DT may be given a better chance at improving her long-term health outcomes. As mentioned previously, by improving self-efficacy, adolescents such as DT are given the confidence in their own ability to manage their health, and thus are more likely to be able to sustain a healthy lifestyle.

It is important to consider DT’s Samoan origin, as factors such as family commitments, roles within the community, and societal expectations will influence her motivation and ability to improve her health. An investigation into the facilitators of healthy lifestyles in the Pacific Islands found that supportive role models and making physical activity more enjoyable were the most effective ways in which the health of communities could be improved. [14] These utilise the existing social structures of Pacific Island populations to provide motivation to make positive lifestyle choices and also support for long-term maintenance. Interventions should therefore focus on improving self-efficacy and providing realistic strategies. Motivational interviewing could be used by a GP to identify key goals for the individual patient to be achieved through a lifestyle plan. [4]

The increasing occurrence of typically adult-onset metabolic syndrome in children is a public health concern, and DT is a prime example of the potential for patients to ‘slip through the gaps’. While there are multiple public campaigns aimed at improving modifiable risk factors in the paediatric population, the rates of obesity and associated complications remain high. A further concern is the set of challenges unique to adolescent medicine, as these patients are dealing not only with chronic health issues but also with the changes in body and mind characteristic of that stage of life. This case demonstrates that a multi-faceted approach aimed at engaging, motivating, and empowering adolescents is required to optimise health outcomes in this population.

Acknowledgements

Special thanks to Dr Datta Joshi – Consultant Paediatrician, Monash Health

Consent declaration

Informed consent was obtained from the patient and parent for publication of this case report.

Conflict of interest

None declared.

Correspondence

N Ngu: natalielyngu@gmail.com

References

[1] Australian Government Department of Health. Metformin hydrochloride [Internet]. 2014 [cited 2014 May 31]. Available from: http://www.pbs.gov.au/medicine/item/1801T.

[2] Department of Health. 2007 Australian national children’s nutrition and physical activity survey – Key findings [Internet]. Australian Government; 2007 [cited 2014 Sep 23]. Available from: http://www.health.gov.au/internet/main/publishing.nsf/Content/phd-nutrition-childrens-survey-keyfindings.

[3] Cretikos MA, Valenti L, Britt HC, Baur LA. General practice management of overweight and obesity in children and adolescents in Australia. Med Care. 2008;46(11):1163-9.

[4] Fonesca H, Palmeira AL, Martins SC, Falcato L, Quaresma A. Managing paediatric obesity: a multidisciplinary intervention including peers in the therapeutic process. BMC Pediatr. 2014;14(89):1-8.

[5] Hardy LL, Hills AP, Timperio A, Cliff D, Lubans D, Morgan PJ, et al. A hitchhiker’s guide to assessing sedentary behaviour among young people: deciding what method to use. J Sci Med Sport. 2013;16:28-35.

[6] Meigs JB. The metabolic syndrome (insulin resistance syndrome or syndrome X) [Internet]. UpToDate. 2014. Available from: http://www.uptodate.com/contents/the-metabolic-syndrome-insulin-resistance-syndrome-or-syndrome-x

[7] Wake M, Clifford SA, Patton GC, Waters E, Williams J, Canterford L, et al. Morbidity patterns among the underweight, overweight and obese between 2 and 18 years: population-based cross-sectional analyses. Int J Obes. 2013;37:86-93.

[8] Van Grouw JM, Volpe SL. Childhood obesity in America. Curr Opin Endocrinol Diabetes Obes. 2013;20(5):396-400.

[9] Grant AM, Taungapeau FK, McAuley KA, Taylor RW, Williams SM, Waldron MA, et al. Body mass index status is effective in identifying metabolic syndrome components and insulin resistance in Pacific Island teenagers living in New Zealand. Metabolism. 2007;57:511-6.

[10] Fredrickson J, Kremer P, Swinburn B, de Silva-Sanigorski A, McCabe M. Biopsychosocial correlates of weight status perception in Australian adolescents. Body Image. 2013;10:552-7.

[11] Lanza HI, Echols L, Graham S. Deviating from the norm: body mass index (BMI) differences and psychosocial adjustment among early adolescent girls. J Pediatr Psychol. 2012;38(4):376-86.

[12] Salamon KS, Brouwer AM, Fox MM, Olson KA, Yelich-Koth SL, Fleischman KM, et al. Experiencing type 2 diabetes mellitus: quantitative analysis of adolescents’ concept of illness, adjustment and motivation to engage in self-care behaviours. Diabetes Educ. 2012;38:543-51.

[13] Pratt KJ, Lamson AL, Lazorick S, Swanson MS, Cravens J, Collier DN. A biopsychosocial pilot study of overweight youth and care providers’ perceptions of quality of life. Pediatr Nurs. 2011;26:61-8.

[14] Siefken K, Schofield G, Schulenkorf N. Laefstael jenses: an investigation of barriers and facilitators for healthy lifestyles of women in an urban Pacific Island context. J Phys Act Health. 2014;11:30-7.