Original Research Articles

General practitioner awareness of pharmacogenomic testing and drug metabolism activity status amongst the Black-African population in the Greater Western Sydney region

Background: Individuals of black-African background show high variability in drug metabolising enzyme polymorphisms. Consequently, unless these patients are tested for these polymorphisms, it is difficult to predict which patients may have a sub-therapeutic response to medications (such as antidepressants) or experience an adverse drug reaction. Given the increasing population of black-Africans in Australia, GPs are on the front line of this issue, especially in Greater Western Sydney (GWS), one of the country's fastest-growing regions due to migration. Aim: To ascertain the awareness of GPs regarding drug metabolising enzyme polymorphisms in the black-African population and pharmacogenomic testing in the GWS community. Methods: A descriptive, cross-sectional study was conducted in GWS by analysing GP responses to a questionnaire consisting of closed and open-ended questions. Results: A total of 46 GPs completed the questionnaire. It was found that 79.1% of respondents were unaware of the high variability in drug metabolising enzyme activity in the black-African population, and 79.5% were unaware of pharmacogenomic testing. No respondents had ever utilised pharmacogenomic testing. Only a small proportion of GPs "always" considered a patient's genetic factors (13.9%) and enzyme metaboliser status (11.1%) in clinical practice. Preferred education media for further information included written material, direct information from other health professionals (such as pharmacists) and verbal teaching sessions. Conclusion: There was a low level of awareness of enzyme metaboliser status and pharmacogenomic testing amongst GPs in GWS. A future recommendation to ameliorate this is further education provision through the variety of media noted in the study.


Introduction

Depression accounts for 13% of Australia's total disease burden, making it an important health issue in the current context. [1] General practitioners (GPs) are usually the first point of contact for patients seeking help for depression. [2,3] Antidepressant prescription is the most common form of treatment for depression in Australia, with GPs prescribing an antidepressant for up to 40% of all psychological problems. [2] This makes GP awareness of possible treatment resistance or adverse drug reactions (ADRs) to these medications vital.

Binder et al. [4] described pharmacogenomics as "the use of genome-wide approaches to elucidate individual differences in the outcome of drug therapy". Detection of clinically relevant polymorphisms in genetic expression can potentially be used to identify susceptibility to ADRs. [4] This would foster the application of personalised medicine by encouraging an inter-individual approach to medication and dose prescriptions based on an individual's predicted response to medications. [4,5]

Human DNA contains genes that code for 57 cytochrome P450 (CYP) isoenzymes; these are a clinically important family of hepatic and gastrointestinal isoenzymes responsible for the metabolism of over 70% of clinically prescribed drugs. [5-10] The CYP family of enzymes is susceptible to polymorphisms as a result of genetic variations, influenced by factors such as ethnicity. [5,6,10] Research has shown that polymorphisms in certain CYP drug metabolising enzymes can result in phenotypes that class individuals as "ultrarapid metabolisers (UMs), extensive metabolisers (EMs), intermediate metabolisers (IMs) and poor metabolisers (PMs)". [6,10] These categories are clinically important as they determine whether or not a drug stays within the therapeutic range. Individuals with PM status may be susceptible to ADRs as a result of toxicity; conversely, those with UM status may not receive a therapeutic effect. [5,6,10,11]

When considering the metabolism of antidepressants, the highly polymorphic enzymes CYP2C19 and CYP2D6 are known to be involved. [5,10,12] A study by Xie et al. [13] has shown that, for the CYP2D6 enzyme alone, allelic variations induce polymorphisms that result in a PM phenotype in "~1%" of Asian populations, "0-5%" of Caucasians and "0-19%" of black-African populations. This large disparity of polymorphism phenotypes was reproduced in a recent study, which also showed that the variation is not exclusive to the CYP2D6 enzyme. [6] It has been reported that the incidence of ADRs among PMs treated with drugs such as antidepressants is 44%, compared to 21% in other patients. [5,14] Consequently, increased costs have been associated with the management of UM or PM patients. [5]

The black-African population in Australia, and specifically in Sydney (where GWS is one of the fastest growing regions), continues to rise through migration and humanitarian programs. [15-18] Almost 30% of Africans settling in Australia in the decade leading to 2007 did so under humanitarian programs, including as refugees. [15-17] As refugees are at a higher risk of mental health problems, including depression, due to their traumatic histories and post-migratory difficulties, GPs in GWS face increased clinical interactions with black-Africans at risk of depression. [19,20] Considering the high variability of enzyme polymorphisms in this population, pharmacogenomic testing may play a role in the primary care of these patients. We therefore conducted a study to assess GP awareness of pharmacogenomic testing and of the differences in enzyme metaboliser status (drug metabolism phenotypes). We also investigated GPs' preferred media for future education on these topics.

Methodology

Study Design and Setting

This is a descriptive, cross-sectional study. Ethics approval was granted by the Human Research Ethics Committee.

Considering GWS is the fastest growing region in Sydney, we focussed on particular suburbs in GWS (the Blacktown, Parramatta and Holroyd Local Government Areas). [17-20] Using geographical cluster sampling, a list of GP practices was identified with the aim of recruiting 50 participants.

Study tool

Data was collected using a questionnaire validated by university supervisors and designed to elicit the level of understanding and awareness among GPs. The main themes of the questionnaire involved: questions regarding basic demographic information; questions aimed at determining the level of GP awareness regarding differences in drug metabolising phenotypes and pharmacogenomic testing; and open-ended questions eliciting the preferred methods of education with respect to pharmacogenomic testing.

Data Collection

We invited 194 GPs between April and May 2014 to participate in the study. The questionnaire and participant information sheet were either given to the practice managers or to the GPs in person. Questionnaires were collected in person within the following two weeks.

Data Analysis

Data was analysed using SPSS (version 22, IBM Australia). Descriptive statistics were used to summarise findings, with p-values calculated using chi-square analysis (with Yates' correction) to compare two sets of data. A p-value of <0.05 indicated statistical significance.
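As an illustration of this type of analysis, the sketch below (Python with SciPy, using hypothetical counts rather than study data) shows how a 2x2 comparison with Yates' continuity correction can be computed; the variable names and counts are assumptions for the example only.

# Minimal sketch of a 2x2 chi-square test with Yates' continuity correction.
# The counts are hypothetical placeholders, not data from this study.
from scipy.stats import chi2_contingency

# Rows: GPs who had / had not treated black-African patients
# Columns: aware / unaware of drug metabolism phenotypes
observed = [[4, 15],
            [2, 13]]

chi2, p_value, dof, expected = chi2_contingency(observed, correction=True)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")  # compared against the 0.05 threshold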

Results

The overall response rate was 23.7% (46/194). Our respondents comprised 27 females and 19 males. The mean number of years of experience in general practice was 13.9, and most GPs (93.4%, 43/46) had received some form of training in antidepressant prescription in the last 5 years. The number of patients of black-African background seen in the last 6 months ranged from 0 to greater than 100. Only 26.1% (12/46) of GPs reported no consultations with a patient of black-African background within this timeframe. Of the 73.9% (34/46) of GPs who had seen at least one patient from this cohort, 55.9% (19/34) had treated at least one patient for depression with antidepressants.

GPs' experience of ADRs in patients of black-African background treated for depression

Of the 46 participants, 19 had treated a patient of black-African background with antidepressants; 18 of these 19 reported having identified at least one ADR (Figure 1).


GP awareness and consideration of drug metabolism activity status and genetic factors

Awareness amongst GPs of the different drug metabolism activity phenotypes in black-Africans was low, with 79.1% (34/43) being unaware. Patients' genetic factors and enzyme metaboliser status were "always" considered by only 13.9% (5/36) and 11.1% (4/36) of GPs, respectively. There was no statistically significant difference in awareness between GPs who had treated black-African patients and those who had not (21.1% vs 13.3% respectively, p=0.89).

GP awareness and use of pharmacogenomic testing

Awareness of methods for testing a patient's key drug metabolising enzymes, also known as pharmacogenomic testing, was extremely low, with 79.5% (35/44) of GPs being unaware of the testing methods available in Australia. Of the 20.5% of GPs (9/44) who were aware, none had utilised pharmacogenomic testing for their black-African patients. These nine GPs then nominated factors that would influence their utilisation of pharmacogenomic testing in these individuals. Three main categories of influence emerged (Table 1). When specifically asked whether they would be more inclined to utilise pharmacogenomic testing in black-African patients who had previously experienced ADRs, 88.9% (8/9) of GPs stated that they would be.


Preferred education media

GPs who were aware of pharmacogenomic testing were asked, through an open-ended question, how they obtained information regarding these methods. Three main categories were identified based on their responses (Table 2). All GPs were then asked to note down their preferred medium of education for pharmacogenomic testing (Table 3). Multiple responses were allowed.


Discussion

This study showed that there is a low level of awareness amongst GPs regarding pharmacogenomic testing and the differences in drug metabolism phenotypes. Additionally, we identified the preferred education media for providing information to GPs (Table 3). Awareness of pharmacogenomic testing and of the differences in drug enzyme metaboliser status (phenotype) could be valuable in the clinical setting. Improved patient outcomes have been noted when doctors are able to personalise management based on information from pharmacogenomic testing, [21] with Hall-Flavin et al. [21] noting significant improvement from baseline in depression scores amongst patients with depression whose doctors were provided with information on pharmacogenomics.

A previous study reported that a high proportion (97.6%) of physicians agreed that differences in genetic factors play a major role in drug responses. [22] Whilst it is arguable that a general awareness of genetic factors playing a role in drug response may be universal, our study specifically focussed on knowledge of the differences in enzyme metaboliser status. It was found that 79.1% of GPs (34/43) were unaware, with only a small number of GPs "always" considering enzyme metaboliser status (11.1%) in their management. Given the aforementioned importance of genetic factors and the potential to reduce ADRs using personalised medicine, this is an area for improvement.

When considering pharmacogenomic testing, we found 79.5% (35/44) of GPs to be unaware of testing methods. No GP had ever utilised pharmacogenomic testing; this low rate of utilisation has also been reported in several other studies. [22-24] A lack of utilisation and awareness arguably forms a barrier against the effective incorporation of personalised medicine in the primary care setting. These low figures likely reflect a lack of education regarding pharmacogenomics and its clinical applications. This is an issue that has been recognised since the arrival of these testing methods. [25] McKinnon et al. [25] highlighted that this lack of education across healthcare professionals is significant enough to be considered a "barrier to the widespread uptake of pharmacogenomics". To ameliorate the situation, the International Society of Pharmacogenomics issued recommendations in 2005 for pharmacogenomics to be incorporated into medical curricula. [26] Another contributing factor to the low utilisation of testing could be the lack of subsidised tests available through Medicare. Currently, some pathology laboratories (such as Douglas Hanley Moir and Healthscope) do provide pharmacogenomic testing; however, this is largely at the patient's expense, as only two methods are subsidised by Medicare. [23,27,28]

Amongst those aware of pharmacogenomic testing, eight out of nine GPs answered that they would be more likely to utilise pharmacogenomic testing in black-African patients who had previously experienced ADRs; this is consistent with findings noted by van Puijenbroek et al. [29] Among these GPs, factors noted to be potential influences on their utilisation of testing included patient factors such as compliance, the reliability of the test, and factors affecting the clinical picture (as described in Table 1). This is consistent with studies that have also identified cost and a patient's individual response to drugs as influential factors in a physician's decision making. [29,30]

Considering that the majority of information regarding enzyme metabolism and pharmacogenomic testing has been published in pharmacological journals, [6,8-14,30-32] much of this knowledge may not have been passed on to GPs. In order to understand the preferred media of information for GPs, we posed open-ended questions and discovered that the majority of GPs who answered the question (32/39) would prefer information in written form (Table 3). This could be either in the form of online sources (such as guidelines, summaries, the National Prescribing Service or the Monthly Index of Medical Specialities) or peer-reviewed journal articles. Current literature also reflects this preference for GPs to gain education regarding pharmacogenomics through journal articles. [22] The other preferred medium of education was through verbal teaching, peer discussions and presentations (Table 3), with specific interest in information being disseminated by clinical pathology laboratories; this is also reflected in the literature. [22,29]

Strengths and limitations

The small sample size is a limitation of this study; possible contributing factors include the short time allowed for data collection and the low response rate due to GP time constraints. Strengths of the study include the use of a validated questionnaire catered to our target population and open-ended questions which gave us further insight into GP preferences.

Implications and future research

Currently, anticoagulants provide an example of the clinical application of considering enzyme polymorphisms in patient management. [33,34] Warfarin is a particular example where variability in INR has been associated with enzyme polymorphisms, leading to the utilisation of dosage algorithms to optimise clinical outcomes. [34] Similarly, when using antidepressants, pharmacogenomic testing could play a role in clinical decision making, with Samer et al. [5] suggesting dose reductions and serum monitoring for those with known PM status. However, as identified in our study, there is an overall lack of awareness regarding the differences in enzyme metaboliser status and the methods available for pharmacogenomic testing.

Future studies should focus on the clinical practicality of utilising these tests. Additionally, future studies should determine the effectiveness of the identified GP preferred modalities of education in raising awareness.

Conclusion

There is a low awareness among GPs regarding both the differences in enzyme metaboliser status in the black-African community, and the methods of pharmacogenomic testing.

To optimise clinical outcomes in black-African patients with depression, it may be useful to inform GPs of the availability and application of pharmacogenomic testing. We have highlighted the preferred education modalities through which this may be possible.

Acknowledgements

We would like to acknowledge and thank Dr. Irina Piatkov for her support as a supervisor during this project.

Conflict of interest

None declared.

Correspondence

Y Joshi: 17239266@student.uws.edu.au

References

[1] Australian Institute of Health and Welfare. The burden of disease and injury in Australia 2003 [Internet]. 2007 [cited 2014 April 25]. Available from: http://www.aihw.gov.au/publication-detail/?id=6442467990

[2] Charles J, Britt H, Fahridin S, Miller G. Mental health in general practice. Aust Fam Physician. 2007;36(3):200-1.

[3] Pierce D, Gunn J. Depression in general practice: consultation duration and problem solving therapy. Aust Fam Physician. 2011;40(5):334-6.

[4] Binder EB, Holsboer F. Pharmacogenomics and antidepressant drugs. Ann Med. 2006;38(2):82-94.

[5] Samer CF, Lorenzini KI, Rollason V, Daali Y, Desmeules JA. Applications of CYP450 testing in the clinical setting. Mol Diagn Ther. 2013;17(3):165-84.

[6] Alessandrini M, Asfaha S, Dodgen MT, Warnich L, Pepper MS. Cytochrome P450 pharmacogenetics in African populations. Drug Metab Rev. 2013;45(2):253-7.

[7] Yang X, Zhang B, Molony C, Chudin E, Hao K, Zhu J et al. Systematic genetic and genomic analysis of cytochrome P450 enzyme activities in human liver. Genome Res. 2010;20(8):1020-36.

[8] Zanger UM, Schwab M. Cytochrome P450 enzymes in drug metabolism: Regulation of gene expression, enzyme activities and impact of genetic variation. Pharmacol Therapeut. 2013;138(1):103-41.

[9]  Guengerich  FP.  Cytochrome  P450  and  chemical  toxicology.  Chem  Res  Toxicol. 2008;21(1):70-83.

[10] Ingelman-Sundberg M. Genetic polymorphisms of cytochrome P450 2D6 (CYP2D6): clinical consequences, evolutionary aspects and functional diversity. Pharmacogenomics J. 2005;5:6-13.

[11] Zhou S. Polymorphism of human cytochrome P450 2D6 and its clinical significance. Clin Pharmacokinet. 2009;48(11):689-723.

[12] Li-Wan-Po A, Girard T, Farndon P, Cooley C, Lithgow J. Pharmacogenetics of CYP2C19: functional and clinical implications of a new variant CYP2C19*17. Br J Clin Pharmacol. 2010;69(3):222-30.

[13] Xie HG, Kim RB, Wood AJJ, Stein CM. Molecular Basis of ethnic differences in drug disposition and response. Ann Rev Pharmacol Toxicol. 2001;41:815-50.

[14] Chen S, Chou WH, Blouin RA, Mao Z, Humphries LL, Meek QC et al. The cytochrome P450  2D6  (CYP2D6)  enzyme  polymorphism:  screening  costs  and  influence on  clinical outcomes in psychiatry. Clin Pharmacol Ther. 1996;60(5):522–34.

[15] Hugo G. Migration between Africa and Australia: a demographic perspective – Background paper for African Australians: A review of human rights and social inclusion issues. Australian Human Rights Commission [Internet]. 2009 Dec [cited 2014 April 26]. Available from: https://www.humanrights.gov.au/sites/default/files/content/Africanaus/papers/Africanaus_paper_hugo.pdf

[16] Joint Standing Committee on Foreign Affairs, Defence and Trade. Inquiry into Australia's relationship with the countries of Africa [Internet]. 2011 [cited 2014 April 26]. Available from: http://www.aph.gov.au/Parliamentary_Business/Committees/House_of_Representatives_Committees?url=jfadt/africa%2009/report.htm

[17] Census 2006 – People born in Africa [Internet]. Australian Bureau of Statistics; 2008 August 20 [updated 2009 April 14; cited 2014 April 26]. Available from: http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/3416.0Main+Features32008

[18] Greater Western Sydney Economic Development Board. Some national transport and freight infrastructure priorities for Greater Western Sydney [Internet]. Infrastructure Australia; 2008 [cited 2014 April 25]. Available from: http://www.infrastructureaustralia.gov.au/public_submissions/published/files/368_greaterwesternsydneyeconomicdevelopmentboard_SUB.pdf

[19] Furler J, Kokanovic R, Dowrick C, Newton D, Gunn J, May C. Managing depression among ethnic communities: a qualitative study. Ann Fam Med. 2010;8:231-6.

[20] Robjant K, Hassan R, Katona C. Mental health implications of detaining asylum seekers: systematic review. Br J Psychiatry. 2009;194:306-12.

[21] Hall-Flavin DK, Winner JG, Allen JD, Carhart JM, Proctor B, Snyder KA et al. Utility of integrated pharmacogenomic testing to support the treatment of major depressive disorder in a psychiatric outpatient setting. Pharmacogenet Genomics. 2013;23(10):535-48.

[22] Stanek EJ, Sanders CL, Taber KA, Khalid M, Patel A, Verbrugge RR et al. Adoption of pharmacogenomics testing by US physicians: results of a nationwide survey. Clin Pharmacol Ther. 2012;91(3):450-8.

[23] Sheffield LJ, Phillimore HE. Clinical use of pharmacogenomics tests in 2009. Clin Biochem Rev. 2009;30(2):55-65.

[24] Corkindale D, Ward H, McKinnon R. Low adoption of pharmacogenetic testing: an exploration and explanation of the reasons in Australia. Pers Med. 2007;4(2):191-9.

[25] McKinnon R, Ward M, Sorich M. A critical analysis of barriers to the clinical implementation of pharmacogenomics. Ther Clin Risk Manag. 2007;3(5):751-9.

[26] Gurwitz D, Lunshof J, Dedoussis G, Flordellis C, Fuhr U, Kirchheiner J et al. Pharmacogenomics education: International Society of Pharmacogenomics recommendations for medical, pharmaceutical, and health schools deans of education. Pharmacogenomics J. 2005;5(4):221-5.

[27] Pharmacogenomics [Internet]. Healthscope Pathology; 2014 [cited 2014 October 22]. Available from: http://www.healthscopepathology.com.au/index.php/advancedpathology/pharmacogenomics/

[28] Overview of Pharmacogenomic testing. Douglas Hanley Moir Pathology; 2013 [cited 2014 October 22]. Available from: http://www.dhm.com.au/media/21900626/pharmacogenomics_brochure_2013_web.pdf

[29] van Puijenbroek E, Conemans J, van Groostheest K. Spontaneous ADR reports as a trigger for pharmacogenetic research: a prospective observational study in the Netherlands. Drug Saf. 2009;32(3):225-64.

[30] Rogausch A, Prause D, Schallenberg A, Brockmoller J, Himmel W. Patients' and physicians' perspectives on pharmacogenetic testing. Pharmacogenomics. 2006;7(1):49-59.

[31] Aklillu E, Persson I, Bertilsson L, Johansson I, Rodrigues F, Ingelman-Sundberg M. Frequent distribution of ultrarapid metabolizers of debrisoquine in an Ethiopian population carrying duplicated and multiduplicated functional CYP2D6 alleles. J Pharmacol Exp Ther. 1996;278(1):441-6.

[32] Bradford LD. CYP2D6 allele frequency in European Caucasians, Asians, Africans and their descendants. Pharmacogenomics. 2002;3:229-43.

[33] Cresci S, Depta JP, Lenzini PA, Li AY, Lanfear DE, Province MA et al. Cytochrome P450 gene variants, race, and mortality among clopidogrel-treated patients after acute myocardial infarction. Circ Cardiovasc Genet. 2014;7(3):277-86.

[34] Becquemont L. Evidence for a pharmacogenetic adapted dose of oral anticoagulant in routine medical practice. Eur J Clin Pharmacol. 2008;64(10):953-60.

Original Research Articles

Adequacy of anticoagulation according to CHADS2 criteria in patients with atrial fibrillation in general practice – a retrospective cohort study

Background: Atrial fibrillation (AF) is a common arrhythmia associated with an increased risk of stroke. Strategies to reduce stroke incidence involve identification of at-risk patients using scoring systems such as the CHADS2 score (Congestive Heart Failure, Hypertension, Age ≥75 years, Diabetes, prior Stroke) to guide pharmacological prophylaxis. Aim: The aim of this research project was to determine the prevalence and management of AF patients within the general practice (GP) setting and to assess the adequacy of anticoagulation or antiplatelet prophylaxis according to the CHADS2 score. Methods: This study was a retrospective cohort study of 100 AF patients ≥50 years conducted at a South Coast NSW Medical Centre over a 3-year period. Data was obtained from existing medical records. CHADS2 scores were determined at baseline, 12 months and 3 years and were compared with medications to assess whether patients were undertreated, adequately treated or over-treated according to their CHADS2 score. Results: Prevalence of AF in patients >50 years was 5.8%. At baseline, 65% of patients (n=100) were at high risk of stroke (CHADS2 score ≥2). This increased to 75.3% of patients at 12 months (n=89) and 78.4% of patients at 3 years (n=60). Adequate treatment occurred in 79.0% of patients at baseline, and 83.1% and 76.7% at 12 months and 3 years, respectively. There were three instances of stroke or transient ischaemic attack during the study period. Conclusion: GPs play a critical role in prevention of stroke in patients with AF. Adequate pharmacological interventions occurred in the majority of cases; however, identification and treatment of at-risk patients could be further improved.


Introduction

Atrial fibrillation (AF) is the most common cardiac arrhythmia in Australia, affecting 8% of the population over the age of 80 years. [1,2]  The morbidity and mortality associated with AF is primarily due to an increased risk of thromboembolic events such as stroke, with studies reporting up to a five-fold increase in the annual risk of stroke among patients with AF who have not received prophylaxis with either anticoagulant or antiplatelet therapies. [3,4]

It has been demonstrated that the incidence of stroke in patients with AF can be significantly reduced with the use of pharmacological agents such as anticoagulant and antiplatelet medications, including warfarin and aspirin, respectively. [5] More recently, new oral anticoagulant (NOAC) medications such as dabigatran and rivaroxaban have also been approved for use in patients with AF. [6] However, several studies indicate that the use of anticoagulants and antiplatelets for the prevention of thromboembolic events is often underutilised. [7,8] It is estimated that up to 51% of patients eligible for anticoagulant therapy do not receive it. [9] Furthermore, an estimated 86% of patients who suffered from AF and had a subsequent stroke were not receiving adequate anticoagulation therapy following their AF diagnosis. [10]

In contrast, pharmacological treatments for stroke prophylaxis have been associated with an increased risk of intracerebral haemorrhage, particularly amongst the elderly. [11]  A study of 170 patients with AF over the age of 85 years demonstrated that the rate of haemorrhagic stroke was 2.5 times higher in those receiving anticoagulant therapy compared to controls (OR=2.5, 95% CI: 1.3-2.7). [12]  Therefore, the need to optimise the management of patients with AF in the general practice (GP) setting is of high importance for stroke prevention and requires an individualised pharmacological approach in order to achieve a balance between stroke reduction and bleeding side effects.

Consequently, validated risk stratification tools such as the CHADS2 score (Congestive Heart Failure, Hypertension, Age ≥75 years, Diabetes, previous Stroke or Transient Ischaemic Attack (TIA)) have been developed. By assessing co-morbidities and additional risk factors, these tools enable more accurate identification of AF patients who are at an increased risk of stroke and help determine the appropriateness of anticoagulation or antiplatelet prophylaxis to reduce the risk of thromboembolic events. [13]

The aim of this research project was to determine the prevalence of AF among patients within a GP cohort and to assess the adequacy of pharmacological stroke prophylaxis according to the CHADS2  criteria. The results of this study will enable GPs to determine whether the current management of patients with AF is adequate and whether closer follow-up of these patients needs to occur in order to minimise associated bleeding and stroke complications.

Methods

Study design and ethics

This study was a retrospective cohort study of the prevalence, patient characteristics and adequacy of anticoagulation according to the CHADS2 score in GP patients with AF over a 3-year period. The study was approved by the University of Wollongong Human Research Ethics Committee (Appendix 1, HREC 13/031).

Participants

Participants were identified by searching the practice database (Best Practice, Version 1.8.3.602, Pyefinch Software Pty Ltd) at a South Coast NSW Medical Centre. Search criteria included any patient (recorded as alive or deceased) who attended the practice with a recorded diagnosis of AF over a 3-year period (November 2010 to November 2013) and was ≥50 years of age. This included both patients with long-term AF diagnosed before the study period and those newly diagnosed with AF during the study period. The total number of all patients aged ≥50 years who attended the practice at least once during the same period was recorded to determine the prevalence of AF at the practice.

Exclusion Criteria

Exclusion criteria included patients <50 years of age, patients with incomplete medical records or those diagnosed with AF who subsequently moved from the practice during the study period.

 

CHADS2  score

The CHADS2 score was chosen for the purpose of this study as it is a validated risk-stratification tool for patients with AF. [13-15] The scoring system assigns one point each for the presence of Congestive Heart Failure, Hypertension, Age ≥75 years or Diabetes, and assigns two points if a patient has a history of previous Stroke or TIA. AF patients with a CHADS2 score of 0 are considered to be at low risk of a thromboembolic event (0.5 – 1.7% per year stroke rate); a score of 1 indicates intermediate risk (2.0% per year stroke rate) and a score ≥2 indicates high risk (4.0% per year stroke rate). [16]
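As an illustrative sketch only (Python; the function and field names are assumptions of this example rather than terms from the practice database), the scoring rule and risk bands described above can be expressed as:

# Sketch of the CHADS2 scoring rule and risk bands described above.
def chads2_score(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    score = 0
    score += 1 if chf else 0                   # Congestive Heart Failure
    score += 1 if hypertension else 0          # Hypertension
    score += 1 if age >= 75 else 0             # Age >= 75 years
    score += 1 if diabetes else 0              # Diabetes
    score += 2 if prior_stroke_or_tia else 0   # Previous Stroke or TIA
    return score

def risk_category(score):
    # 0 = low risk, 1 = intermediate risk, >= 2 = high risk (per the text above)
    if score == 0:
        return "low"
    return "intermediate" if score == 1 else "high"

# Example: a 78-year-old with hypertension and diabetes, no CHF or prior stroke/TIA
print(chads2_score(False, True, 78, True, False))  # 3
print(risk_category(3))                            # high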

Data Search and Extraction

Patient data was manually extracted from individual patient records, coded and recorded into a spreadsheet (Microsoft Excel, 2007). Basic data including date of birth and sex were recorded. Date of AF diagnosis (assessed as the first documented episode of AF within the patient record) and co-morbidities including hypertension, congestive heart failure, diabetes, stroke or TIA were included if documented within the patient medical record. Correspondence from specialists and hospital discharge summaries were also analysed for any diagnosis made outside of the medical centre and not subsequently recorded in the medical record.

Lifestyle factors were recorded from the practice database, including alcohol use (light/moderate/heavy or none) and smoking status (non-smoker, ex-smoker or current smoker). Complications arising from pharmacological prophylaxis (including any documented bleeding or side-effects) or discontinuation of treatments were included. Individual patient visits were analysed for any documented non-compliance with medications. Where possible, cause of death was also recorded.

Adequacy of Anticoagulation

Individual CHADS2 scores were determined for each patient at baseline, 12 months and 3 years. At each of these time points, CHADS2 scores were compared to each patient's medication regime (i.e. no medication use, an anticoagulant agent or an antiplatelet agent). The use of other medications for the treatment of AF (for example, agents for rate or rhythm control) was not assessed. Patients were then classified as being undertreated, adequately treated or over-treated according to the CHADS2 score obtained at baseline, 12 months and 3 years as per the current therapeutic guidelines (Figure 1). [17]


Adequate treatment was considered to be patients receiving treatments in accordance with the therapeutic guidelines. [17] Undertreated patients included those who received no treatment when an oral anticoagulant was indicated (CHADS2 score ≥2). Over-treated patients included those treated with an oral anticoagulant where it was not indicated according to the current guidelines (CHADS2 score = 0).

Statistical Analysis

Results are presented as mean ± standard deviation. A p-value of <0.05 was considered to be statistically significant. One-way ANOVA was used to assess between-group differences in CHADS2 scores at each time point (baseline, 12 months and 3 years). Descriptive data is presented where relevant. Prevalence of AF at the practice was calculated using the formula: (patients with AF aged ≥50 years / total number of patients aged ≥50 years at the practice) × 100.
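As a minimal illustration of these calculations (Python with SciPy; the CHADS2 score lists and the practice denominator below are hypothetical placeholders, with the denominator chosen only so the worked prevalence matches the reported 5.8%):

# Sketch of the prevalence calculation and one-way ANOVA described above.
from scipy.stats import f_oneway

af_patients_50_plus = 346          # patients with AF aged >= 50 years
practice_patients_50_plus = 5966   # hypothetical total of patients aged >= 50 years
prevalence = af_patients_50_plus / practice_patients_50_plus * 100
print(f"AF prevalence: {prevalence:.1f}%")      # ~5.8%

baseline = [2, 1, 3, 2, 0, 4]      # hypothetical CHADS2 scores at each time point
month_12 = [2, 2, 3, 2, 1, 4]
year_3   = [3, 2, 4, 3, 1, 5]
f_stat, p_value = f_oneway(baseline, month_12, year_3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")   # significant if p < 0.05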

 

 

Results

A total of 346 patients with AF aged ≥50 years were identified. Of these, 246 were excluded (n=213 due to insufficient data within their medical record and n=33 who had left the practice during the study period), leaving a total of 100 patients for inclusion in the analysis (Figure 2). Due to the nature of the search strategy (which identified any patient with AF during the period November 2010 to November 2013), both newly diagnosed patients and patients with long-term AF were included in the analysis. Therefore, long-term data was available for n=89 participants at 12 months and n=60 participants at 3 years. There were no statistically significant differences in age (p=0.91) or sex (p=0.86) between the included and excluded participants.


Including all patients initially identified with AF (n=346), the overall prevalence of AF among patients at the practice was 5.8%. Participant characteristics are presented in Table 1. The mean age of participants at diagnosis was 74.9 ± 10.0 years, with more males suffering from AF (60%) than females (40%). Over half of patients had a history of smoking (57%), and hypertension was the most common co-morbidity (74%). Thirteen percent of participants were listed within the practice database as being deceased.


 

At baseline, 65.0% of patients were classified as being at high risk of stroke (CHADS2 score ≥2). This increased to 75.3% of patients and 78.4% of patients at 12 months and 3 years, respectively (Graph 1). There were no patients with a CHADS2 score of 6 at any of the study time points. Analysis of participants who had 3-year follow-up data available (n=60) demonstrated a statistically significant increase in average CHADS2 scores between baseline and 12 months (p<0.05) and between baseline and 3 years (p<0.01). There was no statistically significant difference in CHADS2 scores between 12 months and 3 years (p=0.54).


Graph 2 demonstrates changes in treatment adequacy over time based on patients' initial treatment group allocation at baseline. For patients initially identified as undertreated at baseline, there was a trend toward adequate treatment by 3 years. For patients initially identified as over-treated at baseline, the trend towards adequate treatment occurred more rapidly (on average by 12 months), although this was not statistically significant.


Patient pharmacological treatments and adequacy of treatment at baseline, 12 months and 3 years are shown in Table 2.


 

There were several reported side-effects and documented instances of cessation of anticoagulation and antiplatelet therapy. A total of eight patients were non-compliant and ceased warfarin during the study period, and eight patients had their warfarin ceased by their treating doctor (reason for cessation unknown). A further eight patients ceased warfarin therapy due to side-effects (intracranial haemorrhage (n=1), gastrointestinal bleeding (n=3), haematuria (n=1), unknown bleeding (n=3)). One patient ceased aspirin due to oesophageal irritation. No other pharmacological therapies were ceased due to side-effects. Warfarin was ceased in one case due to an elective surgical procedure.

A total of two patients suffered an embolic or haemorrhagic stroke and a further two patients suffered a TIA during the study period. Prior to their thromboembolic event, one patient was undertreated with aspirin (CHADS2 score = 2), one was adequately treated with clopidogrel (CHADS2 score = 1) and another was undertreated on aspirin (CHADS2 score = 3). Cause of death was unknown in six patients. No patients had stroke or TIA listed as their cause of death in their medical record.

Discussion

It has been suggested that Australian patients with AF may not be receiving optimal prophylactic anticoagulant and antiplatelet medications for the prevention of thromboembolic events. [7,8] The aims of this retrospective cohort study were to assess stroke risk and the adequacy of anticoagulation in 100 AF patients ≥50 years over a 3-year period in a GP setting.

Results from the current study indicate that, overall, the use of anticoagulant and antiplatelet strategies for stroke prophylaxis was appropriate in the majority of cases and consistent with published therapeutic guidelines. [17] The prevalence of AF at the practice of 5.8% was similar to other studies, which report a prevalence of AF in the GP setting of between 4% and 8%. [18,19] In the current study, there were more males with AF than females; this trend has also been found in several other studies, which have reported a higher prevalence of AF amongst males. [15,18]

CHADS2 scores increased between baseline and 12 months and between baseline and 3 years. This increase was to be expected, as patients are likely to gain additional risk factors as they age. The majority of patients at all time points were at high risk of stroke (CHADS2 score ≥2), with warfarin or similar anticoagulation therapy being indicated.

Overall, treatment adequacy increased between baseline and 12 months (79% versus 83.1%), then decreased by 3 years (83.1% versus 76.7%). This trend is likely to represent aggressive management of AF at the initial diagnosis, followed by a decline in optimal stroke prophylaxis as patients age, develop additional side-effects or become at increased risk of falls. Additionally, older patient groups (those >70 years) were more likely to be undertreated. This may be due to several factors, including patient non-compliance with warfarin therapy, doctor reluctance to prescribe warfarin to patients at risk of falls, and the incidence of side-effects such as bleeding. Similar causes of under-treatment of elderly patients with AF have been outlined in other studies. [20,21] In younger patients, there was a trend towards over-treatment at the time of diagnosis.

In the current study, one patient suffered an embolic stroke during the study period and two patients had a TIA. Appropriately, all three of these patients were subsequently changed to warfarin. One patient who was adequately treated on warfarin with a CHADS2 score of 1 was changed to aspirin following an intracranial haemorrhage (and consequently remained classified as adequately treated). Although these were isolated cases within the study, it should be noted that the life-long morbidity of stroke for these individuals is significant.

Strengths of the current study include the large number of patients and the comprehensive assessment of individual medical records for the main study outcomes of CHADS2 scores and anticoagulation or antiplatelet therapies, which made detailed patient data available for inclusion in the study analysis.

There are some limitations to the current study. As data was extracted from an existing database of patient medical records (which was not kept for the purpose of conducting research), there were some instances of missing or incomplete data. However, the majority of missing data related to patients' social history (such as smoking status and alcohol use), which was not central to the main research aims and would not have influenced the results.

A thorough assessment of medication regimes was able to be carried out for the purpose of this study. As all medication changes are automatically recorded by the Best Practice program at each visit, the author is confident that this aspect of the data is accurate. However, it should be noted that some patients may have been taking over-the-counter aspirin, which may not have been recorded on their medication list; consequently, some patients may have been assessed as 'undertreated'. An additional consideration relates to the use of warfarin and whether patients prescribed warfarin were within the therapeutic range; however, the assessment of multiple INR readings for each patient over a 3-year period was thought to be beyond the scope of this study. Only two patients at the practice had been prescribed NOACs (dabigatran) for anticoagulation, therefore analysis of this medication was limited.

CHADS2 scores were able to be calculated for all patients. Although most co-morbidities were well documented, there may have been some limitations with regards to the identification of some co-morbidities such as hypertension, diabetes and the presence of congestive heart failure among some patients. For example, in some instances patients did not have a recorded diagnosis of hypertension, but a review of blood pressure readings demonstrated several high systolic blood pressure readings which could have been diagnostic for hypertension. Where this occurred, patients were not considered to have hypertension or congestive heart failure and were not assigned an additional CHADS2 point.

The CHADS2 score was chosen for the purpose of this study due to its simplicity and validation for the identification of patients at risk of stroke. [13-15] More recently, refinements to the CHADS2 score have led to the development of the CHA2DS2-VASc score, which assigns additional points to higher age groups, female patients and patients with vascular disease. [22] The CHA2DS2-VASc score provides a more comprehensive overview of stroke risk factors in an individual and has also been validated for the purpose of determining the need for pharmacological stroke prophylaxis. More recently, studies have shown that application of the CHA2DS2-VASc score is most useful for clarifying the stratification of patients within the low-intermediate stroke risk categories (i.e. determining those with CHADS2 scores of 0-1 who are truly at low risk and do not require aspirin). [23] Because the aims of the current study were to identify patients at high risk of stroke and determine the appropriateness of their treatment, the CHA2DS2-VASc score was not utilised in this study. However, it should be noted that the CHA2DS2-VASc score may provide additional clarification in the assessment of patients with low-intermediate CHADS2 scores.

An additional consideration in this study relates to the nature of the AF suffered by patients. Although patients were included if they had a known diagnosis of AF, it is almost impossible to determine how long patients had been suffering from AF prior to their diagnosis. In addition, it was not possible to determine whether patients had paroxysmal or sustained/chronic AF. However, it has been demonstrated that there may be little difference in outcomes for patients with paroxysmal versus persistent AF, [24,25] with a large cohort study comparing stroke rates in patients with paroxysmal versus sustained AF reporting no significant difference in rates of stroke (3.2% versus 3.3%, respectively). [24] Therefore, it is unlikely that determination of paroxysmal and sustained AF patterns would have influenced the results of the current study.

Conclusion

The results obtained from this study will allow GPs to optimise the management of patients with AF in the community setting. Although this study found that the management of patients with AF at the practice was consistent with the current guidelines in the majority of cases, further improvements can be made to minimise the risk of stroke among patients with AF, especially with regards to targeting undertreated patients. Additionally, the current study may raise greater awareness of the incidence of AF within the practice and the need to assess stroke risk and treat patients accordingly, especially as CHADS2 scores were rarely recorded formally at the time of diagnosis. GPs are well placed to optimise the treatment of AF and prevent strokes through treatment of co-morbidities and implementation of lifestyle interventions, such as encouraging smoking cessation and minimising alcohol use, which may further reduce the incidence of stroke and TIA in patients with AF.

Acknowledgements

The author would like to acknowledge Dr Darryl McAndrew, Dr Brett Thomson, Prof Peter McLennan, Dr Judy Mullan and Dr Sal Sanzone for their contribution to this research project.

Conflict of interest

None declared.

Correspondence

S Macleod: dm953@uowmail.edu.au

References

[1] Furberg, C, Psaty, B, Manolio, T, Gardin, J, Smith, V, Rautaharju, P. Prevalence of atrial fibrillation in elderly subjects (The Cardiovascular Health Study). Am J Cardiol. 1994; 74 (3): 236-241.

[2] Wong, C, Brooks, A, Leong, D, Roberts, K, Sanders, P. The increasing burden of atrial fibrillation compared with heart failure and myocardial infarction: A 15-year study of all hospitalizations in Australia. Arch Int Med. 2012; 172 (9): 739-741.

[3] Lip, G, Boos, C. Antithrombotic treatment in atrial fibrillation. Heart. 2006; 92 (2): 155-161.

[4] Medi, C, Hankey, G, Freedman, S. Stroke risk and antithrombotic strategies in atrial fibrillation. Stroke. 2010; 41: 2705-2713.

[5] Gould, P, Power, J, Broughton, A, Kaye, D. Review of the current management of atrial fibrillation. Exp Opin Pharmacother. 2003; 4 (11): 1889-1899.

[6] Brieger, D, Curnow, J. Anticoagulation: A GP primer on the new anticoagulants. Aust Fam Physician. 2014; 43 (5): 254-259.

[7] Gladstone, D, Bui, E, Fang, J, Laupacis, A, Lindsay, P, Tu, J, et. al. Potentially preventable strokes in high-risk patients with atrial fibrillation who are not adequately anticoagulated. Stroke. 2009; 40: 235-240.

[8] Ogilvie, I, Newton, N, Welner, S, Cowell, W, Lip, G. Underuse of oral anticoagulants in atrial fibrillation: A systematic review. Am J Med. 2010; 123 (7): 638-645.

[9] Pisters, R, Van Oostenbrugger, R, Knottnerus, I. The likelihood of decreasing strokes in atrial fibrillation patients by strict application of guidelines. Europace. 2010; 12: 779-784.

[10] Leyden, J, Kleinig, T, Newbury, J, Castles, S, Cranefield, J, Anderson, C, et. al. Adelaide Stroke Incidence Study: Declining stroke rates but many preventable cardioembolic strokes. Stroke. 2013; 44: 1226-1231.

[11] Vitry, A, Roughead, E, Ramsay, E, Preiss, A, Ryan, P, Pilbert, A, et. al. Major bleeding risk associated with warfarin and co-medications in the elderly population. Pharmacoepidem Drug Safe. 2011; 20 (10): 1057-1063.

[12] Fang, M, Chang, Y, Hylek, E, Rosand, J, Greenberg, S, Go, A, et. al. Advanced age, anticoagulation intensity, and risk for intracranial hemorrhage among patients taking warfarin for atrial fibrillation. Ann Int Med. 2004; 141: 745-752.

[13] Gage B, Waterman, A, Shannon, W. Validation of clinical classification schemes for predicting stroke: Results from the National Registry of Atrial Fibrillation. JAMA. 2001; 285: 2864-2870.

[14] Khoo, C, Lip, G. Initiation and persistence on warfarin or aspirin as thromboprophylaxis in chronic atrial fibrillation in general practice. Thromb Haemost. 2008; 6: 1622-1624.

[15] Rietbrock, S, Heeley, E, Plumb, J, Van Staa, T. Chronic atrial fibrillation: Incidence, prevalence, and prediction of stroke using the congestive heart failure, hypertension, age >75, diabetes mellitus, and prior stroke or transient ischemic attack (CHADS2) risk stratification scheme. Am Heart J. 2008; 156: 57-64.

[16] UpToDate. Antithrombotic therapy to prevent embolization in atrial fibrillation [Internet]. 2013 [Cited 2014 Mar 9]. Available from: http://www.uptodate.com/contents/antithrombotic-therapy-to-prevent-embolization-in-atrial-fibrillation

[17] e-Therapeutic Guidelines. Prophylaxis of stroke in patients with atrial fibrillation [Internet]. 2012 [Cited 2014 Mar 9]. Available from: http://etg.hcn.com.au/desktop/index.htm?acc=36422

[18] Fahridin, S, Charles, J, Miller, G. Atrial fibrillation in Australian general practice. Aust Fam Physician. 2007; 36.

[19] Lowres, N, Freedman, S, Redfern, J, McLachlan, A, Krass, I, Bennet, A, et. al. Screening education and recognition in community pharmacies of atrial fibrillation to prevent stroke in an ambulant population aged ≥65 years (SEARCH-AF Stroke Prevention Study): A cross-sectional study protocol. BMJ. 2012; 2 (Online).

 

[20] Hobbs, R, Leach, I. Challenges of stroke prevention in patients with atrial fibrillation in clinical practice. Q J Med. 2011; 104: 739-746.

[21] Hickey, K. Anticoagulation management in clinical practice: Preventing stroke in patients with atrial fibrillation. Heart Lung. 2012; 41: 146-156.

[22] van Staa, T, Setakis, E, Ditanna, G, Lane, D, Lip, G. A comparison of risk stratification schemes for stroke in 79,884 atrial fibrillation patients in general practice. Thromb Haemost. 2011; 9: 39-48.

[23] Lip, G. Atrial fibrillation and stroke prevention: Brief observations on the last decade. Expert Rev Cardiovasc Ther. 2014; 12 (4): 403-406.

[24] Hart, R, Pearce, L, Rothbart, R, McAnulty, J, Asinger, R, Halperin, J. Stroke with intermittent atrial fibrillation: Incidence and predictors during aspirin therapy. J Am Coll Cardiol. 2000; 35: 183-187.

[25] Nattel, S, Opie, L. Controversies in atrial fibrillation. Lancet. 2006; 367: 262-272.

Feature Articles

Personal reflection: how much do we really know?


“Hurry up with that blood pressure and pulse,” blurts the ED registrar. “And make sure to do it on both arms this time.” Before I can ask him what’s going on, he’s teleported to the next bed. Great. I’m alone again. But I don’t blame him; it’s a Saturday night, and a volunteer medical student is the least of his worries.

I fumble for what seems like an eternity with the blood pressure cuff, but eventually get it on, much to the amusement of a charge nurse eyeballing me from the nurses' station. Recording the right arm was textbook, so now it's just the left arm to do. I listen hard for the Korotkoff sounds, but there is nothing. I shut my eyes in a squeamish hope that it might heighten my hearing, but again nothing. I can feel the charge nurse staring again; I fluster and break a cold sweat. I feel for the left radial pulse, but it repeatedly flutters away the moment I find it. I remember thinking: Gosh. Am I really that incompetent? Embarrassed, I eventually concede defeat and ask for a nurse, who tells me she'll be there "in a minute."

Amidst all this confusion was John, my patient. I'd gotten so caught up with 'Operation Blood Pressure' that I completely forgot that he was lying there with a kind of graceful patience. I quickly apologised and introduced myself as one of the students on the team.

“It’s all right. You’re young; you’ll eventually get the hang of it… Have to start somewhere, right?” His voice had a raspy crispness to it, which was quite calming to actually hear against the dull rapture of a chaotic emergency room.

John was one of those lovely elderly people whom you immediately come to admire and respect for their warm resilience; you don't meet too many gentlemen like John anymore. Despite his discomfort, he gave me a kind smile and reached out with his right hand to reassuringly touch mine. It was a beautifully ironic moment: there he lay in bed, and there I stood by his bedside. And for a moment, there I was, the patient in distress, and there he was, the physician offering me the reassurance I so desperately needed.

Patients teach us to be doctors. Whether it is a lesson in humility or a rare diagnostic finding, patients are the cornerstone of our ongoing clinical expertise and development; they are why we exist. The more we see, the more we learn. The more we learn, the better doctors we become. Sir William Osler was perhaps the first to formally adopt this into modern medical education. After all, the three-year hospital residency program for training junior medicos was his idea, and it is now a curriculum so widely adopted that it's almost a rite of passage for all doctors.

But how much clinical exposure are we really getting nowadays? With the betterment of societal health, there is a reduced prevalence and incidence of rarer diseases. Epidemiologically this is undoubtedly a good thing, but it does sadly reduce learning opportunities for upcoming generations of doctors. Our clinical acumen is built on seeing and doing, through experiences that shape our clinical approach. Earlier this year, an African child presented with mild gut disturbances and some paralysis of his lower limbs. The case baffled three residents and a registrar, but after a quick glance from a consultant, the child was immediately diagnosed with polio (which was confirmed later by one of the myriad of tests the panicking residents had ordered earlier). We'd all read about polio, but whether through a lack of clinical exposure or the careless assumption that polio had been all but cured, we were quick to overlook it. We can only diagnose if we know what we are looking for.

It's not surprising that preceding generations of senior doctors (and those before them) are perceived to have superior clinical intellect, not just in the breadth of their clinical knowledge but in their almost Sherlock Holmes-like acuity in formulating diagnoses based primarily on history taking and physical examination. Traditionally it is advertised in textbooks that 90% of diagnoses should be made from the history and examination alone. Nowadays, with the advent of improving diagnostic technologies in radiology and pathology, it isn't surprising that a number of us have replaced this fundamental skill with an apparent dependence on expensive, invasive tests. In a recent study, physicians at their respective levels were assessed on their ability to identify heart murmurs and associate them with the correct cardiac problem. Out of the 12 murmurs, interns correctly identified 5, senior residents 6, registrars 8 and consultants 9. Makes you wonder how long ago it was that physicians could identify all twelve. I remember an ambitious surgical resident saying – Why bother diagnosing murmurs when you can just order an echocardiogram? And I remember the humbling answer a grandfather consultant had for him – Because I'm a real doctor and I can.

As for poor John, I was still stuck with getting a blood pressure for his left arm. Two hours earlier, I had responded with the ambulance to John at his home, a conscious and breathing 68-year-old complaining of severe headaches and back pain. John was a war veteran who lived independently and sadly had no remaining family to care for him. He had a month's history of worsening headaches and lumbar back pain, with associated sensory loss particularly in his lower limbs that had recently been affecting his walking. Physical examination confirmed his story and he was slightly hypotensive at 100/65 mmHg, but otherwise his ECG and vitals were generally unremarkable. He otherwise looked to be a healthy 68-year-old with no significant past medical history. Funnily enough, he'd just been sent home from ED earlier in the day for the same complaint. As far as we could tell, he was just another old guy with a bad headache, back pain, and possibly sciatica. It wasn't surprising that he had been sent home from ED that morning with a script for celecoxib, Nurofen, and instructions to follow up with his GP.

I'll remember from this moment onwards that when a nurse says they'll be a minute, it's actually a metaphor for an ice age. I eventually decide to fess up to the registrar that I couldn't do the blood pressure properly. He gives me a disappointed look, but I concluded that honesty is usually the best option in healthcare — well, at least, over pride. I remembered reading a case earlier that week about a medical student who failed to admit that he was unable to palpate the femoral and left radial pulses in a neonate, and subsequently missed an early diagnosis of a serious aortic coarctation, which in the end was discovered the following morning after the baby had become significantly cyanosed overnight.

Much to my relief, the registrar couldn't find the blood pressure either and deemed it pathological. He disappeared to have a word with his consultant, and both quickly returned to the bedside to take a brief history from the patient. By that point the nurse had finally arrived, along with a couple more students and an intern. John had an audience. It was bedside teaching time.

"So apparently you're God?" John asked the consultant, breaking the seriousness of the moment. We all simultaneously swivelled our heads to face the consultant like starved seagulls, only we weren't looking for a fried chip but craving a smart response to scribble in our notebooks.

“To them,” the consultant looks at us, “I am. As for you, I’m not sure.” “I survived getting shot you know, during the war…it just nicked some major artery in my chest…clean shot, in the front and out the back… army docs made some stitches, and I healed up just fine by the end of the month. I’ve been fit as a fiddle since—well, at least, up until these last few months.”

The rest of the history was similar to what I had found out earlier, but I was slightly annoyed, and almost felt betrayed, that he had failed to mention this to me.

The fictional TV doctor Gregory House has a saying that "everybody lies." It's true to an extent, but I don't think patients do it deliberately. They may simply discount or overlook facts that are actually an essential part of the diagnostic process; they are human after all (and so are we). There are psychiatric exceptions, but for the most part patients act in good faith, wanting to help us help them get better. While sending a team of residents to break into a patient's house is not usually the preferable choice (unless you're Dr House), we usually try to pick up these extra clues by knowing what questions to ask and through the comfortable rapport we build with our patients as we come to understand them as a person. The trick is to do all of this in a 10 to 15 minute consult.


The consultant quickly did a physical exam on John. He closed his eyes as he listened to his chest. And then, a very faint smile briefly came across his face — the epiphany of a pitifully murmuring heart.

“We’re probably going to run some tests to confirm this,” he informs John before turning to us, “but I suspect we might have a case of a dissecting aorta.” Of course; why didn’t I think of that? Hindsight’s always 20-20, but I continue to kick myself for missing that murmur, and not making the (now obvious) connection.

The consultant continues to command his lackeys to request an alphabet of tests. Soon enough the CT images return, and it is evident that blood has dissected into a false lumen of the descending aorta (likely to have torn at the site where his gunshot injury traumatised the vascular tissue decades ago). Urgent surgery is booked, a range of cardiac medications commenced, and by the time I return from documenting the notes there is a bunch of tubes sticking out of him.

The next time I see John is after his surgery, before his transfer to the rehabilitation unit. I treasure our final meeting.

"So I beat the odds," John threw a beaming smile towards me. He's a trooper; I'll definitely give him that. Assuming the initial tear occurred when he reported the onset of his headaches and lower back pain, he had survived a dissecting aortic aneurysm for at least a whole month, not to mention a war before that. (Classically, the odds of dying from an untreated aortic dissection are about 25% in the first 24 hours, 50% by 48 hours, 75% by the first week and 90% by the first month.)

"Yes, you definitely beat the odds." I smile back at him with newfound confidence. Our eyes meet briefly, and beneath the toughened exterior of this brave man is the all-too-familiar softened reservoir of unannounced fear. Finally, I extend my hand to shake his and gently squeeze it; it is the blessing of trust and reassurance he first showed me as a patient that I am now returning to him as a physician.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

E Teo: eteo@outlook.com

Categories
Feature Articles

The blind spot on Australia’s PBS: review of anti-VEGF therapy for neovascular age-related macular degeneration

v6_i1_a24

Case scenario

A 72 year old male with a two-day history of sudden blurred vision in his left eye was referred to an ophthalmologist in a regional Australian setting. On best corrected visual acuity (BCVA) testing his left eye had reduced vision (6/12-1) with metamorphopsia. Fundoscopy showed an area of swelling around the left macula, and optical coherence tomography and fundus fluorescein angiography later confirmed pigment epithelial detachment of the left macula and subfoveal choroidal neovascularisation. He was given a diagnosis of wet macular degeneration and commenced on monthly ranibizumab (Lucentis®) injections – a drug that costs the Australian health care system approximately AUD $1430 per injection – and he will likely require lifelong treatment. Recent debate has arisen regarding the optimum frequency of dosing and the necessity of this expensive drug, given the availability of a cheaper alternative.

Introduction

Age-related macular degeneration (AMD) is the leading cause of blindness in Australia. [1] It predominantly affects people aged over 50 years and impairs central vision. In Australia, the cumulative incidence for those aged over 49 years is 14.1% for early AMD and 3.7% for late AMD. [1] Macular degeneration occurs in two forms. Dry (non-neovascular) macular degeneration comprises 90% of AMD and has a slow progression characterised by drusen deposition underneath the retinal pigment epithelium. [2] Currently there is no agreed treatment for advanced dry AMD, which is managed only by diet and lifestyle measures. [3,4] Late stages of dry macular degeneration can result in "geographic atrophy", with progressive atrophy of the retinal pigment epithelium, choriocapillaris and photoreceptors. [2]

Wet (neovascular) macular degeneration is less common, affecting 10% of AMD patients, but causes rapid visual loss. [2] It is characterised by choroidal neovascularisation (CNV) secondary to the effects of vascular endothelial growth factor (VEGF), which causes blood vessels to grow from the choroid towards the retina. Leakage from these vessels leads to retinal oedema, haemorrhage and fibrous scarring. When the central and paracentral areas are affected it can result in loss of central vision. [2,5] Untreated, this condition can result in one to three lines of visual acuity lost on the LogMAR chart at three months and three to four lines by one year. [6] Hence untreated late AMD leads to significant loss of vision and quality of life.

Currently there are three main anti-VEGF drugs available for wet macular degeneration: ranibizumab (Lucentis®), bevacizumab (Avastin®) and aflibercept (Eylea®). This feature article summarises the development of treatments for wet macular degeneration and highlights the current controversies regarding the optimal drug and frequency of dosing in the context of cost to the Australian Pharmaceutical Benefits Scheme (PBS).

Earlier treatments for wet AMD

Neovascular (wet) AMD was largely untreatable little more than a decade ago, but its management has been transformed over this period. [2] Initially, laser photocoagulation was used in the treatment of wet AMD with the aim of destroying the choroidal neovascular membrane by coagulation. During the 1980s, the Macular Photocoagulation Study reported favourable outcomes for direct laser photocoagulation in small, classic extrafoveal and juxtafoveal choroidal neovascularisation (CNV). However, outcomes for subfoveal lesions were poor, and laser photocoagulation was limited by its failure to stabilise vision, recurrence rates of 50%, a 41% risk of immediate moderate visual loss, and laser-induced permanent central scotomata in subfoveal lesions. [2,7]

During the 1990s, photodynamic therapy (PDT) with verteporfin was introduced. It involved a two-stage process: an intravenous infusion of verteporfin that preferentially accumulated in the neovascular membranes, followed by activation with infrared light, generating free radicals that promoted closure of blood vessels. The TAP trial reported that the visual acuity benefits of verteporfin therapy in predominantly classic subfoveal CNV lesions were safely sustained for five years. [8] However, the mean visual change was still a 13-letter average loss for PDT compared with a 19-letter average loss for untreated controls. [2,9] In addition, adverse effects including photosensitivity, headaches, back pain, chorioretinal atrophy and acute visual loss were observed in 4% of patients. [2]

Anti-VEGF therapies

A breakthrough in treatment came during the mid-2000s with the identification of VEGF as the pathophysiological mechanism driving choroidal neovascularisation and its associated oedema. This led to the development of the first anti-VEGF drug, pegaptanib sodium, an RNA aptamer that specifically targeted VEGF-165. [10] In the VISION trial, involving 1186 patients with subfoveal AMD receiving pegaptanib injections every six weeks, 70% of patients had stabilised vision (less than three lines of vision loss) compared to 55% of sham controls; yet still only a minority of patients actually gained vision. [10]

A second anti-VEGF agent, bevacizumab (Avastin®), soon came into off-label use. Bevacizumab was initially developed by the pharmaceutical company Genentech® to inhibit tumour angiogenesis in colorectal cancer, but its mechanism of action as a full-length antibody that binds all VEGF isoforms proved to have multiple applications. Despite a lack of clinical trials to support its use in wet AMD, anecdotal evidence led ophthalmologists to use it off-label to inhibit the angiogenesis associated with wet macular degeneration. [11,12]

In 2006, however, Genentech® gained Food and Drug Administration (FDA) approval for the anti-VEGF drug ranibizumab, an antibody fragment derived from the same parent molecule as bevacizumab but with a smaller molecular size, theoretically aiding retinal penetration. [13] Landmark clinical trials established that ranibizumab not only prevented vision loss but also led to a significant gain in vision in almost one-third of patients. [14,15] The ANCHOR trial, involving 423 patients, compared ranibizumab dosed at 0.3 mg and 0.5 mg given monthly over two years with PDT and verteporfin given as required. This trial found that 90% of ranibizumab-treated patients achieved visual stabilisation (a loss of <15 letters) compared to 65.7% of PDT patients. Furthermore, up to 41% of the ranibizumab-treated group actually gained >15 letters compared to 6.3% of the PDT group. [15]

Further trials, including the MARINA, [14] PRONTO, [16] SUSTAIN, [17] and PIER [18] studies, confirmed the effectiveness of ranibizumab. Despite these results and the purpose-built nature of ranibizumab for the eye, in the US and other countries where patients and health insurance companies bear the cost burden of treatment, bevacizumab (Avastin®) is more frequently used, constituting nearly 60% of injections in the US. [19] This is explained by the large cost difference between ranibizumab (USD $1593) and bevacizumab (USD $42) in the context of apparently similar efficacy. [19] The cost difference arises because one vial of bevacizumab can be fractioned by a compounding pharmacy into numerous unit doses for the eye. [20]
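
The arithmetic behind this cost difference is straightforward. The sketch below is illustrative only: it assumes a 100 mg bevacizumab vial and an indicative wholesale vial price, together with the 1.25 mg intravitreal dose reported in the CATT trial; the vial price and the fraction of drug recoverable after sterile compounding are assumptions, not figures from the sources cited above.

```python
# Back-of-envelope sketch of why compounded bevacizumab is so much cheaper
# per intravitreal dose than ranibizumab. All inputs are assumptions for
# illustration only, except the 1.25 mg dose reported in the CATT trial.

VIAL_MG = 100.0          # assumed vial size: 100 mg bevacizumab
VIAL_PRICE_USD = 600.0   # assumed wholesale price of one vial (illustrative)
DOSE_MG = 1.25           # intravitreal dose used in the CATT trial
USABLE_FRACTION = 0.5    # assumed fraction recoverable after sterile compounding

theoretical_doses = VIAL_MG / DOSE_MG                    # 80 doses if nothing were lost
practical_doses = int(theoretical_doses * USABLE_FRACTION)
drug_cost_per_dose = VIAL_PRICE_USD / practical_doses

print(f"Theoretical doses per vial: {theoretical_doses:.0f}")
print(f"Practical doses per vial:   {practical_doses}")
print(f"Drug cost per dose:         USD ${drug_cost_per_dose:.2f}")
# Even before compounding fees are added, the per-dose drug cost lands in the
# tens of dollars, consistent with the ~USD $42 figure quoted for bevacizumab.
```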

Given the popular off-label use of bevacizumab, the CATT trial was conducted by the US National Eye Institute to establish its efficacy. The CATT trial was a large US multicentre study involving 1208 patients randomised to receive either bevacizumab 1.25 mg or ranibizumab 0.5 mg (monthly or as needed). The results demonstrated that monthly bevacizumab was equivalent to monthly ranibizumab (mean gain of 8.0 vs 8.5 letters on the ETDRS visual acuity chart at one year). [21] The IVAN trial, a UK multicentre randomised controlled trial (RCT) involving 628 patients, showed similar results, with a statistically non-significant mean difference in BCVA of 1.37 letters between the two drugs. [22]

Hence debate has mounted over the substantial cost difference in the face of apparently similar efficacy. [23] On the backdrop of this costly dilemma are three major pharmaceutical companies: Genentech®, Roche® and Novartis®. Although bevacizumab was developed in 2004 by Genentech®, the company was taken over in 2009 by the Swiss pharmaceutical giant Roche®, which is one-third owned by another pharmaceutical company, Novartis®. [24] Given that both ranibizumab and bevacizumab are produced essentially by the same pharmaceutical companies (Genentech/Roche/Novartis), there is no financial incentive for the company to seek FDA or Therapeutic Goods Administration (TGA) approval for the cheaper alternative, bevacizumab. [13,24]

Another major concern emphasised in the literature is the potentially increased rate of systemic adverse effects reported with bevacizumab. [22] The systemic half-life of bevacizumab is six days compared to 0.5 days for ranibizumab, and it is postulated that systemic inhibition of VEGF could cause more systemic vascular events. [2] The CATT trial reported similar rates of adverse reactions (myocardial infarction, stroke and death) in both the bevacizumab and ranibizumab groups. [21] However, a meta-analysis of the CATT and IVAN data showed an increased risk of serious systemic side effects requiring hospitalisation in the bevacizumab group (24.9% vs 19.0%). Yet this finding is controversial, as most of the events reported were not identified in the original cancer trials in which patients received intravenous doses of bevacizumab (500 times the intravitreal dose). [21,22] Hence it has been questioned whether this is more attributable to chance or to imbalance in the baseline health status of participants. [2,22] An analysis of US Medicare claims demonstrated that patients treated with bevacizumab had significantly higher stroke and mortality rates than those treated with ranibizumab. [25] However, this data is inherently prone to confounding, considering that elderly patients at risk of macular degeneration are likely to have risk factors for systemic vascular disease. When corrected for comorbidities, there were no significant differences in outcomes between ranibizumab and bevacizumab. [23,25] It has also been argued that trials to date have been underpowered to investigate adverse events with bevacizumab. Hence, until further evidence is available, it remains unclear whether the risk of systemic adverse effects justifies favouring ranibizumab over bevacizumab. [22]

Adding to the debate regarding the optimum drug choice for AMD is the newest anti-VEGF agent, aflibercept (Eylea®), which attained FDA approval in late 2011. Aflibercept was created by the pharmaceutical companies Regeneron/Bayer® and is a novel recombinant fusion protein designed to bind all isoforms of VEGF-A, as well as VEGF-B and placental growth factor. [20] Aflibercept has the same dispensed price as ranibizumab on the PBS, at AUD $1430 per injection. [26] The binding affinity of aflibercept for VEGF is greater than that of ranibizumab and bevacizumab, which allows a longer duration of action and hence extended dosing intervals. [27]

The VIEW 1 study, a North American multicentre RCT with 1217 patients, and the VIEW 2 study, with 1240 patients enrolled across Europe, the Middle East, Asia-Pacific and Latin America, assigned patients into one of four groups: 1) 0.5 mg aflibercept given monthly, 2) 2 mg aflibercept given monthly, 3) 2 mg aflibercept at two-monthly intervals after an initial 2 mg aflibercept monthly for three months, or 4) ranibizumab 0.5 mg monthly. The VIEW 1 trial demonstrated that vision was maintained (defined as losing less than 15 ETDRS letters) in 96% of patients on 0.5 mg aflibercept monthly, 95% of patients receiving 2 mg monthly, 95% of patients on 2 mg every two months and 94% of patients on ranibizumab 0.5 mg monthly. [28] Safety profiles of the drugs in both the VIEW 1 and VIEW 2 trials showed no difference between aflibercept and ranibizumab in terms of severe systemic side effects. Hence aflibercept has been regarded as equivalent in efficacy to ranibizumab with potentially less frequent dosing.

Frequency of injections

In addition to the optimal drug of choice for AMD, the optimal frequency of injection has come into question. Given the treatment burden of regular intravitreal injections and the risk of endophthalmitis with each injection, extended "as-required" dosing is often used in clinical practice. Evidence from the integrated analysis of the VIEW trials is encouraging, showing that aflibercept given every two months after an initial loading phase of monthly injections for three months was non-inferior to ranibizumab given monthly in stabilising visual outcomes. [28] Although the cost per injection is similar to ranibizumab, the reduced number of injections may represent significant cost savings.

A meta-analysis of the IVAN and CATT trials showed that continuous monthly treatment with ranibizumab or bevacizumab gives better visual function than discontinuous treatment, with a mean difference in BCVA at two years of -2.23 letters. [22] The pooled estimates of macular exudation as determined by optical coherence tomography (OCT) also favoured a continuous monthly regimen. However, there was an increased risk of developing new geographic atrophy of the retinal pigment epithelium (RPE) with monthly treatment compared with as-needed therapy, so the visual benefits of monthly treatment may not be maintained long-term. [22] It is unclear whether the atrophy of the RPE represents a drug effect or the natural history of AMD. Interestingly, mortality at two years was lower in the continuous than in the discontinuous group. In relation to systemic side effects, the pooled results slightly favoured continuous therapy, although this was not statistically significant. This appears to contradict the usual dose-response relationship; however, it is hypothesised that immunological sensitisation with intermittent dosing may account for it. [22]

Hence it appears that continuous therapy with bevacizumab or ranibizumab may be favourable in terms of visual outcome. However, in clinical practice, given the treatment burden for patients and their carers, the risk of rare sight-threatening endophthalmitis and the possible sustained rise in intraocular pressure with each injection, [29] the frequency of injections is often individualised based on maintenance of visual acuity and anatomical parameters of macular thickness on OCT.

Currently the "inject and extend" (or "treat and extend") model is recommended, whereby after three monthly injections treatment is extended to five or six weeks if the OCT shows no fluid. Depending on signs of exudation and BCVA, the interval may then be reduced or extended by one or two weeks per visit, to a maximum of ten weeks. Although there are no large prospective studies to support this, smaller studies have reported encouraging results, offering another cost-saving strategy. [30] However, given the use of the more expensive ranibizumab, it remains a costly endeavour in Australia.
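
For illustration, the interval logic described above can be expressed as a simple rule. The sketch below is only a schematic of the "inject and extend" schedule as summarised in this article (monthly loading injections, then intervals adjusted by one to two weeks per visit between roughly four and ten weeks); the function name and the two-week step size are assumptions for illustration, not part of any published protocol.

```python
# Schematic of the "inject and extend" dosing logic described above.
# Illustrative sketch only; not a clinical protocol.

MIN_INTERVAL_WEEKS = 4    # monthly dosing (loading phase, or when fluid recurs)
MAX_INTERVAL_WEEKS = 10   # maximum extension described in the article
STEP_WEEKS = 2            # intervals are adjusted by one to two weeks per visit

def next_interval(current_weeks: int, macula_dry: bool) -> int:
    """Return the interval (in weeks) until the next injection.

    If the OCT shows no fluid and vision is stable, the interval is
    extended; if there are signs of exudation or falling acuity, it is
    shortened, bounded between the minimum and maximum intervals.
    """
    if macula_dry:
        return min(current_weeks + STEP_WEEKS, MAX_INTERVAL_WEEKS)
    return max(current_weeks - STEP_WEEKS, MIN_INTERVAL_WEEKS)

# Example: after the three monthly loading injections, a persistently dry
# macula lets the interval stretch out towards the ten-week ceiling.
interval = MIN_INTERVAL_WEEKS
for visit, dry in enumerate([True, True, True, False, True], start=1):
    interval = next_interval(interval, dry)
    print(f"Visit {visit}: next injection in {interval} weeks (dry={dry})")
```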

Current Australian situation

Other practical issues play a role in the choice of anti-VEGF therapy in Australia. For instance, the subsidised cost of ranibizumab to the patient is lower than the unsubsidised full cost of bevacizumab. [13] Patients must pay between AUD $80 and $159 out-of-pocket per injection for bevacizumab, whilst ranibizumab costs the government AUD $1430 and the maximum out-of-pocket cost to the patient is around AUD $36. [26] Among ophthalmologists there is a preference for ranibizumab because of its purpose-built status for the eye. [13] The quantity and quality of evidence for ranibizumab is also greater than that for bevacizumab. [29] Because bevacizumab is used off-label, its use is not formally monitored; this lack of surveillance has been argued as a further reason to favour the FDA-approved ranibizumab. Essentially, the dilemma faced by ophthalmologists is summarised in the statement: "I would personally be reluctant to say to my patients, 'The best available evidence supports the use of this treatment which is funded, but are you interested in changing to an unapproved treatment [Avastin] for the sake of saving the community some money?'" [31]

Another issue in Australia is the need for bevacizumab to be altered and divided by a compounding pharmacist into a product that is suitable and safe for ocular injection. A recent cluster of infectious endophthalmitis resulting in vision loss occurred in the US due to non-compliance with recognised compounding standards. [32] The CATT and IVAN studies had stringent quality and safety controls, with the bevacizumab repackaged in glass vials using aseptic methods. In these trials, the risk of sight-threatening endophthalmitis was low for both ranibizumab (0.04%) and bevacizumab (0.07%) injections. [21] However, in clinical practice, many compounding pharmacies may not be as tightly regulated as those in the clinical trials, limiting how far these safety inferences can be generalised.

Conclusion

Prior to the development of anti-VEGF therapies, patients with wet macular degeneration faced a progressive and permanent decline in vision. Today the available treatments not only stabilise vision but also lead to an improvement in vision in a significant proportion of patients. Currently there are no published "head-to-head" trials comparing all three available drugs – bevacizumab, ranibizumab and aflibercept – and such a trial is warranted. In addition, further analysis of the safety concerns surrounding bevacizumab is required. Current research is focusing on improving anti-VEGF protocols to reduce injection burden and on combination therapies with photodynamic therapy or corticosteroids. [3] Topical therapies currently in the pipeline, such as pazopanib, a tyrosine kinase inhibitor that targets VEGF receptors, may offer a non-invasive option in the future. [2,33]

At present, the evidence and expert opinion are not unanimous enough to allow health policy makers to rationalise the substitution of bevacizumab for ranibizumab or aflibercept. Practical concerns in terms of FDA or TGA approval, surveillance, compounding pharmacy and safety remain major issues. In 2013, ranibizumab was the third-highest-costing drug on the PBS at AUD $286.9 million, and aflibercept prescriptions cost the Australian government AUD $60.5 million per annum. [26] From a public health policy perspective, Australia has an ageing population, and with the burden of eye disease only set to increase, there is a need to prioritise resources. The cost–benefit analysis is not limited to AMD but applies to other indications for anti-VEGF therapy, such as diabetic macular oedema and retinal vein occlusion. Substitution of first-line treatment with bevacizumab, which has occurred elsewhere in the world, has the potential to save the PBS billions of tax-payer dollars over a few years, and its review should be considered a high priority in current health policy.
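
To put the potential savings in perspective, a rough back-of-envelope calculation can be made from the PBS figures quoted above. The sketch below assumes, purely for illustration, that the entire ranibizumab and aflibercept spend could be replaced at a compounded bevacizumab price of roughly AUD $100 per injection (a figure in line with the out-of-pocket range quoted earlier, not a published price); real-world substitution would be partial, and the actual savings correspondingly smaller.

```python
# Rough estimate of potential PBS savings from bevacizumab substitution.
# The current spend and the $1430 price per injection are from the article;
# the assumed bevacizumab price and full-substitution scenario are
# illustrative assumptions only.

RANIBIZUMAB_SPEND_AUD = 286.9e6      # PBS spend, 2013
AFLIBERCEPT_SPEND_AUD = 60.5e6       # PBS spend, per annum
PRICE_PER_INJECTION_AUD = 1430.0     # dispensed price of either drug
ASSUMED_BEVACIZUMAB_AUD = 100.0      # illustrative compounded price per injection

total_spend = RANIBIZUMAB_SPEND_AUD + AFLIBERCEPT_SPEND_AUD
injections_per_year = total_spend / PRICE_PER_INJECTION_AUD
substituted_cost = injections_per_year * ASSUMED_BEVACIZUMAB_AUD
annual_saving = total_spend - substituted_cost

print(f"Anti-VEGF injections per year: ~{injections_per_year:,.0f}")
print(f"Estimated annual saving:       ~AUD ${annual_saving / 1e6:,.0f} million")
print(f"Saving over five years:        ~AUD ${5 * annual_saving / 1e9:.1f} billion")
```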

Acknowledgements

Thanks to Dr. Jack Tan (JCU adjunct lecturer (ophthalmology), MMED (OphthalSci)) for reviewing and editing this submission.

Conflict of interest

None declared.

Correspondence

M R Seneviratne: ridmee.seneviratne@my.jcu.edu.au

References

[1]  Wang  JJ,  Rochtchina  E,  Lee  AJ,  Chia  EM,  Smith  W,  Cumming  RG,  et  al.  Ten-year incidence and progression of age-related maculopathy: the blue Mountains Eye Study. Ophthalmology. 2007;114(1):92-8.

[2] Lim LS, Mitchell P, Seddon JM, Holz FG, Wong T. Age-related macular degeneration. Lancet. 2012;379(9827):1728-38.

[3] Cunnusamy K, Ufret-Vincenty R, Wang S. Next generation therapeutic solutions for age-related macular degeneration. Pharmaceutical patent analyst. 2012;1(2):193-206.

[4] Meleth AD, Wong WT, Chew EY. Treatment for atrophic macular degeneration. Current Opinion in Ophthalmology. 2011;22(3):190-3.

[5] Spilsbury K, Garrett KL, Shen WY, Constable IJ, Rakoczy PE. Overexpression of vascular endothelial growth factor (VEGF) in the retinal pigment epithelium leads to the development of choroidal neovascularization. The American Journal of Pathology. 2000;157(1):135-44.

[6] Wong TY, Chakravarthy U, Klein R, Mitchell P, Zlateva G, Buggage R, et al. The natural history  and  prognosis  of  neovascular  age-related  macular  degeneration:  a  systematic review of the literature and meta-analysis. Ophthalmology. 2008;115(1):116-26.

[7] Macular Photocoagulation Study Group. Argon laser photocoagulation for neovascular maculopathy. Five-year results from randomized clinical trials. Archives of Ophthalmology. 1991;109(8):1109-14.

[8] Kaiser PK. Verteporfin therapy of subfoveal choroidal neovascularization in age-related macular degeneration: 5-year results of two randomized clinical trials with an open-label extension: TAP report no. 8. Graefe’s archive for clinical and experimental ophthalmology. Albrecht   von   Graefes   Archiv   fur   klinische   und   experimentelle   Ophthalmologie. 2006;244(9):1132-42.

[9] Bressler NM. Photodynamic therapy of subfoveal choroidal neovascularization in age-related macular degeneration with verteporfin: two year results of 2 randomized clinical trials. Archives of Ophthalmology. 2001;119(2):198-207.

[10] Gragoudas ES, Adamis AP, Cunningham ET, Feinsod M, Guyer DR. Pegaptanib for neovascular age-related macular degeneration. The New England Journal of Medicine. 2004;351(27):2805-16.

[11] Madhusudhana KC, Hannan SR, Williams CP, Goverdhan SV, Rennie C, Lotery AJ, et al. Intravitreal bevacizumab (Avastin) for the treatment of choroidal neovascularization in  age-related  macular  degeneration: results  from  118  cases.  The  British  Journal  of Ophthalmology. 2007;91(12):1716-7.

[12] Rosenfeld PJ, Moshfeghi AA, Puliafito CA. Optical coherence tomography findings after  an  intravitreal  injection  of  bevacizumab (avastin)  for  neovascular  age-related macular degeneration. Ophthalmic surgery, lasers & imaging : The Official Journal of the International Society for Imaging in the Eye. 2005;36(4):331-5.

[13] Chen S. Lucentis vs Avastin: A local viewpoint, INSIGHT. 2011. Available from: http://www.visioneyeinstitute.com.au/wp-content/uploads/2013/05/Avastin-vs-Lucentis-Insight-article-Nov-2011.pdf.

[14] Rosenfeld PJ, Brown DM, Heier JS, Boyer DS, Kaiser PK, Chung CY, et al. Ranibizumab for neovascular age-related macular degeneration. The New England Journal of Medicine. 2006;355(14):1419-31.

[15] Brown DM, Michels M, Kaiser PK, Heier JS, Sy JP, Ianchulev T. Ranibizumab versus verteporfin photodynamic  therapy  for  neovascular  age-related  macular  degeneration: Two-year results of the ANCHOR study. Ophthalmology. 2009;116(1):57-65.

[16] Lalwani GA, Rosenfeld PJ, Fung AE, Dubovy SR, Michels S, Feuer W, et al. A variable-dosing  regimen  with  intravitreal  ranibizumab  for  neovascular  age-related  macular degeneration:  year  2  of  the  PrONTO  Study.  American  Journal  of  Ophthalmology. 2009;148(1):43-58.

[17] Holz FG, Amoaku W, Donate J, Guymer RH, Kellner U, Schlingemann RO, et al. Safety and efficacy of a flexible dosing regimen of ranibizumab in neovascular age-related macular degeneration: the SUSTAIN study. Ophthalmology. 2011;118(4):663-71.

[18]  Abraham  P,  Yue  H,  Wilson  L.  Randomized,  double-masked,  sham-controlled  trial of ranibizumab  for  neovascular age-related macular degeneration: PIER study year 2. American Journal of Ophthalmology. 2010;150(3):315-24.

[19] Brechner RJ, Rosenfeld PJ, Babish JD, Caplan S. Pharmacotherapy for neovascular age-related macular degeneration: an analysis of the 100% 2008 medicare fee-for-service part B claims file. American Journal of Ophthalmology. 2011;151(5):887-95.

[20] Ohr M, Kaiser PK. Aflibercept in wet age-related macular degeneration: a perspective review. Therapeutic Advances in Chronic Disease. 2012;3(4):153-61.

[21] Martin DF, Maguire MG, Ying GS, Grunwald JE, Fine SL, Jaffe GJ. Ranibizumab and bevacizumab for neovascular age-related macular degeneration. The New England Journal of Medicine. 2011;364(20):1897-908.

[22] Chakravarthy U, Harding SP, Rogers CA, Downes SM, Lotery AJ, Culliford LA, et al.Alternative treatments to inhibit VEGF in age-related choroidal neovascularisation: 2-year findings of the IVAN randomised controlled trial. Lancet. 2013;382(9900):1258-67.

[23] Aujla JS. Replacing ranibizumab with bevacizumab on the Pharmaceutical Benefits Scheme: where does the current evidence leave us? Clinical & experimental optometry : Journal of the Australian Optometrical Association. 2012;95(5):538-40.

[24] Seccombe M. Australia’s Billion Dollar Blind Spot. The Global Mail. 2013 June 4:5.

[25] Curtis LH, Hammill BG, Schulman KA, Cousins SW. Risks of mortality, myocardial infarction,  bleeding,  and  stroke  associated  with  therapies  for  age-related  macular degeneration. Archives of Ophthalmology. 2010;128(10):1273-9.

[26] PBS. Expenditure and prescriptions twelve months to 30 June 2013. Canberra: Pharmaceutical Policy Branch; 2012-2013 [cited 2014 Jun 4]. Available from: http://www.pbs.gov.au/statistics/2012-2013-files/expenditure-and-prescriptions-12-months-to-30-06-2013.pdf.

[27] Stewart MW, Rosenfeld PJ. Predicted biological activity of intravitreal VEGF Trap. The British Journal of Ophthalmology. 2008;92(5):667-8.

[28] Heier JS, Brown DM, Chong V, Korobelnik JF, Kaiser PK, Nguyen QD, et al. Intravitreal aflibercept (VEGF trap-eye) in wet age-related macular degeneration. Ophthalmology. 2012;119(12):2537-48.

[29] Tseng JJ, Vance SK, Della Torre KE, Mendonca LS, Cooney MJ, Klancnik JM, et al. Sustained increased intraocular pressure related to intravitreal antivascular endothelial growth  factor  therapy  for  neovascular  age-related  macular  degeneration.  Journal  of Glaucoma. 2012;21(4):241-7.

[30] Engelbert M, Zweifel SA, Freund KB. Long-term follow-up for type 1 (subretinal pigment epithelium) neovascularization using a modified “treat and extend” dosing regimen of intravitreal antivascular endothelial growth factor therapy. Retina (Philadelphia, Pa). 2010;30(9):1368-75.

[31] McNamara S. Expensive AMD drug remains favourite. MJA InSight. 2011. Available from: https://www.mja.com.au/insight/2011/16/expensive-amd-drug-remains-favourite?0=ip_login_no_cache%3D22034246524ebb55d312462db14c89f0.

[32] Gonzalez S, Rosenfeld PJ, Stewart MW, Brown J, Murphy SP. Avastin doesn’t blind people, people blind people. American Journal of Ophthalmology. 2012;153(2):196-203.

[33] Danis R, McLaughlin MM, Tolentino M, Staurenghi G, Ye L, Xu CF, et al. Pazopanib eye drops: a randomised trial in neovascular age-related macular degeneration. The British Journal of Ophthalmology. 2014;98(2):172-8.

Categories
Feature Articles

Trust me, I’m wearing my lanyard

The medical student's first lanyard represents much more than a device that holds clinical identification cards – it symbolises their very identity, first as medical students and eventually as medical practitioners. The lanyard allows access to hospitals and a ready way to discern who's who in a fast-paced environment. It is the magic ticket that allows us to wander hospital corridors (often aimlessly) without being questioned.

Despite this, the utility of the lanyard as a symbol of "an insider" is being questioned, with mounting evidence showing it to be a vehicle for the indirect transmission of bacteria from health care staff to patients. It may be time for the lanyard, like the white coat before it, to be retired as a symbolic but potentially harmful relic of the past. This essay investigates the validity of these concerns by examining the available literature and the results of a small pilot study.

v6_i1_a25

Background

In  May  2014  Singapore  General  Hospital  announced  a  new  dress policy for all staff. Hanging lanyards were banned and replaced with retractable identification card holders. Dr Ling Moi Lin, the hospital’s director of infection control, explained that the hospital aimed “to ensure that ties and lanyards do not flap around when staff examine patients, these objects can easily collect germs and bacteria – we do not want to carry them to other patients.” [1]

This hospital is not alone in its stance against hanging lanyards. The British National Health Service (NHS) Standard Infection Prevention and Control guidelines published in March 2013 list wearing neckties or lanyards during direct patient care as "bad practice". The guidelines state that lanyards "come into contact with patients, are rarely laundered and play no part in patient care". [2] Closer to home, the 2013 Bare Below the Elbows campaign, a Queensland Government initiative aiming to improve the effectiveness of hand hygiene performed by health care workers, recommends that retractable (or similar) identification card holders be used in place of lanyards. [3] Other Australian states and many individual hospitals have adopted similar recommendations. [4,5]

However, some hospitals and medical schools continue to require staff and students to wear lanyards. For example, James Cook University medical students are provided with one lanyard, which must be worn in all clinical settings (whether at the medical school during clinical skills sessions or at the hospital) for the entire duration of their six-year degree. [6] The University of Queensland 2013 medical student guide for its Sunshine Coast clinical school states that students must wear their lanyards and display identification badges at all times in teaching locations. [7] This is not concordant with the current Queensland Government initiative recommendations.

The NHS Standard Infection Prevention and Control guidelines are also being breached by medical schools requiring their students to wear lanyards. University College London states that lanyards are important because they remind patients who students are, and remind clinical teachers and other professionals that they are in a teaching hospital. However, students are required to use a safety pin to attach the end of their lanyard to fixed clothing. [8] A similar policy is in place at Cardiff University, where students must wear lanyards but ensure that they are not "dangling" freely when carrying out examinations and procedures. [9] So how harmful could the humble, dangling lanyard really be?

How harmful could the lanyard be?

Each year there are around 200,000 healthcare-associated infections in Australian acute healthcare facilities. Nosocomial infections are the most common complication affecting patients in hospital. These potentially preventable adverse events cause unnecessary pain and suffering for patients and their families, prolong hospital stays and are costly to the health care system. [10]

Improving hand hygiene among healthcare workers is currently the single most effective intervention to reduce the risk of nosocomial infections in Australian hospitals. [11] The World Health Organisation guidelines on Hand Hygiene in Health Care indicate five moments when the hands must be washed. Two of these are before and after contact with a patient. [12]

In between these two crucial hand washes, several objects are frequently touched by health care staff. Objects such as doctors' neckties, [13-17] stethoscopes [18-20] and pens [21,22] have all been shown to carry pathogenic bacteria, including methicillin-resistant Staphylococcus aureus (MRSA) on doctors' ties [14,16] and stethoscopes. [19] Making contact with these objects during an examination can result in the indirect transmission of microorganisms: the infectious agents are transferred to a susceptible host via an intermediate object termed a fomite.

The infectious agents must be transferred from the fomite to the hands of the health care practitioner before they can be spread to patients. The efficiency with which a pathogen on a surface transfers to a practitioner's hand after a single contact was tested in a study published in 2013. Five known nosocomial pathogens were placed on non-porous surfaces; after ten seconds of contact between a finger and the surface under a known pressure, the microorganisms transferred to the finger were examined. Under high relative humidity, non-porous surfaces had a transfer efficiency of up to 79.5%. [23] This study indicates that after one contact with a contaminated fomite there is significant transfer of microorganisms to the hands, which can then be transferred to patients.

Furthermore, if no regular preventative disinfection is performed, the most common nosocomial pathogens can survive on inanimate surfaces for months and can therefore be a continuous source of transmission. [24] One study conducted in the United Kingdom in 2008 approached 100 hospital staff at random and asked them to state the frequency and method by which their lanyards were washed or decontaminated. Only 27% had ever washed their lanyards, and 35% of lanyards appeared noticeably soiled. [25] This suggests that the lanyards that doctors carry with them daily could potentially harbour acquired infectious agents for extended periods of time.

Two recent studies have shown that lanyards do carry pathogenic bacteria. [25,26] An Australian study by Kotsanas et al. tested lanyards and identification cards for pathogenic bacteria and found that 38% of lanyards harboured them. Nearly 10% of lanyards grew MRSA, and other pathogens found included methicillin-sensitive Staphylococcus aureus, enterococci and Gram-negative bacilli. The bacterial load on lanyards was ten times greater per unit surface area than that on the identification cards themselves. [26]

It has been suggested that contaminated fomites are the result of poor hand hygiene, and thus that with good hand hygiene practices wearing these objects is acceptable. It has been widely reported that nurses have far better hand hygiene habits than doctors; a recent Australian study conducted in 82 hospitals reported that nurses consistently have significantly higher levels of hand hygiene compliance. [27] If pathogenic carriage on fomites depended solely on hand hygiene, one might expect lanyards worn by nurses to have lower pathogenic carriage. However, Kotsanas et al. showed that, although there was a difference in organism composition, there was no significant difference between the total median bacterial counts isolated from nurses' and doctors' lanyards. [26] This suggests that the carriage of pathogens on lanyards is not solely dependent on compliance with hand hygiene protocols.

Lanyards have thus been shown to carry bacteria, which may remain on them for months regardless of hand hygiene practices, and which transfer readily to the hands of practitioners. However, no studies have directly shown that their use results in increased transmission of bacteria to patients. There are, however, studies showing bacterial transfer from neckties to patients. Lanyards are similar to neckties in that they have been shown to carry pathogenic bacteria, are made of a textile material that is rarely laundered, are positioned at the waistline, tend to swing and inadvertently touch patients or the practitioner's cleansed hands, and have no direct role in patient care. [13-17]

A study in Pakistan found that the bacteria collected from the lower part of neckties worn by physicians correlated with bacteria isolated from their patients' wounds after surgical review. [17] This suggests that bacterial transmission occurred. More convincingly, a recent study by Weber et al. tested the transmission of bacteria to dummies from doctors wearing different combinations of clothing inoculated with levels of bacteria comparable to those previously reported. After a brief 2.5-minute history and examination, cultures were obtained from the dummies at three sites. The number of contaminated mock patients was six times higher, and the total number of colonies cultured 26 times higher, when the examiner was wearing an unsecured necktie. [28] This showed that unsecured neckties result in greater transmission of bacteria from doctors to patients. The tie may swing to transmit bacteria directly to the patient, or to the cleansed hands of the doctor, which then transfer them to the patient. Lanyards would likely pose a similar risk.

In my clinical experience, unlike ties, lanyards are often inadvertently touched and fiddled with by medical students and doctors during the clinical examination of a patient. This can recontaminate hands with pathogens even after hand-washing procedures have been followed. Thus, because of this additional contact, lanyards potentially have a higher rate of bacterial transmission than neckties.

What did my pilot study show?

To test this theory, I conducted a small observational study in which 20 James Cook University fourth-year medical students were observed during the focused examination of a volunteer posing as a patient in a simulated hospital bed setting. Twelve students conducted a focused head and neck examination whilst eight conducted an abdominal examination. The students were unaware of the nature of the study. All students washed their hands prior to and at the end of each clinical examination. I observed each student from the hand wash preceding the physical examination until their last physical contact with the patient; the mean observation time was 12 minutes. During this period two things were noted: the number of times their hands made contact with their lanyard, and the number of times the lanyard made contact with the patient. Seventy per cent of the students' lanyards touched the patient at least once during the examination (mean 2.65 times, SD = 2.99), and 95% of students touched their lanyards during the examination (mean 7.35 times, SD = 5.28).

Many made contact with their lanyard as part of their introduction to the patient, holding it up to "show" that they were in fact a medical student. Some held the lanyard to their abdomen with one hand whilst examining the patient with the other to prevent it making contact with the patient. Others fiddled with the lanyard whilst talking to the patient. During hand gestures the lanyards often collided with the students' hands, and the stethoscopes prominently displayed around their necks were often entangled with their lanyards. The contact was such that some students' default position was standing with their hands holding their lanyards; after each forced hand movement, their hands returned to the lanyard.

It is also interesting to note that several students had attached objects such as pens, USBs and keypads to their lanyards. Students with additional objects attached touched their lanyards slightly more often, and their lanyards made contact with the patient almost twice as often (a mean of 4.67 versus 2.65 contacts).

One student had a lanyard clip that fastened the end of his lanyard to his shirt. This student did not touch his lanyard once during the examination, and his lanyard did not make contact with the patient. There may thus be some benefit in following the lead of University College London and Cardiff University in enforcing the use of lanyard clips or safety pins to prevent students' lanyards from dangling. [8,9]

This observational study adds another dimension to the argument against wearing lanyards. Like neckties, lanyards have been shown to carry pathogenic bacteria, swing to make contact with the patient, are rarely laundered, and have no direct part in patient care. This small observational study confirmed my assumption that lanyards also come into contact with examiners’ hands a significant number of times during an examination.

Role models

At some medical schools it remains standard policy that students wear a hanging lanyard during their formative years, even though a growing body of evidence indicates that hanging lanyards should not be worn. These students can only dream of the day when their blue medical student lanyards are replaced with lanyards printed repeatedly with "DOCTOR". Our role models are wearing improved, larger, better lanyards. It has been proposed that presenting up-to-date, evidence-based information with an emphasis on role modelling should be made an educational priority to improve hand hygiene rates. [29] Research has indicated that targeting medical students may be an effective approach to raising the low compliance rates of doctors with hand hygiene procedures. [29] Clearly, advocating awareness of the role that fomites like lanyards play in the spread of nosocomial infections has not been made an educational priority, and this may be part of the reason why compliance with current hygiene policies regarding their use is low.

It seems contradictory that if I do not wash my hands at the start of a clinical examination I will fail, yet I could, as one student in the observational study did, touch an object shown to carry pathogenic bacteria – an object I am required to wear – 23 times and still pass. Making contact with an object shown to carry pathogenic bacteria more than once per minute of clinical examination is alarming and arguably diminishes the purpose of rigorous hand washing procedures.

Conclusion

Lanyards are an easy way to carry identification cards and to identify who's who in a fast-paced environment. However, there is a growing body of evidence indicating that they may act as a vehicle for the indirect transmission of infectious agents to patients. Several hygiene policies have been updated to encourage health professionals not to wear lanyards during direct patient care, yet some medical schools have not followed these guidelines and still require students to wear them. While there is no definitive evidence of an infection transmitted from the tip of a medical student's lanyard, there is very reasonable circumstantial evidence indicating that this could easily happen. Obeying current state infection prevention guidelines and swapping hanging lanyards for retractable identification card holders, or simply preventing lanyards from dangling, may help reduce nosocomial infections in Australia. It is about time that the lanyard was retired as a symbolic but potentially harmful relic of the past.

Acknowledgements

James Cook University clinical staff and fourth year medical students for allowing me to observe their clinical skills assessment.

Conflict of interest

None declared.

Correspondence

E de Jager: elzerie.dejager@my.jcu.edu.au

References

[1] Cheong K. SGH staff roll up their sleeves – under new dress code for better hygiene. The Straits Times [Internet]. 2014 May 16 [cited 2014 Jun 25]. Available from: www.straitstimes.com/news/singapore/health/story/sgh-staff-roll-their-sleeves-under-new-dress-code-better-hygiene-2014050

[2] NHS: National Health Service. CG1 Standard infection prevention and control guidelines [Internet]. 2013 Mar [cited 2014 Jun 25]. Available from: http://www.nhsprofessionals.nhs.uk/download/comms/cg1%20standard%20infection%20prevention%20and%20control%20guidelines%20v4%20march%202013.pdf

[3] Queensland Government Department of Health. Bare below the elbows [Internet]. 2013 Sep [cited 2014 Jun 24]. Available from: http://www.health.qld.gov.au/chrisp/hand_hygiene/fsheet_BBE.pdf

[4] Tasmanian Government Department of Health and Human Services. Hand hygiene policy [Internet]. 2013 Apr 1[cited 2014 Jun 24] Available from: http://www.dhhs.tas.gov.au/data/assets/pdf_file/0006/72393/Hand_Hygiene_Policy_2010.pdf

[5] Australian Capital Territory Government Health. Standard operating procedure 1, hand hygiene [Internet]. 2014 March [cited 2014 June 24]. Available from: http://health.act.gov.au/c/health?a=dlpubpoldoc&document=2723

[6] James Cook University School of Medicine. Medicine student lanyards [Internet]. 2014 [cited 2014 June 24] Available from: https://learnjcu.jcu.edu.au/

[7] The University of Queensland Sunshine Coast Clinical School. 2013 Medical student guide [Internet]. 2013 [cited 2014 July 5]. Available from: https://my.som.uq.edu.au/mc/media/25223/sccs%20medical%20student%20guide%202013.pdf

[8] University College London Medical School. Policies and regulations, identity cards and name badges [Internet]. 2014 [cited 2014 July 5]. Available from: http://www.ucl.ac.uk/medicalschool/staff-students/general-information/a-z/#dress

[9] Cardiff University School of Medicine. Personal presentation [Internet]. 2013 Aug 22 [cited 2014 Jul 5]. Available from: http://medicine.cf.ac.uk/media/filer_public/2013/08/22/perspres.pdf

[10] Australian Government National Health and Medical Research Council. Australian guidelines for the prevention and control of infection in healthcare [Internet]. 2010 [cited 2014 Jun 24]. Available from: http://www.nhmrc.gov.au/book/australian-guidelines-prevention-and-control-infection-healthcare-2010/introduction

[11] Australian Commission on Safety and Quality in Healthcare Hand Hygiene Australia. 5 Moments for hand hygiene [Internet]. 2009 [cited 2014 July 7]. Available from: http://www.hha.org.au/UserFiles/file/Manual/ManualJuly2009v2(Nov09).pdf

[12]  World  Health  Organisation.  WHO  guidelines  on  hand  hygiene  in  health  care [Internet]. Geneva. 2009 [cited 2014 July 7]. Available from: http://whqlibdoc.who.int/publications/2009/9789241597906_eng.pdf

[13] Dixon M. Neck ties as vectors for nosocomial infection. ICM. 2000;26(2):250.

[14] Ditchburn I. Should doctors wear ties? J Hosp Infect. 2006;63(2):227-8.

[15] Lopez PJ, Ron O, Parthasarathy P, Soothill J, Spitz L. Bacterial counts from hospital doctors’ ties are higher than those from shirts. Am J Infect Control. 2009; 37(1):79-80.

[16] Bhattacharya S. Doctors' ties harbour disease-causing germs. NewScientist.com [Internet]. 2004 May 24 [cited 2014 Jun 20]. Available from: http://www.newscientist.com/article/dn5029-doctors-ties-harbour-diseasecausing-germs.html

[17] Shabbir M, Ahmed I, Iqbal A, Najam M. Incidence of necktie as a vector in nosocomial infection. Pak J Surg. 2013; 29(3):224-225.

[18]  Marinella  MA,  Pierson  C,  Chenoweth  C.  The  stethoscope.  A  potential  source  of nosocomial infection? Arch Intern Med. 1997;157(7):786-90.

[19]  Madar  R,  Novakova  E,  Baska  T.  The  role  of  non-critical health-care  tools  in  the transmission of nosocomial infections. Bratisl Med J. 2005;106(11):348-50.

[20] Lokkur PP, Nagaraj S. The prevalence of bacterial contamination of stethoscope diaphragms: a cross sectional study, among health care workers of a tertiary care hospital. Indian J Med Microbiol. 2014;32(2):201-2.

[21] French G, Rayner D, Branson M, Walsh M. Contamination of doctors’ and nurses’ pens with nosocomial pathogens. Lancet. 1998;351(9097):213.

[22] Datz C, Jungwirth A, Dusch H, Galvan G, Weiger T. What’s on doctors’ ball point pens? Lancet. 1997;350(9094):1824.

[23] Lopez GU, Gerba CP, Tamimi AH, Kitajima M, Maxwell SL, Rose JB. Transfer efficiency of bacteria and viruses from porous and nonporous fomites to fingers under different relative humidity conditions. Appl Environ Microbiol. 2013;79(18):5728-34.

[24] Kramer A, Schwebke I, Kampf G. How long do nosocomial pathogens persist on inanimate surfaces? A systematic review. BMC Infect Dis. 2006;6:130.

[25] Alexander R, Volpe NG, Catchpole C, Allen R, Cope S. Are lanyards a risk for nosocomial transmission of potentially pathogenic bacteria? J Hosp Infect. 2008;70(1):92-3.

[26] Kotsanas D, Scott C, Gillespie EE, Korman TM, Stuart RL. What’s hanging around your neck? Pathogenic bacteria on identity badges and lanyards. Med J Aust. 2008;188(1):5-8.

[27] Azim S, McLaws ML. Doctor, do you have a moment? National Hand Hygiene Initiative compliance in Australian hospitals. Med J Aust. 2014;200(9):534-7.

[28] Weber RL, Khan PD, Fader RC, Weber RA. Prospective study on the effect of shirt sleeves and ties on the transmission of bacteria to patients. J Hosp Infect. 2012;80(3):252-4.

[29] Hall L, Keane L, Mayoh S, Olesen D. Changing learning to improve practice – hand hygiene education in Queensland medical schools. Healthcare Infection. 2010;15(4):126-9.

Categories
Feature Articles

Making the cut: a look at female genital mutilation

Female genital mutilation (FGM) is a procedure of historical, cultural and religious derivation that continues to be practised worldwide and involves partial or total removal of the external female genitalia. The stance of many international bodies, including the United Nations, is that it epitomises a violation of the human rights of girls and women. Australian state and territory law prohibits FGM and categorises it as a criminal offence, and RANZCOG guidelines direct medical practitioners not to perform it. Reducing the practice of FGM worldwide requires involvement in awareness and education programs at an individual and societal level, beginning with local communities, elders and leaders, young men and women, and traditional health practitioners. Approaching a request for FGM or reinfibulation in an Australian healthcare setting requires an understanding of the socio-cultural influences surrounding the practice and empathy towards the needs of the patient and their cultural identity. It also requires a comprehensive understanding of the myriad physical and psychological health risks posed by FGM.

v6_i1_a26

Introduction

The continued worldwide practice of female genital mutilation (FGM), or traditionally 'circumcision', is one that has sparked much controversy within the ethics of Western medicine. Is this centuries-old socio-cultural ritual a violation of the rights of a woman or child hiding behind the label of 'custom'? Or has the Western world perceived 'degradation' where there is only an exercise of free will that is perhaps unfathomable but not necessarily unethical? How much of 'free will' is truly an expression of an individual's autonomy? To what extent does culture impinge upon it? And how do we as health practitioners balance this societal commentary with the bioethical principles underlying medical practice? These are questions that have come to the forefront of the FGM debate and that will be examined here. Perhaps one of the more overarching issues we should also ponder is this: are the principles of what is 'ethical' derived from socio-cultural forces, and should they be?

According to the World Health Organisation (WHO), female genital mutilation (FGM) comprises all procedures that involve partial or total removal of the external female genitalia, or other injury to the female genital organs for  non-medical  reasons.[1] The current position of the WHO is that ‘FGM is a violation of the human rights of girls and women’.[1]

The World Health Organisation (WHO) estimates that 100-140 million women worldwide are affected by female genital mutilation.[1] Twenty-eight countries in Africa, as well as a few countries in the Middle East and Asia, have a documented practice of FGM.[1] Of these, the four countries with the highest prevalence are Somalia, Sudan, Guinea and Djibouti (>90% of women).[1] In Australia, there has been an increasing number of migrants from countries practising FGM, particularly over the past decade.[2]

The current laws and guidelines surrounding FGM

Under NSW law, FGM is prohibited; section 45 of the Crimes Act 1900 (NSW) extensively covers the prohibition of female genital mutilation.[3] In fact, FGM is considered a criminal offence in all Australian jurisdictions, though it is legislated separately by each state and territory.[3] Current Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) guidelines strongly recommend that health practitioners do not acquiesce to requests for elective reinfibulation or other forms of FGM.[2] The United Nations has, as of December 2012, passed a resolution banning the practice of FGM worldwide as a violation of human rights and dignity.[1]

The arguments ‘for’ prohibition of FGM

In terms of establishing a perspective on the matter, the tone of the commentary to follow is ultimately averse to the practice of FGM. At the forefront of this argument are the adverse health effects. Hosken reported that 83 percent of women who had undergone FGM would require medical attention at some point in their lives for a condition resulting from the procedure.[4] As to the associated health problems, in a survey of 55 health providers in the Nyamira District of Kenya, 49.1% reported obstructed labour, dyspareunia, bleeding, urinary problems, and fear and anxiety.[5] The WHO estimates that women who have undergone FGM are twice as likely to die during childbirth and are more likely to give birth to a stillborn child compared with women who have not undergone FGM.[1]

Central to the argument is that FGM confers no health benefit to a woman and, on the contrary, presents a myriad of damaging consequences for her health. Proponents of prohibiting the practice, such as Toubia, suggest that non-therapeutically excising an otherwise functioning body part is not simply abhorrent; it is a violation of the codes of medical practice and an obstruction to the bioethical principles of non-maleficence and beneficence.[6]

An important detail is that the procedure is often performed on children (a large proportion pre-pubertal), who by virtue of medical ethics are not able to provide informed consent. But what of consenting adults? Whilst it is difficult to ignore requests made by consenting adults in a sterile, medical environment within the healthcare systems of the Western world, acceding to them could condone the practice worldwide.[6] In many instances FGM has (despite being a social custom of historical derivation) signified the degradation of the rights and dignity of women internationally.[1,6,7] Many argue that if health practitioners do not perform the procedure in a safe, sterile manner, women will seek infibulation or reinfibulation from an untrained and often medically unsafe source.[8] However, the underlying point remains that it is the responsibility of the medical profession to uphold the ethical principles of beneficence, non-maleficence and justice that are violated by FGM. The harm minimisation achieved by performing infibulation or reinfibulation sterilely, as opposed to at the hands of a non-medical entity, does not ultimately outweigh the consequences of condoning the practice and failing to reduce it worldwide.[6,7]

Elchalal et al., in Female genital mutilation: the peril remains, consolidated the views of Toubia in elucidating that societies and countries that promote the practice of FGM should seek to empower their women (over time) and to locate social acceptance and respectability in practices that do not confer such negative health risks and psychological trauma.[6,7] What must be highlighted is the importance placed on healthcare workers to utilise their position of trust and objectivity when relaying the health risks associated with FGM to patients.[6,7]

The arguments ‘against’ prohibition of FGM

It is important, whilst being in support of eradicating FGM, to examine the counter-arguments. Those who defend the practice hold the value of social integration, and the cultural importance to the sense of identity held by many consenting adult women, in higher regard.[8,9] Bronnit identifies the psychological health benefits that can be derived from compliance with the practice of FGM as often outweighing the adverse health risks.[8] Defenders of FGM thus pose the contribution of this cultural and ritualistic component to mental wellbeing as a valid justification for performing FGM.[8,9]

Whilst many commentators also refuse to condone the practice on children, Bronnit states that in denying requests for reinfibulation or infibulation by consenting adults, one risks retreating to 'archaic' models of paternalism.[8] It is an interesting argument to consider: what of the adult woman who, in full knowledge of the risks of the procedure, requests it because it holds importance to her cultural and personal identity? It is undeniably difficult to criticise respect for patient autonomy.

In response to this argument, the facet of autonomy that can be questioned in these scenarios is whether the request for FGM is a product of cultural embedding.[1,2,5,6] This is not to demean the cultural background of the patient. It instead allows us to contemplate the possibility that what is desired by the patient is the sociocultural integration and acceptance FGM affords them.[1,2,5] There is anecdotal evidence in the current literature to suggest that fear of rejection by family and community is a potent driving force in desiring FGM.[1,5] It is difficult to assess what component of the request is entrenched in a socio-cultural need for assimilation, and this could impede the voluntariness of consent. It is important to assert that fear is no justification for condoning what is unquestionably a practice with harmful health consequences. The solution is not to acquiesce to pressure to perform FGM but to educate the community as to the risks and impacts of FGM.

Some commentators argue that if patient autonomy is an adequate justification for performing female genital cosmetic surgery, it should also be an adequate justification for medically performed FGM.[8,9] Many advocates of similarly banning labioplasty argue that certain forms of cosmetic surgery on female genitalia pose similar health risks to FGM.[10] However, perhaps what this should invoke is a questioning of the ethical soundness of female genital cosmetic surgery. Against the assertion that legally permitting labioplasty should likewise permit FGM, the converse can and must be argued: performing one potentially unethical procedure should not permit the medical practice of other unethical procedures.

The final stance

It is of great interest, in finally evaluating this argument, to return to a question posed at the beginning of this paper: should the principles of what is 'ethical' be derived from socio-cultural standpoints? The answer is no, and herein lies the core opposition to the practice of FGM. Ethics are grounded in basic human rights and the preservation of the dignity of the person. As E.H. Kluge postulates in Female circumcision: when medical ethics confronts cultural values, ethics apply to the nature of what it is to be human, and consequently apply to all human beings irrespective of their background or belief system.[11] Therefore, if cultural frameworks fail to meet these universal standards, they can be subject to ethical critique.[11] Consequently, despite having respect for the autonomy of the patient, this writer holds the opinion, as do several international bodies, that FGM has led to worldwide violations of the rights of women and degradation of their inherent dignity, and should be prohibited.[1] Also, as health practitioners objectively upholding what is in the best health interests of the patient, we cannot ignore the high risks of varying adverse physical and psychological health outcomes that are often inevitable with FGM.[1,4,5]

Reducing the practice of FGM internationally

Effective legislation in countries that condone FGM is well and good, but how does one begin to turn a centuries-old wheel? International organisations, such as UNICEF, have mapped out goals for eliminating FGM internationally.[12] These are mainly aimed at effecting change at an individual and societal level by challenging age-old customs.[12] Koso-Thomas found, in examining populations and countries that practise FGM, that levels of education and literacy were inversely proportional to rates of FGM; these are therefore areas to be addressed in terms of empowering women with the educational tools for informed decision making.[13] Community-based interventions, which bring together leaders and elders of local communities as well as women and their families, are one method; they can permit open discourse and awareness programs to take effect.[12,13] An intriguing concept in implementing strategies for change is that of decreasing the 'supply and demand' of FGM.[12] This involves educating target groups such as the local health practitioners carrying out the infibulations.[12] It encompasses educating them as to the dangers of FGM, or retraining practitioners of traditional medicine in women's health and midwifery, hence providing them with a more ethically suitable position.[12,13] Educating young men and their families is also vital in terms of reducing the stigma surrounding women who have not undergone FGM.[12] This will assist in challenging the association of FGM with marriageability.[12]

Managing requests for FGM in medical practice

The views of Elchalal et al. and the RANZCOG guidelines still hold; cultural sensitivity and bridging the cross-cultural barrier are necessary to provide comprehensive healthcare whilst denying a request for FGM.[2,7] Extensive antenatal/gynaecological counselling may allow a healthcare practitioner not only to build rapport and trust, but also to elicit details of what influences requests for the procedure.[2] This in turn reduces the adverse mental health outcomes that may arise from a refusal of the request. The inclusion of family members (whilst carefully documenting their views) is not only in keeping with the desire of the patient; it provides the unique opportunity to hear their opinions, understand their influence on the patient, and incorporate them into educational strategies.[2] The guidelines stress the vital importance of treating women who have undergone FGM without 'alarm or prejudice', as allowing them the confidence to access healthcare is an imperative outcome of treatment.[2] Educational outreach programs, namely the National Education Program on Female Genital Mutilation and FARREP (Family and Reproductive Rights Education Program), utilise multilingual and multicultural health workers who can assist in offering culturally sensitive healthcare.[2] Ultimately, it is important to uphold the quality of life of the patient and identify the factors that contribute to it.

Acknowledgements

Dr. Vicki Langendyk for providing vital feedback about this topic for students undertaking the Obstetrics and Gynaecology ethics curriculum at the University of Western Sydney School of Medicine.

 

Conflict of interest

None declared.

Correspondence

N Vigneswaran: nilanthy.vigneswaran@gmail.com

References

[1]   World Health Organisation (WHO). Female Genital Mutilation Fact Sheet [Internet]. 2014  [Updated  2014  Feb,  Cited  2014  Jul  19].  Available  from:  http://www.who.int/mediacentre/factsheets/fs241/en/.

[2] Gilbert E. Female Genital Mutilation: Information for Australian Health Professionals. The Royal Australian College of Obstetricians and Gynaecologists. Victoria; 1997.

[3] Australasian Legal Information Institute (AustLII): NSW Consolidated Acts. NSW Crimes Act 1900: Section 45 [Internet]. 2014. [Updated 2014 Jun 13, cited 2014 Jul 18]. Available from: http://www.austlii.edu.au/au/legis/nsw/consol_act/ca190082/.

[4] Hosken, F. The Hosken Report: Genital and Sexual Mutilation of Females, fourth edition. Lexington, MA: Women’s International Network. 1997; pp. 48.

[5] Program for Appropriate Technology in Health (PATH) and Seventh Day Adventist-Rural Health Services. “Qualitative Research Report on Health Workers’ Knowledge and Attitudes About Female Circumcision in Nyamira District, Kenya”. Nairobi. 1996; pp. 83.

[6]  Toubia  N.  Female genital  mutilation and  the responsibility  of  reproductive health professionals. International Journal of Gynecology & Obstetrics. 1994; 46:127-135.

[7] Elchalal U, Ben-Ami B, Gillis R, Brzezinski A. Ritualistic Female Genital Mutilation: Current Status and Future Outlook. Obstetrical & Gynecological Survey. 1997;52(10):643–651.

[8] Bronnit, S. Female genital mutilation: Reflections on law, medicine and human rights. Health Care Analysis. 1998; 6 (1):39-45.doi:10.1007/bf02678079

[9] Berer, M. Labia reduction for non-therapeutic reasons vs. female genital mutilation: contradictions in law and practice in Britain. Reproductive health matters. 2010; 18(35);106-110. doi: http://dx.doi.org/10.1016/S0968-8080 (10)35506-6.

[10] Selvaratnam, N. Concerns over female genital cosmetic surgery. SBS News Australia [Internet]. 2013 Aug 26 [cited 2014 Jul 19];Health; [1 screen]. Available from: http://www.sbs.com.au/news/article/2012/12/27/concerns-over-female-cosmetic-genital-surgery

[11] Kluge, E. Female circumcision: when medical ethics confronts cultural values. CMAJ. 1993;148(2):288–289.

[12] UNICEF Somalia. Eradication of female genital mutilation [Internet]. 2004. [Updated 2014 Feb, cited 2014 Jul 28]. Available from: http://www.unicef.org/somalia/resources_11628.html

[13] Koso-Thomas, O. The circumcision of women: a strategy for eradication. London, England, Zed Books, 1987. p109.

Categories
Feature Articles

Managing complicated malaria in pregnancy: beating the odds

Malaria, especially falciparum malaria, has the potential to cause multi-organ failure and is a major cause of morbidity and mortality in pregnant women. Severe (complicated) malaria is defined by the World Health Organisation (WHO) as the presence of asexual parasitaemia and one or more of the following manifestations: cerebral oedema; severe anaemia; renal failure; pulmonary oedema; adult respiratory distress syndrome (ARDS); disseminated intravascular coagulation (DIC); acidosis; and hypotension or shock.[1] The pathophysiology underlying this disease will be discussed in this paper and will serve as a basis for outlining the importance of immediate supportive management and prompt administration of appropriate anti-malarial chemotherapy.

 

Introduction

Malaria is an infectious, tropical disease caused by parasitic protozoa of the species Plasmodium. The malaria parasites are transmitted via the bite of an infected female Anopheles mosquito (vector), the most virulent species being Plasmodium falciparum (P. falciparum). [2,3] Malaria in pregnancy is a major public health concern and contributes heavily to maternal and neonatal deaths worldwide. [3] In this article, the pathophysiology of P. falciparum malaria will be discussed to provide a background for the relevant management options and ethical decisions faced when treating pregnant women with complicated P. falciparum malaria.

Case Presentation

History

A 22-year-old woman, Ms AP, was admitted to Colombo General Hospital with high-grade fever, tremors and confusion. A detailed history revealed that she was a married small-business owner from a rural farming region located in a malaria-endemic area of Sri Lanka.[2] Her medical history was clear of any clinically significant past or current illness. However, it was found that she was four weeks pregnant with her first child.

Findings

On examination, she was alert but appeared fatigued, with visible jaundice. Her blood pressure was 110/60 mmHg and she was tachypnoeic and tachycardic with a heart rate of 110 bpm. Her temperature was 37.8°C, indicating a pyrexial illness, and her oxygen saturation was 94% on room air. Further physical examination revealed scleral icterus and splenomegaly but was otherwise unremarkable, with no elevated jugular venous pressure or signs of pulmonary oedema. Laboratory investigations revealed normocytic normochromic anaemia (haemoglobin (Hb) 90 g/L), thrombocytopenia (platelet count 100 x 10⁹/L) and hypoglycaemia (blood glucose 3 mmol/L). Ms AP's liver function tests were also abnormal, with raised liver enzymes and increased total bilirubin (12 mmol/L). Importantly, her blood results showed stage 3 kidney failure, with increased serum urea (12 mmol/L) and creatinine (180 µmol/L) and a reduced glomerular filtration rate (36 mL/min). While viral serology and bacterial cultures were negative, thick and thin blood films for malaria revealed ringed trophozoites typical of P. falciparum (Figure 1), and parasitaemia with more than 6% infected erythrocytes. Based on the World Health Organisation (WHO) criteria for severe malaria and the above presentation, indicating major organ dysfunction and asexual parasitaemia, Ms AP was diagnosed with complicated malaria.
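The two key quantitative findings above follow simple bedside arithmetic. As a minimal illustrative sketch, not part of the case notes and using hypothetical film counts, the kidney-function band and thin-film parasitaemia could be derived as follows; the eGFR thresholds are the widely cited KDIGO bands.

def ckd_stage(egfr_ml_min: float) -> str:
    """Map an eGFR (mL/min) to a KDIGO chronic kidney disease stage."""
    if egfr_ml_min >= 90:
        return "G1 (normal or high)"
    if egfr_ml_min >= 60:
        return "G2 (mildly decreased)"
    if egfr_ml_min >= 45:
        return "G3a (mildly to moderately decreased)"
    if egfr_ml_min >= 30:
        return "G3b (moderately to severely decreased)"
    if egfr_ml_min >= 15:
        return "G4 (severely decreased)"
    return "G5 (kidney failure)"

def parasitaemia_percent(infected_rbcs: int, total_rbcs: int) -> float:
    """Thin-film parasitaemia: infected red cells as a percentage of red cells counted."""
    return 100.0 * infected_rbcs / total_rbcs

print(ckd_stage(36))                    # G3b, consistent with the "stage 3" label above
print(parasitaemia_percent(124, 2000))  # 6.2% with these hypothetical counts, i.e. >6%

An eGFR of 36 mL/min falls in the 30-44 band, which is why the result is reported as stage 3 disease, and a count of 124 infected cells per 2,000 erythrocytes would correspond to the >6% parasitaemia described.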


 

Discussion

Effect of malaria in pregnancy

Infection of red blood cells by the asexual forms of P. falciparum and the involvement of inflammatory cytokines result in the prototypical clinical manifestations of malaria.[4] The initial paroxysm of P. falciparum malaria presents as non-specific 'flu-like' symptoms including malaise, headache, diarrhoea, myalgia and periodic fever every 48 hours.[4,5] These symptoms are associated with an immune response that is triggered when infected red blood cells (RBCs) rupture, releasing RBC remnants, parasitic antigens and toxins into the bloodstream.[4,5] If untreated, this fairly unalarming presentation can quickly progress to complicated malaria involving vital organ dysfunction.[4,5] In pregnancy, complicated malaria is more common due to altered immune function.[5,6] Discussed below are the pathophysiology and supportive management of the major manifestations of complicated malaria in pregnant women.

Severe anaemia (Hb less than 80 g/L) is a major manifestation in pregnant women with complicated P. falciparum malaria and has the potential to cause maternal circulatory collapse. This is due to the additional demands of the growing foetus and the ability of P. falciparum to invade RBCs of all maturities.[6,7] Both chronic suppression of erythropoiesis by tumour necrosis factor alpha (TNF-α) and synchronous rupture of erythrocytic schizonts contribute to severe anaemia.[6,7] P. falciparum also derives energy via the breakdown of haemoglobin, making infected RBCs more rigid and less able to navigate the micro-circulation.[6,7] This, along with alteration of non-infected RBC membranes by the addition of glycosylphosphatidylinositol (GPI), causes increased haemolysis and accelerated splenic clearance of RBCs.[6,8] Increased activity of the spleen manifests clinically as splenomegaly.[8] In pregnant women who present with severe anaemia, like Ms AP, packed red cells are transfused once a safe blood supply is acquired.[9] However, blood transfusions should be used sparingly in resource-poor areas where the risk of negative outcomes, such as incidental transfer of human immunodeficiency virus (HIV), is great.[9,10]

Cerebral malaria is an ominous sign in pregnant women; it is a neurological syndrome characterised by altered consciousness (Glasgow Coma Scale ≤ 8) and uncontrolled, sub-clinical seizures. The precise mechanisms involved in the onset of this phenomenon remain unclear; however, localised perfusion defects, metabolic disturbances and host immune responses all play a critical role.[11,12] Decreased localised perfusion is primarily due to microvascular changes. P. falciparum proteins, such as Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP-1), form knobs on the surfaces of infected RBCs and bind to receptors such as intercellular adhesion molecule 1 (ICAM-1) on the endothelium. This ability to cyto-adhere results in sequestration of infected RBCs in blood vessels, causing endothelial inflammation and obstruction.[7,11] Interestingly, in P. falciparum malaria a phenomenon known as rosetting also occurs.[11,12] Here, PfEMP-1 on infected RBCs binds to glycosaminoglycan receptors on uninfected RBCs, causing aggregation.[11,12] This further slows microcirculatory flow, reducing perfusion and causing ischaemia-induced functional deterioration in organs such as the brain.[9,10] Mechanical ventilation in conjunction with appropriate anti-malarial drugs may be life-saving, preventing fatal hypoxaemia and organ failure.[13,14] Seizures and other complications of cerebral malaria are treated with anti-convulsants to protect against rapid neurological deterioration.[13,14]

Hypoglycaemia is a common manifestation in pregnant women with complicated malaria and arises from increased anaerobic glycolysis as P. falciparum metabolises glucose to lactic acid for energy.[7,8] In addition, decreased hepatic gluconeogenesis and the increased demands of a febrile illness contribute to lowered blood glucose levels.[7,8] Intravenous administration of 25-50% dextrose solution is standard treatment and benefits both mother and foetus.[9,10]

Hepatic and renal failure occur in complicated malaria due to mechanical obstruction of blood vessels by infected erythrocytes and immune-mediated destruction of cells.[15,16] Loss of function of these organs results in poor lactate clearance which, along with increased anaerobic glycolysis and parasitic lactate production, potentiates metabolic acidosis.[15,16] Ultimately, these changes can progress to respiratory and circulatory distress.[15,16] In conjunction with anti-malarial agents, the best supportive therapy is fluid resuscitation or, if required, renal replacement therapy.[15-19] Caution should be taken when treating malaria-induced hypotension in pregnancy, as excessive fluid resuscitation via intravenous (IV) saline could worsen pulmonary oedema, triggering respiratory failure.[18,19]

ARDS is more common in pregnant women and can be precipitated by pulmonary oedema, compensatory metabolic acidosis, sepsis and severe anaemia.[11] It stems primarily from increased vascular permeability due to ongoing inflammation, as well as reduced pulmonary micro-circulatory flow.[11,12] Mechanical ventilation is essential in patients with ARDS as it helps maintain positive end-expiratory pressure and oxygenation.[13,14] In patients presenting with hypotension, secondary sepsis due to bacterial co-infection should be suspected. Appropriate bloods, including blood cultures, should be taken and immediate treatment with broad-spectrum antibiotics such as clindamycin commenced.[14,19]

Effect of malaria on the foetus

Foetal distress is a concern in complicated malaria. Sequestration of RBCs in the placenta, via the binding of PfEMP-1 to chondroitin sulphate A (CSA) on the syncytiotrophoblast, can cause placental insufficiency. This results in poor oxygen supply to the foetus and may cause miscarriage, premature labour, stillbirth, growth restriction and low birth weight.[20,21] This phenomenon is more likely in primigravida patients, such as Ms AP, and is thought to be due to a lack of a specific immune response to the unique placental variant surface antigens (VSA) expressed by placental parasites. This hypothesis is supported by a longitudinal study by Maubert et al., which showed that antibodies against CSA-binding parasites were present in 76.9% of multigravida women by 6 months compared with only 31.8% of primigravida women.[22] In addition, severe fever and hypoglycaemia disrupt normal foetal development, which may induce premature labour and cause intrauterine growth restriction.[20,21] Micro-trauma to the placenta also increases the risk of infected maternal erythrocytes crossing into the foetal circulation. This has the potential to cause congenital malaria and adds to the burden of complicated malaria in pregnant women.[20,21] Evidently, prompt and efficacious drug treatment of malaria is necessary to reduce the systemic impact of malarial hyperparasitaemia and to reduce foetal distress and mortality. Furthermore, due to the risk of congenital malaria, placental, cord blood and neonatal thick and thin blood films should be considered for detection of malaria at an early age.[23]

Anti-malarial drugs and pregnancy

According to the South East Asian Quinine Artesunate Malaria Trial (SEAQUAMAT), a multi-centred, randomised controlled trial in South East Asia, artemisinin derivatives such as parenteral artesunate are the drugs of choice in pregnant women with complicated malaria.[24] These drugs are superior to quinine, which is associated with a narrow therapeutic window, hypotension and hyperinsulinaemic hypoglycaemia.[24] While quinine was the traditional drug of choice, it is now considered outdated and artemisinin derivatives are currently used.[24,25] Artemisinin derivatives work by producing cytotoxic oxygen radicals within the parasite.[24] Unlike other anti-malarial drugs, such as quinine and chloroquine, artesunate is toxic not only to mature schizont forms of P. falciparum but also to early ring-stage erythrocytic trophozoites.[24,25] Therefore, artemisinin derivatives clear parasites from the blood faster, reducing complications linked with micro-vascular damage and parasite glucose consumption.[24-27] While relatively safe, these drugs were associated with foetal anaemia and lowered bone density in early trials.[23-28] However, it is important to remember that in complicated malaria the mother is the priority, as without her survival foetal mortality is highly likely. Importantly, the efficacy of these drugs should also be monitored, as pregnancy appears to alter the efficacy of anti-malarial agents.[23] Patients should be advised of the risk of recurrence and offered regular blood films throughout their pregnancy.[23]


Conclusion

Complicated malaria in pregnancy is a medical emergency and can result in death if not treated properly. As in Ms AP's case, prompt administration of parenteral artesunate in conjunction with general supportive therapy is required for the best chance of survival for both mother and foetus.[29]

Acknowledgements

None.

Consent Declaration

Informed  consent  was  obtained  from  the  patient  in  regard  to publication of this article for educational purposes.

Conflict of interest

None declared.

Correspondence

P Adkari: prasadi.adikari@my.jcu.edu.au

References

[1] WHO. Severe falciparum malaria. World Health Organization, Communicable Diseases Cluster. Trans R Soc Trop Med Hyg 2000;94(Suppl. 1):S1–90.

[2] Rajakuruna R, Amerasinghe P, Galappaththy G, Konradsen F, Briets O, Alifrangis M. Current status of malaria and anti-malarial drug resistance in Sri Lanka. Cey. J. Sci 2008;37(1):15-22.

[3] Kumar A, Chery L, Biswas C, Dubhashi N, Dutta P, Kumar V. Malaria in south asia: prevalence and control. Acta Tropica 2012;121:246-255.

[4] Lyke K, Burges R, Cissoko Y, Sangare L, Dao M, Diarre I et al. Serum levels of the pro-inflammatory cytokines interleukin-1 beta (IL-1), IL-6, IL-8, IL-10, tumor necrosis factor alpha, and IL-12(p70) in Malian children with severe Plasmodium falciparum malaria and matched uncomplicated malaria or healthy controls. Infect Immun 2004;72:5630-7.

[5] Clark I, Budd A, Alleva L, Cowden W. Human malarial disease: a consequence of inflammatory cytokine release. Malaria Journal 2006;5:2875-85.

[6] Buffet. P, Safeukui I, Deplaine G, Brousse V, Prendki V, Thellier M et al. The pathogenesis of Plasmodium falciparum malaria in humans: insights from splenic physiology. Journal of American Society of Haematology 2011;117:381-92.

[7] Cowman A, Crabb B. Invasion of red blood cells by malaria parasites. Cell 2006;124:755-66.

[8]  Dondorp  A,  Pongponratan  E,  White  N.  Reduced  microcirculatory flow  in  severe falciparum  malaria:  pathophysiology  and  electron microscopy pathology.  Acta Tropica 2005;89:309-17.

[9] Day N, Dondorp A. The management of patients with severe malaria. Am J Trop Med. 2007;77(6):29-35.

[10] Mishra SK, Mohanty S, Mohanty A, Das B. Management of severe and complicated malaria. ICMR 2006;52(4):281-7.

[11] Barfod L, Dalgaard M, Pleman S, Ofori M, Pleass R, Hvidd L. Evasion of immunity to Plasmodium falciparum malaria by IgM masking of protective IgG epitopes in infected erythrocyte surface-exposed PfEMP1. PNAS 2011;10:1073-1078.

[12] Chakravorty S, Hughes K, Craig A. Host response to cytoadherence in Plasmodium falciparum. Biochemical Society Transactions 2008;45:221-228.

[13]  Uneke  C.  Impact  of  Plasmodium  falciparum  malaria  on  pregnancy and  perinatal outcome in Sub-Saharan Africa. Yale J Biol Med 2007; 80:95-103.

[14] Tongo O, Orimadegun A, Akinynika O. Utilisation of malaria preventative measures during pregnancy and birth outcomes. BMC 2011;11:1471-1478.

[15] Das B. Renal failure in malaria. ICMR 2008;45:83-97.

[16] Patel D, Pradeep P, Surti M, Agarwal SB. Clinical Manifestations of Complicated Malaria. JIACM 2003; 4(4):323-33.

[17] Gillon R. Medical ethics: four principles plus attention to scope. BMJ 1994;309:184.

[18] Whitty C, Edmonds S, Mutabingwa T. Malaria in pregnancy. BJOG 2005;112:1189-95.

[19] Nosten F, Ashley E. The detection and treatment of Plasmodium falciparum. JPGM 2004;50:35-39.

[20] Pasvol G. The treatment of complicated and severe malaria. BMB 2006;75:29-47.

[21] Maitland K, Marsh K. Pathophysiology of severe malaria in children. Acta Tropica 2004;90:131-40.

[22] Maubert B, Fievet N, Deloron P. Development of antibodies against chondroitin sulfate A adherent Plasmodium falciparum in pregnant women. Infec. Immun.1999;67(10):5367-71.

[23] Royal College of Obstetricians and Gynaecologists. The diagnosis and treatment of malaria in pregnancy. Greentop Guideline No. 54B. London: RCOG; 2010.

[24] South East Asian Quinine Artesunate Malaria Trial (SEAQUAMAT) group. Artesunate versus quinine for treatment of severe falciparum malaria: a randomised trial. The Lancet 2005;366(9487):717-25.

[25] McGready R, Lee S, Wiladphaigern J, Ashely E, Rijken M, Boel M et al. Adverse effects of falciparum and vivax malaria and the safety of antimalarial treatment in early pregnancy: a population based study. Lancet 2012;12: 388-96.

[26]  McGready  R,  White  N,  Nosten  F.  Parasitological  efficacy  of  antimalarials  in  the treatment and prevention of falciparum malaria in pregnancy 1998-2009: a systematic review. BJOG 2011;118:123-35.

[27] McIntosh H, Olliaro P. Artemisinin derivatives for treating severe malaria. Cochrane Collaboration 2012;1:33-47.

[28] Adebisi S. The toxicity of artesunate on bone developments: the wistar rat animal model of malarial treatment. Journal of Parasitic Diseases 2008;4(1).

[29] Briand V, Cottrell G, Massougbodji A, Cot M. Intermittent preventative treatment for the prevention of malaria during pregnancy in high transmission areas. Malaria Journal 2007;6:160-6.

Categories
Feature Articles

Ki-67: a review of utility in breast cancer

Ki-67 is a protein found in proliferating cells that is identifiable by immunohistochemistry (IHC). Its prognostic and predictive value in breast cancer has been an area of avid research in recent literature and is increasingly shown to be of value. Identifying the presence of the Ki-67 protein is now an accepted technique for differentiating hormone receptor (HR)-positive breast malignancies and serves as a marker of prognosis in these tumours. It has also been shown to have predictive value in neoadjuvant chemotherapy and post-neoadjuvant endocrine therapy. Whilst it is not currently recommended as a routine investigation in the diagnosis of breast cancer, with standardisation of its methodology it has the potential to become so.


Introduction

Breast  cancer  is  the  most  frequent  cancer  of  women  (excluding non-melanoma skin cancer) in Australia.   Survival of breast cancer has improved significantly in recent decades, with five-year relative survival increasing from 72% in the mid-1980s to 89% by 2010. [1] Survival rates have improved as a result of developments in screening, treatment and also diagnosis.

It is currently an exciting era in diagnostic medicine, with rapidly increasing knowledge and research leading to increased availability of diagnostic techniques. Improved diagnostics allow us to classify tumours not only by their anatomical location and pathological appearance, but also by molecular and genetic typing. This increasing complexity of diagnosis and subtyping is allowing for more individualised cancer treatments and better outcomes for patients. Immunohistochemistry is an area of diagnostics that has blossomed over the past two decades. One of the most frequent uses of diagnostic IHC is in breast pathology. IHC techniques may have prognostic and predictive value [2] and contribute to the trend towards targeted and bespoke therapies. Numerous tests have now been developed and some, such as those for oestrogen receptors (ER), progesterone receptors (PR) and human epidermal growth factor receptor 2 (HER2), have become a standard part of the diagnostic work-up.

Despite these improvements in diagnosis, there remains a group of patients whose risk of recurrence is indistinguishable based on current standard tests. This leads to potential overtreatment of patients who would not benefit from therapy and potential undertreatment of those who would. [3] Other tests, including multi-gene predictors and urokinase plasminogen activator testing, have proven benefit as prognostic factors and possibly have predictive value. The Ki-67 protein is a marker of proliferation that has been known for over two decades and has been the subject of renewed study and reporting of late. However, its popularity and integration into practice has been somewhat controversial.

Ki-67 as a proliferation marker

Ki-67 is a unique protein that is found exclusively in proliferating cells. It is present in the nuclei of cells in the G1, S and G2 phases of cell division and peaks in mitosis. Cells in the G0 phase do not express Ki-67. It is present in all cells, both tumour and non-tumour, and its presence is a marker of growth fraction for a certain cell population. [4] Despite the numerous studies demonstrating its presence in proliferating cells, the exact role of Ki-67 in cell division is as yet unknown. [5] Ki-67 was first assessed for prognostic value in non-Hodgkin’s lymphoma, but is increasingly used in various malignancies, [4] most notably in breast cancer. It has now been proven that a higher fraction of stained nuclei is associated with worse prognosis, and healthy breast tissue exhibits low levels of Ki-67 (<3%). [6]

Counting mitoses, flow cytometry (for determining S-phase fraction), and IHC for Ki-67 are common techniques for determining growth fraction.  Flow cytometry is not recommended in prognostication due to difficulty with methodology. [7] Logically, counting mitoses and Ki-67 should correlate highly but clinical studies have shown that only 51% of high Ki-67 expressing breast tumours have a high mitotic index. [8] Ki-67 and the other proliferation markers, despite showing promise, are not recommended as a routine part of breast cancer workup currently. [3]

Ki-67 as a surrogate genetic marker

Ki-67 and mitotic rate are both considered markers of cell proliferation; however, Ki-67 is considered the superior prognostic marker. [6] One reason it can be used for prognostication is that it may act as a surrogate for genetically different tumours. Patients with ER-positive tumours, like those with other malignancies, are known to display great variance in behaviour, including response to therapy. This occurs because these tumours display a heterogeneous mix of gene expression grade indices. [9] To improve prognostication and therapy recommendations, breast malignancies have been genetically subclassed into five subtypes (luminal A, luminal B, HER2-enriched, basal-like, and normal breast-like). Of most interest is the differentiation between luminal A and luminal B, which (by one author's definition) are both ER-positive and HER2-negative tumours but display contrasting behaviour. [10] Luminal B tumours typically have worse outcomes and demonstrate higher proliferation. Genetic typing showed that certain genes (such as CCNB1, MKI67, and MYBL2) have higher expression in luminal B tumours. [10] Given that genetic testing is expensive, and hence impractical as a routine test in some settings, [8] Ki-67 can be used as a surrogate measure. This has been studied in the IHC4 score, in which the combined prognostic value of ER status, PR status, Ki-67 and HER2 was shown to be similar to that of a more expensive 21-gene test. [11] Very recent Australian data show that when tumours are divided into luminal A and B with the use of Ki-67, "the 15-year breast cancer specific survival was 91.7% [and] 79.4%" respectively. [8] This confirms the clinical variation in these tumours. These figures were derived only from lymph node-negative breast cancers treated with breast-conserving surgery and postoperative radiotherapy.

Prognostic value

Ki-67 has been accepted as a means of differentiating luminal B from luminal A tumours without additional genetic testing. [12,13] The best cut-off score for differentiating ER-positive HER2-negative tumours is currently thought to be around 14%; at or above this figure, a tumour can be regarded as luminal B subtype and hence as having a poorer prognosis. However, a high Ki-67 is also associated with "younger age at diagnosis, higher grade, larger tumor size, positive lymph node involvement, and lymphovascular invasion". [10] This is echoed in other large clinical studies. [14]
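For illustration, the surrogate subtyping logic described above (ER and HER2 status by IHC, with a Ki-67 cut-off of around 14% separating luminal A from luminal B) can be written as a simple decision rule. This is a hedged sketch only: the function and its simplified rules are illustrative, and formal surrogate definitions such as the St Gallen criteria also weigh PR expression and other factors.

def surrogate_subtype(er_positive: bool, her2_positive: bool, ki67_percent: float,
                      cutoff: float = 14.0) -> str:
    """Approximate IHC-based surrogate of the intrinsic breast cancer subtypes."""
    if er_positive and not her2_positive:
        # ER-positive, HER2-negative: Ki-67 separates luminal A-like from B-like
        return "luminal B-like" if ki67_percent >= cutoff else "luminal A-like"
    if her2_positive:
        return "HER2-positive (luminal B-like if ER-positive, non-luminal if ER-negative)"
    return "triple negative / basal-like"  # assumes PR-negative; PR not modelled here

# Two ER-positive, HER2-negative tumours either side of the 14% cut-off
print(surrogate_subtype(True, False, 9.0))   # luminal A-like
print(surrogate_subtype(True, False, 22.0))  # luminal B-like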

A high Ki-67 has also been shown to be associated with poorer ten-year relapse-free survival and breast cancer-specific survival. This has been demonstrated in node-positive tumours, node-negative tumours, those treated with tamoxifen as the only agent, and those treated with combination therapy of tamoxifen and a chemotherapeutic agent. [6,10] A large retrospective Australian study confirmed that Ki-67 appears to have significant value in predicting mortality. In that cohort, a Ki-67 cut-off of 10% yielded the highest sensitivity and specificity, and at this level mortality rose from 3% in the low-Ki-67 group to 22% in the high-Ki-67 group. Of note, this study did not differentiate luminal A from luminal B and did not exclude ER-negative or HER2-positive tumours, so it only examined outcomes based on Ki-67. Interestingly, all HER2-positive tumours were high-Ki-67 tumours. The difference in the Ki-67 cut-off compared with the 14% from previous trials is likely explained by the lack of inter-laboratory validation. The poorer 15-year survival of the high-Ki-67 tumours, compared with Pathmanathan's [8] study, can be partially explained by the inclusion of HER2-positive and triple negative tumours, which are known to have a poorer prognosis.

Aleskandarany et al. [15] in their larger study confirmed the variation between luminal tumours, but also suggested that Ki-67 has little prognostic value in subcategorising HER2-positive and triple negative tumours. [16] Further, they revealed that a high Ki-67 is "associated with premenopausal status, larger tumor size, definite vascular invasion, and lymph node involvement"; thus, in non-luminal tumours it may be selecting a patient group with other predictors of poor prognosis.

Ki-67 predictive value

Studies regarding the predictive value of the test are not yet as convincing as those regarding prognostication, but this remains an area of ongoing research and debate.

There are potential roles for Ki-67 in directing therapy in primary chemotherapy, neoadjuvant chemotherapy, neoadjuvant endocrine therapy, and radiotherapy case selection. Chang et al. [17] suggested that tumours with a high Ki-67 are likely to respond more favourably to chemotherapeutic agents in the primary setting and that Ki-67 may be measured serially during treatment to assess response. This study, however, had a small sample size and a single therapeutic regimen, making it difficult to adopt in clinical practice.

Viale, [18] in his large retrospective review, showed that Ki-67 did not predict the relative efficacy of adjuvant chemoendocrine therapy in node-negative hormone receptor (HR)-positive tumours. However, this does not imply that Ki-67 has no role in directing adjuvant chemotherapy in other groups of breast malignancy. This has been further studied in a group of high-risk breast malignancies by Denkert et al. [19] Denkert's group demonstrated that Ki-67 predicts response to neoadjuvant chemotherapy in HR-positive, HR-negative, HER2-negative, and triple negative groups. It also showed an effect on disease-free survival (DFS) and overall survival (OS) in HR-positive and HER2-negative groups. This study also revealed that Ki-67 percentage is a continuum and that subsets may not be simply broken down into 'high' and 'low'; rather, multiple cut-off points may be required for a single tumour type, with a variation of cut-points based on the studied endpoint (e.g. DFS or pathological response) and the tumour in question. To achieve this, further trials recording information prospectively will be necessary.

Ellis studied Ki-67 in the neoadjuvant endocrine therapy setting and reported that it has a limited role in pre-treatment biopsies, but that its post-neoadjuvant treatment value predicts relapse-free survival. [20] Ellis suggests that when Ki-67 and ER status are combined post-surgery, a low value correlates with low rates of relapse, and states that therapy beyond continuation of the endocrine agent is likely unnecessary. In contrast, a poor biomarker profile post-surgery is associated with significantly earlier relapse, more typical of ER-negative tumours; these patients should be "offered all adjuvant treatments". [20]

Ki-67 also has predictive value outside HR-positive tumours. There is evidence that in HR-negative tumours a Ki-67 >20% is a predictor of clinical and pathological response to anthracycline-based chemotherapy in the neoadjuvant setting. [21] That study showed that patients with HR-negative status and Ki-67 >20% were much more likely to respond to their prescribed regimen. However, the authors did not give the absolute variation in response based on Ki-67 and did not test a variety of agents or protocols to see if IHC could be used to recommend a particular agent.

Another role for Ki-67 in the neoadjuvant chemotherapy setting is in reviewing the response to therapy. A number of authors have shown that the Ki-67 percentage often decreases after neoadjuvant therapies, and that this reduction may correlate with pathological response and DFS. [22] Dowsett and colleagues [23,24] measured Ki-67 both at baseline and at two weeks post-neoadjuvant endocrine therapy. These authors suggest that the Ki-67 after two weeks of neoadjuvant therapy is of greater prognostic value than that at baseline. They hypothesised that a greater change in Ki-67 should also be predictive of outcome, but the trial failed to show this.

Despite the scarcity of high-quality data, the latest St Gallen consensus supports the use of Ki-67 in defining luminal B tumours and states, "For patients with luminal B (HER2-negative) disease, the majority of the panel considered chemotherapy to be indicated. Chemotherapy regimens for luminal B (HER2-negative) disease should generally contain anthracyclines and… taxanes". [12] This suggests that some groups have already adopted Ki-67 as a significant predictive factor in the management of HR-positive tumours.

Barriers to Ki-67 being used as a routine component of breast cancer workup

When Ki-67 staining is performed, positive nuclei display brown pigmentation. The area of greatest staining is used for counting, and the fraction of nuclei stained by the antibody is used to determine a percentage score. The Ki-67 score is the first IHC marker requiring exact quantification to realise its benefit, and there is currently no standardised methodology for this. [25,26] This has led to a broad range of recommendations regarding the minimum number of cells that must be analysed to accurately ascertain the percentage. [19] There are also many commercially available antibodies, which may display subtle variances in result. [27] Further variation may be seen depending on the method of counting, i.e. computer-aided versus human analysis. [28] The lack of a standard, reproducible method for ascertaining the percentage, combined with these other variances in technique, leads to inter- and intra-operator and inter-laboratory variation. These issues have so far made it difficult to incorporate Ki-67 into routine use. [26]
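A hedged sketch of the scoring step itself (with hypothetical counts, not taken from any of the cited studies) makes the reproducibility problem concrete: two observers counting similar numbers of nuclei in different fields can land either side of a 14% cut-off.

def ki67_index(stained_nuclei: int, total_nuclei: int) -> float:
    """Ki-67 labelling index: percentage of tumour nuclei with nuclear (brown) staining."""
    if total_nuclei <= 0:
        raise ValueError("no nuclei counted")
    return 100.0 * stained_nuclei / total_nuclei

def classify(index_percent: float, cutoff: float = 14.0) -> str:
    """Dichotomise the index at an assumed cut-off (cut-offs vary between studies)."""
    return "high" if index_percent >= cutoff else "low"

# Hypothetical counts by two observers scoring different hotspots of the same tumour
observer_a = ki67_index(stained_nuclei=141, total_nuclei=1000)  # 14.1%
observer_b = ki67_index(stained_nuclei=128, total_nuclei=1000)  # 12.8%
print(observer_a, classify(observer_a))  # "high" at a 14% cut-off
print(observer_b, classify(observer_b))  # "low" at the same cut-off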

Other IHC assays, such as that for HER2, have been validated in the field of breast malignancies [29] and have led to more concrete recommendations. [12] Validation involves standardised recommendations for numerous factors including tissue handling, fixation, assay selection, comparison to standards, and ensuring inter- and intra-laboratory concordance. [30] Further, this has been complemented by the development of HER2 in-situ hybridisation (ISH) to assess the underlying gene amplification, which may be superior or complementary. [30] These advancements are yet to be achieved in Ki-67 analysis. Validation and standardisation of Ki-67 in a similar way has been called for by many authors; if achieved, it will increase confidence in results and may allow Ki-67 to be used as part of routine testing. [25]

Conclusion

The renewed interest in Ki-67 in breast malignancies has proved its prognostic value, particularly in subgrouping HR-positive HER2-negative breast cancers. There is now increasing evidence that it may also have a predictive role, with most evidence pointing to its use both in directing neoadjuvant chemotherapy and in assessing tumours post-neoadjuvant therapy to help direct further adjuvant therapy. Ki-67, along with other commonly used IHC assays and genetic testing, is facilitating a move away from previously crude methods of treatment towards increasingly tailored treatment solutions for our patients. Once standardised, Ki-67 may provide a cost-effective contribution to this trend.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

K Parthasarathi: krishnanpartha@hotmail.com

References

[1] Cancer Australia. Breast cancer statistics [Internet]. 2013 Mar. 6 [cited 2013 Sep. 29];1–2. Available    from:    http://canceraustralia.gov.au/affected-cancer/cancer-types/breast-cancer/breast-cancer-statistics

[2] Bhargava R, Esposito NN, Dabbs DJ. Chapter 19 – Immunohistology of the breast. 3rd ed. Elsevier Inc.; 2011.

[3] Patani N, Martin L-A, Dowsett M. Biomarkers for the clinical management of breast cancer: International perspective. Int J Cancer. 2013;133(1):1–13.

[4] Scholzen T, Gerdes J. The Ki-67 protein: From the known and the unknown. J Cell Physiol. 2000;182:311–322.

[5]  Jalava  P,  Kuopio  T,  Juntti-Patinen  L,  Kotkansalo  T,  Kronqvist  P,  Collan  Y.  Ki67 immunohistochemistry:   A   valuable   marker   in   prognostication  but   with   a   risk   of misclassification: Proliferation subgroups formed based on Ki67 immunoreactivity and standardized mitotic index. Histopathology. 2006;48(6):674–682.

[6] Yerushalmi R, Woods R, Ravdin PM, Hayes MM, Gelmon KA. Ki67 in breast cancer: Prognostic and predictive potential. Lancet Oncol. 2010;11(2):174–183.

[7] van Diest PJ. Prognostic value of proliferation in invasive breast cancer: A review. J Clin Pathol. 2004;57(7):675–681.

[8] Pathmanathan N, Balleine RL, Jayasinghe UW, Bilinski KL, Provan PJ, Byth K, et al. The prognostic value of Ki67 in systemically untreated patients with node-negative breast cancer. J Clin Pathol. 2014;67(3):222–228.

[9] de Azambuja E, Cardoso F, de Castro G, Colozza M, Mano MS, Durbecq V, et al. Ki-67 as prognostic marker in early breast cancer: A meta-analysis of published studies involving 12 155 patients. Br J Cancer. 2007;96(10):1504–1513.

[10] Cheang MCU, Chia SK, Voduc D, Gao D, Leung S, Snider J, et al. Ki67 index, HER2 status,  and  prognosis  of  patients  with  Luminal  B  breast  cancer.  J  Natl  Cancer  Inst. 2009;101(10):736–750.

[11] Cuzick J, Dowsett M, Pineda S, Wale C, Salter J, Quinn E, et al. Prognostic value of a combined estrogen receptor, progesterone receptor, Ki-67, and human epidermal growth factor receptor 2 immunohistochemical score and comparison with the genomic health recurrence score in early breast cancer. J Clin Oncol. 2011;29(32):4273–4278.

[12] Goldhirsch A, Winer EP, Coates AS, Gelber RD, Piccart-Gebhart M, Thurlimann B, et al. Personalizing the treatment of women with early breast cancer: Highlights of the St Gallen International Expert Consensus on the Primary Therapy of Early Breast Cancer 2013. Ann Oncol. 2013;24(9):2206–2223.

[13] Goldhirsch A, Wood WC, Coates AS, Gelber RD, Thurlimann B, Senn HJ, et al. Strategies for  subtypes–dealing  with  the  diversity  of  breast  cancer:  Highlights  of  the  St  Gallen International Expert Consensus on the Primary Therapy of Early Breast Cancer 2011. Ann Oncol. 2011;22(8):1736–1747.

[14] Engels CC, Ruberta F, de Kruijf EM, van Pelt GW, Smit VTHBM, Liefers GJ, et al. The prognostic value of apoptotic and proliferative markers in breast cancer. Breast Cancer Res Treat. 2013;142(2):323–339.

[15] Aleskandarany MA, Green AR, Rakha EA, Mohammed RA, Elsheikh SE, Powe DG, et al. Growth fraction as a predictor of response to chemotherapy in node-negative breast cancer. Int J Cancer. 2010.

[16] Aleskandarany MA, Green AR, Benhasouna AA, Barros FF, Neal K, Reis-Filho JS, et al. Prognostic value of proliferation assay in the luminal, HER2-positive, and triple-negative biologic classes of breast cancer. Breast Cancer Res. 2012;14(1):R3.

[17] Chang J, Ormerod M, Powles TJ, Allred DC, Ashley SE, Dowsett M. Apoptosis and proliferation as predictors of chemotherapy response in patients with breast carcinoma. Cancer. 2000;89(11):2145–2152.

[18] Viale G, Regan MM, Mastropasqua MG, Maffini F, Maiorano E, Colleoni M, et al. Predictive value of tumor Ki-67 expression in two randomized trials of adjuvant chemoendocrine therapy for node-negative breast cancer. J Natl Cancer Inst. 2008 Feb. 5;100(3):207–212.

[19] Denkert C, Loibl S, Muller BM, Eidtmann H, Schmitt WD, Eiermann W, et al. Ki67 levels  as  predictive  and  prognostic  parameters  in  pretherapeutic  breast  cancer  core biopsies:  A  translational  investigation  in  the  neoadjuvant  GeparTrio  trial.  Ann  Oncol. 2013;24(11):2786–2793.

[20] Ellis MJ, Tao Y, Luo J, A’Hern R, Evans DB, Bhatnagar AS, et al. Outcome prediction for estrogen receptor-positive breast cancer based on postneoadjuvant endocrine therapy tumor characteristics. J Natl Cancer Inst. 2008;100(19):1380–1388.

[21] Petit T, Wilt M, Velten M, Millon R, Rodier JF, Borel C, et al. Comparative value of tumour grade, hormonal receptors, Ki-67, HER-2 and topoisomerase II alpha status as predictive markers in breast cancer patients treated with neoadjuvant anthracycline-based chemotherapy. Eur J Cancer. 2004;40(2):205–211.

[22] Nishimura R, Osako T, Okumura Y, Hayashi M, Arima N. Clinical significance of Ki-67 in neoadjuvant chemotherapy for primary breast cancer as a predictor for chemosensitivity and for prognosis. Breast Cancer. 2010;17(4):269–275.

[23] Dowsett M, Smith IE, Ebbs SR, Dixon JM, Skene A, A’Hern R, et al. Prognostic value of Ki67 expression after short-term presurgical endocrine therapy for primary breast cancer. J Natl Cancer Inst. 2007;99(2):167–170.

[24] Dowsett M, Smith IE, Ebbs SR, Dixon JM, Skene A, Griffith C, et al. Short-term changes in  Ki-67  during  neoadjuvant  treatment  of  primary  breast  cancer  with  anastrozole  or tamoxifen alone or combined correlate with recurrence-free survival. Clin Cancer Res. 2005;11(2 Pt 2):951s–8s.

[25] Jonat W, Arnold N. Is the Ki-67 labelling index ready for clinical use? Ann Oncol. 2011;22(3):500–502.

[26] Dowsett M, Nielsen TO, A’Hern R, Bartlett J, Coombes RC, Cuzick J, et al. Assessment of Ki67 in breast cancer: Recommendations from the International Ki67 in Breast Cancer working group. J Natl Cancer Inst. 2011;103(22):1656–1664.

[27] Colozza M, Sidoni A, Piccart-Gebhart M. Value of Ki67 in breast cancer: The debate is still open. Lancet Oncol. 2010;11(5):414–415.

[28] Fasanella S, Leonardi E, Cantaloni C, Eccher C, Bazzanella I, Aldovini D, et al. Proliferative activity in human breast cancer: Ki-67 automated evaluation and the influence of different Ki-67 equivalent antibodies. Diagn Pathol. 2011;6 Suppl 1:S7.

[29] Wolff AC, Hammond MEH, Schwartz JN, Hagerty KL, Allred DC, Cote RJ, et al. American Society of Clinical Oncology/College of American Pathologists guideline recommendations for human epidermal growth factor receptor 2 testing in breast cancer. Arch Pathol Lab. Med. 2007;131(1):18–43.

[30] Hicks DG, Schiffhauer L. Standardized assessment of the HER2 status in breast cancer by immunohistochemistry. Lab Med. 2011;42(8):459–467.

Categories
Feature Articles

The role of general practice in cancer care

The incidence of cancer has risen in Australia and globally over the past few decades. Fortunately, advances in medicine have enabled cancer patients to live longer. We now have the means to provide better healthcare and support for this group of ‘survivors’. However, this situation also poses unique challenges to the healthcare system as resources are limited but healthcare professionals are required to do more. In recent years, there has been a call for an expansion of the role of general practitioners (GPs) in cancer care. Such a primary care-based approach allows GPs to pursue their interests in cancer management and enables diversification of healthcare resources. This article will attempt to examine how general practice can be involved in cancer care in Australia.


Introduction

Cancer is a chronic disease on a global scale. In Australia, cancer accounts for approximately a quarter of all deaths. [1] By the age of 75, one in three males and one in four females is expected to have been diagnosed with cancer. [1] These figures may be attributed to population growth and an ageing population. [2] As patients are diagnosed earlier and receive better treatment, more cancer patients transition into survivorship. [3] Consequently, the demands of cancer care extend beyond diagnosis and treatment towards multi-disciplinary care, which focuses on providing support and improving patients' quality of life. This article will briefly examine the factors influencing the involvement of primary care physicians in cancer care in Australia, with reference to initiatives implemented by other countries.

Patterns of cancer care and areas of GP involvement

Cancer management is complex and involves different healthcare providers. According to Norman et al., cancer care patterns may be sequential, parallel or shared. [4] In sequential care, patients are cared for mainly by oncology teams, while parallel care involves general practice management of non-cancer problems. Shared care has the greatest GP involvement and requires joint management of cancer care by GP and oncology teams. GPs in Australia are mostly involved in screening and diagnosis of cancer and, eventually, referral to specialists, who take over treatment and patient follow-up. GPs also play a role in managing the side effects of treatment as well as in the education (including prevention measures) of patients and their families. Depending on the treatment outcome, supportive or palliative care may also be provided by GPs.

In the future, it is expected that GPs will need to accept responsibilities outside their traditional remit. This is due to the limited number of specialists in rural and remote areas and the need to diversify and expand the healthcare workforce. [5] Furthermore, health systems that include strong primary medical care have been shown to be more efficient and to have better health outcomes. [6] There is therefore a gradual move towards shared care models, with GPs playing a central role alongside other healthcare providers. In this context, it is important to understand the factors influencing the involvement of GPs in cancer care and how to maximise their involvement throughout the spectrum of cancer care.

Factors influencing GP involvement in cancer care

Location of GPs

The degree of involvement of GPs may depend on where they are based. [7] Out of necessity, GPs in rural and remote areas could be involved in coordination of cancer care and also some aspects of treatment (e.g. pre-chemotherapy checks) and follow-up of side effects. Conversely, GPs working in urban settings were more likely to refer patients upon diagnosis.

Studies have shown that Indigenous Australians and other minority groups living in rural or remote areas have higher cancer mortality rates due to reduced access to healthcare. [8] GPs working in these settings could reduce this inequality through better prevention and diagnosis, timely referrals and treatment of co-morbidities, areas which are traditionally within the remit of primary care. [9] Although the cancer curriculum in Australian GP training focuses on these areas, it is estimated that GPs encounter only about four new cancer cases each year, with cases exhibiting huge variability in cancer types and treatment requirements. [7] Such a scenario necessitates opportunities for GPs to improve their skills and experience through case-based learning and seminars. [7] Online learning modules offered by Cancer Australia are a good starting point, but more effort will be required to promote these learning opportunities as GPs may not be aware of such resources. [7,10]

In recent years, the rise of telemedicine has provided an important tool for connecting rural GPs and specialists. This has enabled rural GPs to be more involved in cancer care, as they can more easily gain access to specialist knowledge. In Queensland, medical oncology services delivered via videoconferencing were trialed in remote and rural communities. [11] Satisfaction levels were high among both patients and rural health workers, with benefits including savings in time and money, improved communication between specialists and patients, and greater access to specialist support for rural GPs. [11]

Communication pathways

Communication between GPs and hospital-based services is regarded as a major challenge facing general practice in Australia. The main forms of communication from hospitals to GPs are the discharge summary and the specialist letter, with GPs receiving information mainly from hospital medical officers. [5] The variable quality and poor timeliness of the information received have been shown to impede quality communication between GPs and hospitals. These factors were attributed to a poor understanding of GP roles in cancer care and their information needs, as well as the inexperience of medical officers. [5] Hospital communications to GPs also tend to omit social information about the patient. As cancer patients have been shown to depend on GPs for psychosocial support, their social needs may not be addressed adequately if poor communication persists. [1]

It has also been shown that GPs prefer to receive a multi-disciplinary discharge summary containing input from all health professionals involved. [5] The creation of electronic health records may facilitate the development of such a discharge summary. In Canada, the British Columbia (BC) e-health initiative allows authorized health professionals working in BC to access complete patient records when and where they are required. [12] This initiative was shown to reduce patient delays and costs to healthcare providers and patients, and it demonstrates how improved access to patient records may improve healthcare outcomes for cancer patients. Nonetheless, it is important that such electronic platforms are developed for and with healthcare practitioners, allowing them to address patients' needs without being burdened by technology. [12]

Regular meetings may also improve communication between GPs and specialists. Mitchell suggested that GPs should be regularly involved in hospital-based multi-disciplinary team (MDT) meetings. [13] It is heartening that a national survey found that 84% of GPs would consider taking part in MDT meetings should the opportunity arise. [14] This suggests that formalization of MDT meetings is highly feasible. Cancer patients may benefit from the sharing of experiences between members of a formalized MDT. This could be crucial for patients with low-incidence cancers, where the experience of the team matters, and for GPs, who might otherwise have little awareness of which specialists to approach for specific cancers. [13]

Remuneration and financial incentives

Inadequate remuneration may also deter GPs from accepting additional responsibilities. A recent study found that the proportion of Australian GPs not involved in palliative care has risen to 25%, compared with 5% in 1993 and 8% in 1998. [15] Poor remuneration relative to the time and knowledge required for palliative care may be a deterrent. There is currently no requirement for GPs to provide after-hours services for palliative care, and some GPs also report that they are not confident enough to manage the technical and psychosocial aspects of palliative care. [15]

Financial incentives may be helpful, as the workload of GPs has increased while their incomes have decreased relative to specialist incomes. [6] In the United Kingdom, the Gold Standards Framework for palliative care rewards GPs who are interested in palliative care and who demonstrate quality care through regular meetings and maintenance of a patient register. [16] Such a scheme may attract GPs to become more involved in palliative care. In addition, to increase the involvement of GPs in population-based screening programs, the current payment scheme in Australia should be revised to reward not only services to symptomatic patients but also care of asymptomatic patients who approach GPs for counseling and other psychosocial issues. [8]

Role of healthcare providers

The roles of healthcare providers are often unclear. Holmberg reported that while some people understand the role of GPs in cancer care, others felt that their roles were not stated explicitly in guidelines. [17] The varying perception of GP roles may hinder GPs from expressing their information needs and prevent their expanded involvement in treatment and follow-up. It has been shown that patients prefer to know who is in charge, and parallel care may provide a clearer definition of GP and specialist roles. [18] Moreover, parallel care is less demanding than shared care in terms of the level of communication required to coordinate cancer care and may therefore be favoured by both GPs and specialists. [18] While it is important to align patients' perceptions with the preferences of healthcare providers, a parallel pattern of care may not necessarily be the most effective. This explains the gradual move towards multi-disciplinary care based on shared care models, as highlighted in Australia's 2009 report 'A Healthier Future for All Australians'. [19]

A shared care model requires clarity of roles and a need to recognize and expand the role of primary care without compromising healthcare outcomes. Two randomized controlled trials in the United Kingdom (UK) and Canada showed that follow-up of breast cancer patients by GPs was as safe as follow-up by specialists, while an Australian study showed no difference in recurrence rates among colorectal cancer patients followed up by GPs or specialists. [20,21] These studies imply that GPs may undertake a greater role in the follow-up phase. Similarly, there may also be a growing role for GPs in the treatment phase, in terms of managing toxicity episodes or performing pre-chemotherapy checks, as new oral chemotherapeutic agents are developed. [13]

Access to protocols such as the Cancer Institute NSW Standard Cancer Treatment Program (CI-SCaT) may allow GPs to manage cancer patients without relying too heavily on specialist expertise. [13] Similarly, GPs can access wiki-based clinical practice guidelines which are developed and continually updated by Cancer Council Australia. [22] GPs based in rural and remote areas have for years relied on generic clinical skills adapted to cancer care to manage cancer patients; supplementing these skills with specialized cancer information may improve the feasibility and practicality of GP-based cancer management. [23]

GP preferences and input

While there is much potential for the expansion of GP roles, GP preferences and their input into cancer plans need to be valued. GPs generally express interest in being involved in areas traditionally within their remit, such as prevention, diagnosis, surveillance and psychological support, but less than 50% of GPs expressed a desire to undertake coordination roles in treatment and supportive care. [7] These observations may reflect underlying structural and systemic constraints (e.g. workload and payment structures) that can only be addressed effectively at a governmental level. Conversely, as mentioned previously, GPs in rural and remote areas are already actively involved in the coordination of cancer and psychological care and may therefore accept expanded roles more readily.

Ultimately, trust and confidence in GP capabilities need to be built, and increased involvement of GPs in cancer control plans will be necessary. Internationally, the UK National Health Service (NHS) has involved GPs in its cancer plan since 2000. [1] Similarly, in Australia, GPs have been involved in the National Service Improvement Framework for Cancer, while a scoping exercise undertaken by the National Cancer Control Initiative in 2004 sought to identify priority areas for supporting cancer care by primary healthcare providers. [1] One result was the Cancer Service Networks National Demonstration Program (CanNET), which was funded by the Australian Government in seven states. It was conceived as a means of identifying opportunities to improve the organization and delivery of cancer care via MDTs and managed clinical networks (MCNs), so as to improve outcomes and reduce disparities in cancer survival rates across population groups. [24]

Lessons from CanNET

The evaluation of CanNET provided valuable insights into the provision of multi-disciplinary cancer care. For example, in addition to effective communication, networking events and activities were found to be essential to building professional relationships between healthcare providers. [24] Moreover, although GPs were willing to be involved in MDT sessions, engaging them proved difficult due to constraints imposed on general practice. [24] This suggests that while examining constraints on the specialist side is important and has been researched extensively, increased focus should also be placed on alleviating constraints on the GP side.

CanNET was also found to increase the work burden on healthcare providers. [24] This has prompted a rethink of healthcare providers' roles to incorporate more flexibility. A number of innovative roles exist overseas and could be trialed in various CanNET networks. For example, the Uniting Primary Care and Oncology Network (UPCON) in Manitoba advocated the use of medical leaders in the form of lead family physicians (FPs). [25] These lead FPs are primary care physicians within a practice who have an interest in cancer care and engage in regular education programs and meetings jointly organized by oncologists and FPs. They disseminate useful information to colleagues and also play an advisory role by raising issues pertaining to primary care during meetings with oncologists and the Manitoba cancer agency. Besides occasionally accepting referrals, lead FPs did not have to perform difficult or unfamiliar tasks, and they were remunerated according to their level of involvement. [25] This program improved the partnership between GPs and other healthcare providers and could potentially fit into the Australian system.

Consistent with the theme of medical leadership, the introduction of continuing professional development (CPD) was found to be effective in promoting local champions in some CanNET networks. CPD opportunities such as mentoring and clinical placements were received positively, and more than half of the healthcare providers surveyed acknowledged that these activities helped increase their knowledge and skills and provided valuable networking opportunities. [24] Nonetheless, more work is required to address potential constraints such as workload and staff shortages. This again raises the importance of tele-oncology as a possible solution, as essential oncology skills may be learnt during GP sit-ins with patients, thereby reducing the need for face-to-face attendance at workshops.

Looking to the future – the ideal oncology curriculum

The Oncology Education Committee of Cancer Council Australia has developed an ideal oncology curriculum for medical schools with the aim of equipping students with the knowledge, skills and attitudes to provide quality care to cancer patients and their caregivers. This curriculum has recently been reviewed to place more emphasis on clinical experiences such as 'observing all components of multi-disciplinary cancer care'. [26] These changes reflect the need for future doctors who can work within a multi-disciplinary cancer care setting and who understand the role of healthcare providers (including GPs) in different phases of a cancer patient's journey. [26] Students who are interested in becoming GPs will need to be familiar with the specific needs and requirements of cancer patients, as GPs are often the first port of call. Furthermore, students who take up the Medical Rural Bonded Scholarship Scheme (MRBS) and end up in rural settings will be expected to take on more responsibility than their urban counterparts. As such, changes in medical education may pave the way for changes in future medical practice.

Conclusion

Cancer management in Australia is gradually changing towards a shared care model with a focus on multi-disciplinary care. In this context, there is an increasing demand for GPs to expand their roles to relieve the pressure on other healthcare providers. Existing constraints that impede the involvement of GPs will need to be addressed, including issues pertaining to communication, remuneration and role clarity, as well as GP preferences and input. A number of initiatives such as CanNET have been implemented and have helped identify areas that could promote a greater role for general practice in cancer care. Overseas healthcare initiatives such as UPCON and the BC e-health initiative will also provide valuable lessons in the search for solutions. Currently, tele-oncology appears to be a viable approach to improving rural GP involvement in cancer care and alleviating workload and staff shortages.

In conclusion, GPs have the capacity to provide quality cancer care alongside their specialist counterparts, and it would be a more efficient use of healthcare resources to involve rather than neglect them. Specialist cancer care is unlikely to be compromised, as specialists form the core component of the actual treatment process, whereas GPs are envisioned to take up coordinating, diagnostic and follow-up roles. The flexibility of the GP role, which can vary with preference and expertise, is itself advantageous, as cancer care is then no longer limited by the number of specialists. Specialist care may also be enhanced by the more focused and individualized approach afforded by the reduced workload taken on by specialists.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

K Ho: koho2292@uni.sydney.edu.au

References

[1] McAvoy BR. General practitioners and cancer control. Med J Aust 2007; 187(2):115-7.

[2] Australian Institute of Health and Welfare, Australasian Association of Cancer Registries. Cancer in Australia 2001. AIHW Cancer Series No. 28. (AIHW Cat.No. CAN 23.) Canberra: AIHW, 2004.

[3] Phillips JL, Currow DC. Cancer as a chronic disease. Collegian 2010; 17(2):47-50.

[4] Norman A, Sisler J, Hack T, Harlos M. Family physicians and cancer care. Palliative care patients' perspectives. Can Fam Physician 2001; 47:2009-16.

[5] Rowlands S, Callen J, Westbrook J. Are general practitioners getting the information they need from hospitals to manage their lung cancer patients? A qualitative exploration. HIMJ 2012; 41(2):4-13.

[6] Harris MF, Harris E. Facing the challenges: general practice in 2020. Med J Aust 2006; 185(2):122-4.

[7] Johnson CE, Lizama N, Garg N, Ghosh M, Emery J, Saunders C. Australian general practitioners' preferences for managing the care of people diagnosed with cancer. Asia Pac J Clin Oncol 2012; doi:10.1111/ajco.12047.

[8] Jiwa M, Saunders CM, Thompson SC, Rosenwax LK, Sargant S, Khong EL, et al. Timely cancer diagnosis and management as a chronic condition: opportunities for primary care. Med J Aust 2008; 189(2):78-82.

[9] Campbell NC, Macleod U, Weller D. Primary care oncology: essential if high quality cancer care is to be achieved for all. Fam Pract 2002; 19(6):577-8.

[10] Cancer Australia. Cancer learning. 2011. Available from: http://www.cancerlearning.gov.au/.

[11] Sabesan S, Simcox K, Marr J. Medical oncology clinics through videoconferencing: an acceptable telehealth model for rural patients and health workers. Intern Med J 2012; 42(7):780-5.

[12] British Columbia eHealth Steering Committee. eHealth Strategic Framework. Vancouver: British Columbia Ministry of Health, 2005.

[13] Mitchell G. The role of the general practice in cancer care. Aust Fam Physician 2008; 37(9):698-702.

[14] Australian Government: Cancer Australia. CanNET national evaluation (final report – phase 1). 2009. Available from: http://canceraustralia.gov.au/publications-resources/cancer-australia-publications/cannet-national-evaluation-final-report-phase-1

[15] Rhee JJ, Zwar N, Vagholkar S, Dennis S, Broadbent AM, Mitchell G. Attitudes and barriers to involvement in palliative care by Australian urban general practitioners. J Palliat Med 2008; 11(7):980-5.

[16] Munday D, Mahmood K, Dale J, King N. Facilitating good processes in primary palliative care: does the Gold Standards Framework enable quality performance? Fam Pract 2007:1-9.

[17] Holmberg L. The role of the primary-care physician in oncology care. Primary healthcare and specialist cancer services. Lancet Oncol 2005; 6:121-2.

[18] Aubin M, Vezina L, Verreault R, Fillion L, Hudon E, Lehmann F, et al. Family physician involvement in cancer care follow-up: the experience of a cohort of patients with lung cancer. Ann Fam Med 2010; 8(6):526-32.

[19] National Health and Hospitals Reform Commission. A Healthier Future for All Australians – Final Report of the National Health and Hospitals Reform Commission. Canberra, 2009:107.

[20] Grunfeld E. Cancer survivorship: a challenge for primary care physicians. Br J Gen Pract 2005; 55(519):741-2.

[21] Esterman A, Wattchow D, Pilotto L, Weller D, McGorm K, Hammett Z. Randomised controlled trial of general practitioner compared to surgical specialist follow up of patients with colorectal cancer. Paper presented at the GP & PHC Research Conference, 2004. Available from: http://www.phcris.org.au/conference/2004/index.php

[22] Cancer Council Australia. Cancer council Australia wiki platform. 2012. Available from: http://wiki.cancer.org.au/australia/Main_Page

[23] Mitchell GK, Burridge LH, Colquist SP, Love A. General practitioners’ perceptions of their role in cancer care and factors which influence this role. Health Soc Care Community 2012; 20(6):607-16.

[24] Australian Government: Cancer Australia. CanNET national evaluation (final report – phase 1). 2009. Available from: http://canceraustralia.gov.au/publications-resources/cancer-australia-publications/cannet-national-evaluation-final-report-phase-1

[25] Sisler J, McCormack-Speak P. Bridging the gap between primary care and the cancer system: the UPCON network of CancerCare Manitoba. Can Fam Physician 2009; 55(3):273-8.

[26] Cancer Council Australia. Ideal oncology curriculum for medical schools. 2012. Available from: http://www.cancer.org.au/health-professionals/oncology-education/ideal-oncology-curriculum-for-medical-schools.html