Categories
Editorials

Editor’s welcome: healthcare leaders of tomorrow

It is with great pleasure that I welcome you to Volume 6, Issue 1 of the Australian Medical Student Journal (AMSJ), the national peer-reviewed journal for medical students. The AMSJ serves two purposes: firstly, to provide a stepping-stone for medical students wishing to advance their skills in academic writing and publication; and secondly, to inform Australian medical students of important news relating to medical education and changes in medical care. This issue of the AMSJ showcases an array of research, reviews, and opinions that address a wide range of contemporary subjects. In particular, there is a trend for articles on translational research and national healthcare matters.

Australia’s healthcare system is evolving rapidly to accommodate an ageing demographic, growing epidemics of chronic disease, and the introduction of new and often expensive medical technology. We are concurrently faced with major challenges including declining economic growth and considerable budget cuts in an attempt to control national debt. The coming decades will be particularly challenging for our healthcare system, but also for us as future doctors. We will have to make difficult decisions to limit healthcare spending whilst ensuring that Australia maintains a leading world-class healthcare system. More than ever, doctors will be required to be leaders in the national healthcare arena, and it will be up to you and your colleagues to direct our ever-changing healthcare system.

In light of this, I am pleased to introduce this issue with a guest article by Professor Brian Owler,  President of  the  Australian  Medical Association. Professor Owler discusses the potential threat of university fee deregulation to Australia’s future medical profession. The AMA and others will be launching a social media and public campaign in February to discourage senators from passing a reformed bill.

This issue of the AMSJ has a record number of original research articles, reflecting some of the best research conducted by medical students across Australia. Not only have the authors written excellent papers, they have spent months, even years conducting these extensive projects. Mr Edward Teo reports a large study comparing specialty choices and rural intentions of students graduating from a private medical program compared to those from other Australian medical schools. Ms Skye MacLeod reports on the adequacy of anticoagulation according to the CHADS2 score in patients with atrial fibrillation. Another two studies address the impact of language and literacy respectively on hospital care.

The reviews and feature articles in this issue cover a diverse array of topics. In particular, there are several articles addressing the role of novel oral anticoagulants in the management of atrial fibrillation and venous thromboembolism. This is a large area of interest and transition, and we are pleased to inform medical students of the latest evidence and guidelines in this field. It is interesting to observe a growing trend in the publication of systematic reviews in our journal. Systematic literature appraisal and assessment of bias are highly useful skills, which are not only vital for advancing research, but also facilitate the delivery of evidence-based medical care. We encourage students to learn about these methods and consider writing a systematic review during their medical education.

The AMSJ is staffed by a large team of volunteer medical students from almost every medical school in the country. This issue, we received a record number of submissions, with all staff increasing their workload to review and manage each manuscript. I would like to commend the editorial team, who have worked tirelessly over the last year. I also acknowledge the new proof-editing team, who have been swift at proof-reading all manuscripts and assisting in the development of the new AMSJ style guide. The printed copies of the AMSJ and the AMSJ website would not be possible without help from the print-layout team, IT officers, and sponsorship officers, together led by Miss Biyi Chen. Our Director, Mr Christopher Foerster, has given his heart and soul to ensure that the AMSJ is of the highest possible standard. Finally, I thank our readers, authors, peer-reviewers, and sponsors who continue to support our journal.

On behalf of the staff of the AMSJ, I hope you enjoy this issue.

Thank you to AMSJ Peer Reviewers:

  • Emeritus Prof Francis Billson
  • Prof Richard Murray
  • Prof Ajay Rane
  • Prof Andrew Bonney
  • Prof Andrew Somogyi
  • Prof Andy Jones
  • Prof Anne Tonkin
  • Prof Jan Radford
  • Prof Jon Emery
  • Prof Louise Baur
  • Prof Lyn Gilbert
  • Prof Mark Harris
  • Prof Michael Chapman
  • Prof Nicholas Zwar
  • Prof Paul Thomas
  • Prof Rakesh Kumar
  • Prof Sarah Larkins
  • Prof Tomas Corcoran
  • A/Prof Anthony Harris
  • A/Prof David Baines
  • A/Prof Janette Vardy
  • A/Prof Roslyn Poulos
  • A/Prof Sabe Sabesan
  • A/Prof William Sewell
  • A/Prof Peter Gonski
  • A/Prof Debbie Wilson
  • Dr Adam Parr
  • Dr Andrew Chang
  • Dr Andrew Henderson
  • Dr Anna Johnston
  • Dr Cristan Herbert
  • Dr Dan Hernandez
  • Dr Danforn Lim
  • Dr Danielle Ni Chroinin
  • Dr Darren Gold
  • Dr Despina Kotsanas
  • Dr Freda Passam
  • Dr Greg Jenkins
  • Dr Haryana Dhillon
  • Dr John Reilly
  • Dr Justin Burke
  • Dr Justin Skowno
  • Dr Kathryn Weston
  • Dr Lynnette Wray
  • Dr Mark Donaldson
  • Dr Mark Reeves
  • Dr Matthew Fasnacht
  • Dr Mike Beamish
  • Dr Nolan McDonnell
  • Dr Nollaig Bourke
  • Dr Nuala Helsby
  • Dr Peter Baade
  • Dr Pooria Sarrami Foroushani
  • Dr Rachel Thompson
  • Dr Ross Grant
  • Dr Sal Salvatore
  • Dr Shir-Jing Ho
  • Dr Sid Selva-Nayagam
  • Dr Stephen Rogerson
  • Dr Sue Hookey
  • Dr Sue Lawrence
  • Dr Sue Thomas
  • Dr Susan Smith
  • Dr Venkat Vangaveti
  • Ms Dianna Messum
  • Ms Margaret Evans
Categories
Letters

Surgical hand ties: a student guide

Surgical hand ties are a procedural skill commonly employed in surgery; however, student exposure to practical surgical experience is often limited. Students are therefore often excited at the opportunity to learn and practise these skills for themselves. Often the only opportunities to formally learn these skills come in the form of workshops presented at student conferences or run by university special interest groups.

Having attended such surgical skills workshops, I have noticed the difficulty demonstrators and students have had in teaching and learning the skill of surgical hand ties. I felt this was the product of two things: the difficulty the tutors had in demonstrating the small movements of the fingers to an audience; and the students’ difficulty with remembering each step later. Therefore, I combined an easy-to-follow graphic with some helpful memory aids into a simple resource to help medical students learn and master hand ties.

In addition to being an individual resource, this guide was also created for use in a workshop setting. Ideally, a demonstrator would show the students the basic steps involved in hand ties. The guide could then be used to reinforce this learning, where the student can practise with the sutures in their hands while following the steps using a combination of pictures, text, and memory aids. This would also have the benefit of letting the demonstrator help students with more specific questions on technique, rather than repeating the same demonstration multiple times.

The overall aim of this guide is to make the process of learning and teaching surgical hand ties to students easier, and to improve recall and proficiency for students performing the skill through the use of simplified steps and diagrams.


Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

J Ende: jesse@ende.com.au

Categories
Guest Articles

Medical degrees being priced out of reach

One of the most disappointing and troubling aspects of this year’s Federal Budget was the Government’s decision to deregulate university fees and to reduce the subsidy for Commonwealth Supported Places by an average of 20 per cent.

It’s a decision that the AMA has been fighting, given the harm it could do to the medical profession.

Medicine has always been seen as an attractive career; however, university fee deregulation will mean that medical graduates will be burdened with large debts as they enter the medical workforce as interns. They will carry that debt with them, and this will have consequences that do not seem to have been considered.

At the same time as fee deregulation, the Government is reducing funding for Commonwealth Supported Places.

So, what is the impact of the proposed changes? What will the fees be?

Many medical degrees are now four-year graduate degrees. There are also still a number of undergraduate degrees of five and six years. Let’s consider the graduate degree example.

Prior to studying medicine, students have usually completed a science degree of three years, while many others do four or five year degrees. Before they even start medicine, they are likely to have significant debt. Using fees charged to international students as a guide, this debt can be in the order of $50,000.

Medical Deans Australia and New Zealand has estimated that a student completing a four-year professional entry medical course – which is 63 per cent of Australian medical courses − would have a final HECS debt of $55,656, compared to the current debt of $40,340. [1]

The fine print of this modelling is that it is based on the break-even scenario in which fees are limited.

Even the Medical Deans note that the cost of providing medical training is greater than these amounts. Following detailed modelling in 2011, Medical Deans found that it actually costs a university $50,272–$51,149 per year to train a single doctor.

The equation does change if universities raise fees past their benchmarked costs to the absolute limit of international student fee levels, which are flexible in their own right.

It is naive to think that universities would not soon raise their fees to the level that covers their costs, even if they would be restrained from going further and making a profit. The current fee for an international medical student at one prominent medical school averages around $76,000 per year.

For a full fee paying domestic student, the fee averages around $64,000 per year.

Indications are that, with fee deregulation, the fee for medicine will likely fall between these numbers, at around $70,000.

Students will, of course, be subsidised by the Commonwealth in the order of $18,000 per year.

Therefore the debt accumulated is around $52,000 per year.

Over four years this is more than $200,000 on top of the $50,000 debt that they are likely to enter medicine with from their undergraduate degrees. So, conservative estimates put the debt at around $250,000 for a medical graduate.
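The arithmetic behind this estimate can be checked directly. A minimal sketch follows; the $70,000 fee, $18,000 subsidy, and $50,000 prior undergraduate debt are the approximate figures quoted above, not official data:

```python
# Rough reconstruction of the graduate-debt estimate quoted in the article.
# All inputs are the article's approximate figures, not exact values.
annual_fee = 70_000      # likely deregulated annual fee for medicine
annual_subsidy = 18_000  # Commonwealth subsidy per year
years = 4                # four-year graduate medical course
prior_debt = 50_000      # typical debt from a prior undergraduate degree

annual_debt = annual_fee - annual_subsidy      # debt accrued per year
total_debt = annual_debt * years + prior_debt  # debt on graduation

print(f"Debt accrued per year: ${annual_debt:,}")   # → $52,000
print(f"Estimated total debt:  ${total_debt:,}")    # → $258,000
```

At roughly $258,000, this is consistent with calling $250,000 a conservative estimate.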

By way of comparison, Bond University has published its fees for 2015.

To study a Bachelor of Medicine, Bachelor of Surgery will cost $331,380 ($23,670 per semester × 14 semesters), with the requirement to pay $47,340 in advance for the first two semesters.

Some medical deans will tell you that fees collected by the Faculty of Medicine will be used to subsidise other areas of the university that are more price sensitive. Bursaries, such as those paid by some universities, will have to be sourced from fees such as those collected from medical students.

There will be immense pressure to raise fees for medical students accordingly. I suspect that the estimate of $250,000 will seem very conservative indeed.

Many reading this may be wondering: if a medical degree is price insensitive, then what is the issue? Well, there are a number of issues.

There is good evidence that high fee levels and the prospects of significant debt deter people from lower socio-economic backgrounds from entering university.

One of the strengths of medical education in Australia is diversity in the selection of students, including those from lower socio-economic backgrounds. Even under the current arrangements, we still fall short.

In 2009, the former Federal Government outlined a goal that, by 2020, 20 per cent of higher education enrolments at undergraduate level should be filled with students from low socio-economic backgrounds. [2]

A report on selection and participation in higher education, commissioned by the Group of Eight (Go8) in March 2011, revealed that low socio-economic status (SES) applicants – from the lowest 25 per cent SES bands – were under-represented at 18 per cent of university applications, while high SES applicants – from the top 25 per cent SES bands – were over-represented at 31.6 per cent, relative to their population share.

Even now, we are still under the target of 20 per cent.

Closer examination of applications for health disciplines by field of education shows a greater proportion of high SES students applying for medicine – 45 per cent – compared to 15 per cent from low SES backgrounds. [3]

A significant number of rural students come from a low socio-economic background. High fee levels and the prospect of significant debt will deter them from entering university.

Rural medical students already incur substantive extra costs in accommodation and travel. To place further financial barriers to these students would result in many finding the costs prohibitive. Aboriginal and Torres Strait Islander students may well be hardest hit and discouraged by such measures.

No matter what upfront loan assistance is provided, it will deter students from a low income background from entering medicine. This is a real issue in medicine. We want the best and the brightest, not the wealthiest. And we want the medical profession to have the same diversity as the communities it serves.

I believe Australia has gained significant benefit by attracting medical students from diverse backgrounds who have entered medicine, either through graduate or undergraduate programs, based on merit. That is something that we should continue to value, something that we should continue to benefit from, but it is something that is under threat through these changes.

In the context of this debate, some have suggested that one way to meet demand for medical school places would be to uncap places. The AMA believes this would be a recipe for disaster.

The AMA does not support the creation of new medical schools or additional places until it has been established that there are sufficient training posts and clinical supervisors to provide prevocational and vocational training for the increased numbers of students currently enrolled in Australian medical schools.

Further, any expansion of medical school places should be consistent with workforce planning and the much anticipated five-year training plan we are expecting from the National Medical Training and Advisory Network.

We must not underestimate the impact that an uncapped market would have on demand for health services either.

So, why does graduate debt matter in medicine? There’s a perception that doctors earn enough to pay for this level of debt, but the context of debt is important.

A Go8 report on understanding graduate earnings, published in July 2014, suggests that, after 20 years of employment, medicine and law graduates are the top performers, earning $117,000 and $107,000 respectively. [4]

Note, however, that this is after 20 years. In the years preceding that, earnings can be extremely variable depending on individual career paths.

In terms of average earnings, there is a wide variance in the average wage according to discipline.

This also has an impact on the relative financial attractiveness of different medical specialties.


The OECD reports that self-employed general practitioners in Australia earned 1.7 times the average wage in 2011, compared to self-employed specialists who earned 4.3 times the average wage. [5]

Understanding the context of debt incurred by medical students is also important in light of the significant costs of further training required by junior doctors to achieve specialist qualification, and the loss of earning potential for up to 15 years while doing so.

Overseas evidence shows that, in relation to medicine, a high level of student debt is a factor in career choice, driving people towards better remunerated areas of practice and away from less well paid specialties like general practice.

Areas of medicine that are better remunerated will become more attractive. Procedural specialties will be more attractive compared to general practice or areas such as rehabilitation, drug and alcohol, or paediatrics.

Ultimately, these decisions will exacerbate doctor shortages in rural and regional areas.

We do not want to move to a US-style medical training system where students’ career choices are influenced by degree of debt. This would have a significant impact on access to services and on workforce planning.

Before its abolition by the Government, Health Workforce Australia published medical workforce projections through to 2025. While these show that, by 2025, the overall medical workforce will be very close to being in balance, there will be geographic shortages as well as shortages in specific specialties.

Encouraging doctors to work in these areas and specialties will be much more difficult if they are saddled with high levels of debt. This would undermine the significant effort that has been made by the Commonwealth to expand doctor numbers, as well as attract graduates to work in underserviced communities and specialties.

Finally, I would like to discuss the implications for higher degrees. These are significant for medical students with an interest in research and academic work.

High debt levels among medical graduates will deter our best and brightest, our future leaders, from undertaking PhD programs.

The number of medical graduates undertaking such degrees at some universities is significant – up to a third.

It is already a major commitment, not only in terms of the minimum three years of time, but also financially.

As a medical graduate, already with significant debt, often at the stage of life of starting a family, it would not be surprising to see commitment to further research, to science, questioned.

This is a real issue for the people who undertake such degrees − our clinician scientists, our future medical leaders. They are the doctors who lead departments, who lead research teams, and run laboratories.

For all the talk of the Medical Research Future Fund, it is disappointing that implications such as these do not seem to have been considered.

The AMA believes that the Commonwealth should be providing additional support for primary medical education, not less, and we do not see fee deregulation as a solution to funding problems.

References

[1] Medical Deans Australia and New Zealand. Factsheet: Contribution and costs of a medical qualification. Sydney: MDANZ, 2014. http://www.medicaldeans.org.au/fact-sheet-contributions-and-costs-of-a-medical-qualification.html (accessed Nov 2014).

[2] Department of Industry. Future directions for tertiary education. Canberra: DI, 2009. http://www.industry.gov.au/highereducation/ResourcesAndPublications/ReviewOfAustralianHigherEducation/Pages/FutureDirectionsForTertiaryEducation.aspx (accessed Nov 2014).

[3] Palmer N, Bexley E, James R; Centre for the Study of Higher Education, University of Melbourne. Selection and participation in higher education: university selection in support of student success and diversity of participation. Prepared for the Group of Eight, 2011. http://www.cshe.unimelb.edu.au/people/james_docs/Selection_and_Participation_in_Higher_Education.pdf (accessed Nov 2014).

[4] Group of Eight Australia. Policy Note: understanding graduate earnings. 2014. https://go8.edu.au/sites/default/files/docs/publications/understanding_graduate_earnings_-_final.pdf (accessed Nov 2014).

[5] Organisation for Economic Co-operation and Development. Health at a Glance 2013, seventh edition. OECD, 2013. http://dx.doi.org/10.1787/health_glance-2013-en (accessed Nov 2014).

Categories
Guest Articles

Introducing JDocs, a competency framework for junior doctors

Introduction

The Royal Australasian College of Surgeons is pleased to announce the launch of JDocs, a competency framework supported by a suite of educational resources designed to promote flexible and self-directed learning, together with assessment opportunities to record and log procedural experiences and capture evidence of personal achievements. These resources will be available online later this year, and will continue to evolve and expand over time. Some resources will be available for an annual subscription fee.

Why has the College engaged in the prevocational space?

The College recognised the need and importance of re-engagement with prevocational junior doctors to provide guidance and education that would assist with their development towards a proceduralist career. Key to this was to ensure that the doctor entering any procedural speciality program would be well-prepared and clinically competent relevant to their postgraduate year. As a result, the College established JDocs, which is available to any doctor registered in Australia and New Zealand, from and including internship, with the level of engagement determined by the individual doctor.

Junior doctors will also be eligible to apply for the General Surgical Sciences Examination from 2015. This exam tests anatomy, physiology and pathology to a high level.

JDocs does not guarantee selection into any procedural specialty training program. However, the Framework and its supporting resources describe the many tasks, skills and behaviours a junior doctor should achieve at defined postgraduate levels, and will help the self-motivated junior doctor recognise the skills and performance standards expected prior to applying to a specialty training program.

What does the JDocs Framework cover?

The JDocs Framework is based on the College’s nine core competencies, with each competency considered to be of equal importance, and is described in stages appropriate for each of the first three postgraduate clinical years, as well as those beyond. In order to link the many tasks, skills and behaviours of the Framework to everyday clinical practice, key clinical tasks have been developed that are meaningful for the junior doctor. These tasks can be used to demonstrate achievement of the competencies and standards outlined in the Framework, and also make it possible for the junior doctor to show they are competent at the tasks and skills required in order to commence specialty training.

Accessing the JDocs

The first phase of the JDocs website, http://jdocs.surgeons.org/signup.htm, enables the junior doctor to register for updates and download a copy of the Framework. The College’s website and social media feeds will also deliver updates on the progression of JDocs and the launch of resources as they become available later this year.

A shareable app has been developed that provides an overview of JDocs, as well as a sample of learning resources, and can be accessed in the following ways:

  • SMS JDocs to 0400813813
  • Scan the following QR code

v6_i1_a4b

Social media

Twitter: @RACSurgeons, #RACSJDocs

Facebook: Royal Australasian College of Surgeons

Summary

In summary, the JDocs Framework sets out the professional standards and learning outcomes to be achieved during the early postgraduate/prevocational clinical years. It describes and assists early career professional development for junior doctors aspiring to procedural medical careers, including surgery.

Categories
Review Articles

Bispectral analysis for intra-operative monitoring of the neurologically impaired: a literature review

Introduction: The bispectral index (BIS) is a technology that uses a modified electroencephalogram (EEG) to predict the likelihood that an anaesthetised patient has awareness of their surroundings. This method of monitoring was developed by analysing the EEGs of approximately 1000 patients with normal neurological function. It therefore has questionable applicability to those with neurological disability, which may cause abnormal EEG patterns. Aim: To review the literature and establish whether the BIS monitor can be used to measure depth of anaesthesia in patients with neurological disability. Method: Databases including Ovid MEDLINE, the Cochrane Central Register of Controlled Trials, EMBASE and PubMed were searched to identify studies investigating the use of the BIS in patients with neurological disability causing atypical EEG patterns. Results: Four case reports and four observational studies were found describing patients with Alzheimer’s disease, vascular dementia, intellectual disability, epilepsy and congenitally low-amplitude EEG who were monitored with the BIS when undergoing anaesthesia. In general, these studies showed that patients with neurological disabilities score lower on the BIS than their non-disabled peers, even when fully aware; however, relative changes in BIS score appear to reflect changes in conscious state and likelihood of awareness reasonably accurately. Conclusion: The BIS score fails to provide an absolute measure of level of consciousness in patients with neurological impairment and should not be relied upon as the sole measure of awareness. It can, however, provide a relative measure of change in consciousness.

“The anaesthetist and surgeon could have before them on tape or screen a continuous record of the electric activity of… [the] brain.” F. Gibbs, 1937 [1]


Introduction

Originally, monitoring depth of anaesthesia involved the use of clinical signs considered proxies for consciousness, such as those described by Snow in 1847 and later by Guedel. [2,3] Subsequent calculations of the minimum alveolar concentration improved monitoring and reduced the incidence of awareness. More recently, however, it has been recognised that intra-operative awareness can occur independently of sympathetic responses or changes in end tidal concentration parameters. [4] Awareness under anaesthesia is defined as “consciousness under general anaesthesia with subsequent recall”, [5] which is commonly detected via patient self-reports or the use of a structured interview, such as a ‘Brice’ questionnaire. [6] The current incidence of awareness is estimated at 0.1-0.2% of surgical procedures. [7] Though uncommon, episodes of intra-operative awareness can have significant negative psychological consequences. [8] These consequences have the potential to be greater in patients with neurological disease as they may lack insight into their medical condition and the need for surgery.

The EEG was first suggested as a way to overcome the shortcomings of clinical measures of awareness in 1937. [1] Since then, there have been numerous attempts to achieve this, culminating in the production of the bispectral index (BIS) in 1996. The BIS uses a proprietary algorithm to transform the EEG into a single, dimensionless number between 0 and 100: 100 corresponds to “awake”, 65-85 to “sedation”, and 40-60 to “general anaesthesia”. The mathematics of bispectral analysis are beyond the scope of this paper but are detailed elsewhere. [9] A trial in patients at high risk for awareness, but without neurological illness, found significant reductions in rates of intra-operative awareness, though similar successes have not been replicated elsewhere. [10,11]
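The descriptive bands quoted above can be sketched as a simple lookup. This is a hypothetical illustration only: the BIS algorithm itself is proprietary, and the source does not say how values falling between the named bands (e.g. 61-64 or 86-99) should be labelled, so that handling is an assumption:

```python
def describe_bis(score: int) -> str:
    """Map a BIS value (0-100) to the descriptive bands cited in the text.

    Band boundaries outside those quoted (e.g. 61-64, 86-99) are labelled
    'between bands' here as an assumption; the source does not define them.
    """
    if not 0 <= score <= 100:
        raise ValueError("BIS is a dimensionless number between 0 and 100")
    if score == 100:
        return "awake"
    if 65 <= score <= 85:
        return "sedation"
    if 40 <= score <= 60:
        return "general anaesthesia"
    return "between bands / undefined in source"

print(describe_bis(100))  # awake
print(describe_bis(50))   # general anaesthesia
print(describe_bis(70))   # sedation
```

As the case reports below illustrate, a value inside a band is not proof of the corresponding conscious state in patients with abnormal EEGs; the bands were validated in neurologically normal patients.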

Importantly, the algorithm underpinning the BIS was developed by analysing the normal electroencephalograms (EEG) of over 1000 healthy volunteers. Patients with neurologic disease, however, often have underlying structural or physiological abnormalities that manifest themselves as abnormal EEG findings. This has been demonstrated in a variety of psychiatric, degenerative and developmental disabilities. [12] Atypical EEG patterns not taken into consideration during the development of the algorithm can therefore influence BIS levels independently of the depth of anaesthesia. [13] Theoretically, this reduces the BIS’s ability to accurately measure depth of anaesthesia in patients with neurological disease. [14]

Aim

To review the literature and establish whether the BIS monitor can be used to measure depth of anaesthesia in patients with neurologic disability.

Search strategy

A search was undertaken of the medical literature. The following keywords and their alternative spellings were mapped to their medical subject headings: neurology, cognitive disability, intellectual disability, BIS, bispectral index and intra-operative monitoring. These keywords were combined with appropriate Boolean operators and used to search databases including Ovid MEDLINE, the Cochrane Central Register of Controlled Trials, EMBASE and PubMed.

Literature review

Four case reports and four observational studies were found. Conditions described in the literature were Alzheimer’s disease and vascular dementia (one observational study), intellectual disability (two observational studies), seizures (three case reports and one observational study), and congenital low-amplitude EEG (one case report).

The results of a prospective observational study suggest the BIS may be of limited use in monitoring patients with Alzheimer’s disease or vascular dementia. [15] The study compared 36 patients with dementia with 36 age-matched controls. It found that patients with these conditions had an awake BIS on average of 89.1, 5.6 lower than age-matched controls with a baseline of 94.7, and below 90, considered the cut-off point indicating sedation. [16] These results indicate the BIS values corresponding to awareness validated in normal patients may not apply to those diagnosed with Alzheimer’s disease or vascular dementia. Participants in this study were not anaesthetised, so response in BIS to anaesthesia was unable to be assessed. Therefore, it could not be determined whether the BIS intervals which correspond to general anaesthesia and sedation in normal patients were applicable to Alzheimer’s patients or alternatively whether they would need to be anaesthetised to a lower BIS given their lower baseline level.

The BIS in intellectually-disabled patients has been investigated in two prospective, observational studies, though these provided conflicting results. The first compared 20 children with quadriplegic cerebral palsy and intellectual disability with 21 matched controls at a number of clinical endpoints. [17] The mean BIS of children with cerebral palsy was significantly lower at sedation (91.63 vs 96.79, p = 0.01), at an end-tidal sevoflurane concentration of 1% (48.55 vs 53.38, p = 0.03) and at emergence (90.73 vs 96.45, p = 0.032). The authors concluded validation of the BIS in children with intellectual disability may be tenuous. However, though the absolute BIS scores were different between these groups, the relative reduction in BIS score and pattern of change at increasing levels of anaesthesia was similar. The BIS may therefore not be a guide to the absolute depth of anaesthesia, but changes indicate increasing or decreasing awareness. It should be noted that this study was performed in children, for whom the BIS was not developed, as opposed to adults. The difference in EEGs between adults and children may therefore have confounded these results.

The second article described a prospective observational study of 80 adolescent and adult patients with varying degrees of intellectual disability undergoing general anaesthesia for dental procedures. [18] The aetiology of intellectual disability varied between patients but was predominantly autism, cerebral palsy or Down syndrome. The study found no statistically significant difference in BIS scores between patients with mild, moderate, severe or profound disability at eight different clinical endpoints (awake, induction of anaesthesia, intravenous catheter placement, tracheal intubation, start of surgery, end of surgery, awakening to commands, and tracheal extubation). The only statistically significant finding of the study was that patients with more severe intellectual disability took longer to emerge from anaesthesia. The BIS monitor, however, accurately predicted this and provided an additional clue to the anaesthetist of the time required until extubation. These results indicate that intellectual disability does not affect the BIS and support the authors’ hypothesis that the BIS score is “a measure of global neuronal function, not a measure of the aberrant neuronal connection” [18] and could therefore be applied to these patients.

Though these two studies provide conflicting results on whether intellectual  disability  affects  the  absolute  BIS  level,  both  provide good evidence that relative reductions in BIS scores correlate well with increasing depth of anaesthesia in these patients. The BIS may therefore have a role in monitoring changes in conscious states.

Despite the known ability of epilepsy to cause significant derangement of the EEG, only three case reports were found dealing with this in relation to the BIS. The first describes a patient with pre-existing epilepsy undergoing surgery. [19] Despite no clinical change, the patient’s BIS score dropped sharply from 40 to 20 and then recovered, a cycle that repeated every five minutes over a period of hours until the raw EEG was checked and found to show epileptiform activity. Anticonvulsants were given, at which point the BIS stabilised. In another report, seizures were evoked using photic stimulation. [20] Despite the patient remaining conscious, the BIS level dropped to 63 during the seizure. In the third case report, a patient in status epilepticus had a measured BIS of 93 despite being unconscious. [21] This dropped to 23 with control of the seizures. These reports provide strong support for the assertion that “BIS values may not accurately reflect the actual level of consciousness when abnormal EEG activity is evoked in epileptic patients”. [20]

In these studies of epilepsy, when patients were not ictal, BIS scores provided measures of depth of anaesthesia that were as accurate as would be expected in non-epileptic patients. The seizures themselves were heralded by large, rapid changes in the BIS, as was their recovery. Epilepsy is not therefore a contraindication to monitoring with the BIS, but anaesthetists should be aware that abnormal BIS scores may be the result of seizures rather than changes in depth of anaesthesia. Furthermore, in instances of sudden changes in the BIS the raw EEG can be checked to determine if the change is due to seizure activity.

The final description of a neurological condition affecting the BIS found in the literature was a congenital, non-pathological low-amplitude EEG. In one case report, a man with this condition, despite being fully conscious, had a recorded BIS of 40. [22] This is at the lower edge of the range considered ideal for general anaesthesia. As many as 5-10% of the population may show this rhythm on EEG; it is genetically determined and not associated with any pathology. [23] The current BIS algorithm is incapable of distinguishing awareness from anaesthesia in these patients.

Conclusion

A search of the literature showed that almost all of the neurological conditions studied cause abnormal BIS levels. Alzheimer’s disease, vascular dementia, intellectual disability, epilepsy and congenitally low-amplitude EEG have been studied, and all of these disease states, except intellectual disability, for which the results were conflicting, were shown to affect the BIS. It is far from clear whether the BIS has a role in detecting intra-operative awareness, in addition to standard clinical measures, in patients with neurological disease. The use of the BIS in these cases may therefore mislead the anaesthetist rather than help them. If the anaesthetist does choose to use the BIS to monitor these patients, the BIS should be measured at baseline, as the relative reduction in BIS scores may be more important than the absolute value. Given the lack of published data on this subset of patients, further controlled trials, or subgroup analyses of existing trials, comparing the use of the BIS against anaesthetic outcomes in patients with neurological disease would be a worthy avenue of future research.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

J Gipson: jsgip2@student.monash.edu.au

References

[1] Gibbs FA, Gibbs EL, and Lennox WG. Effect on the electro-encephalogram of certain drugs which influence nervous activity. Arch Intern Med 1937;60:154-66.

[2] Snow J. On the inhalation of the vapor of ether in surgical operations: containing a description of the various stages of etherization and a statement of the result of nearly eighty operations in which ether has been employed in St. George’s and University College Hospitals. London: Churchill J; 1847.

[3] Guedel A. Inhalational anesthesia: a fundamental guide. New York: Macmillan; 1937.

[4] Domino KB, Posner KL, Caplan RA, Cheney FW. Awareness during anesthesia: a closed claims analysis. Anesthesiology. 1999;90(4):1053-61.

[5] American Society of Anesthesiologists Task Force on Intraoperative Awareness. Practice advisory for intraoperative awareness and brain function monitoring: a report by the American Society of Anesthesiologists Task Force on Intraoperative Awareness. Anesthesiology. 2006;104(4):847-64.

[6] Pandit JJ, Andrade J, Bogod DG, Hitchman JM, Jonker WR, Lucas N et al. 5th National Audit  Project  (NAP5)  on  accidental  awareness  during  general  anaesthesia:  protocol, methods, and analysis of data. Br J Anaesth. 2014;113(4):540-8.

[7] Myles PS, Williams DL, Hendrata M, Anderson H, Weeks AM. Patient satisfaction after anaesthesia and surgery: results of a prospective survey of 10,811 patients. Br J Anaesth. 2000;84(1):6-10.

[8] Lennmarken C, Bildfors K, Eulund G, Samuelsson P, Sandin R. Victims of awareness. Acta Anaesthesiol Scand. 2002;46(3):229-31.

[9] Sigl JC, Chamoun NG. An introduction to bispectral analysis for the electroencephalogram. J Clin Monit. 1994;10(6):392-404.

[10] Myles PS, Leslie K, McNeil J, Forbes A, Chan MT. Bispectral index monitoring to prevent awareness during anaesthesia: the B-Aware randomised controlled trial. Lancet. 2004;363(9423):1757-63.

[11] Avidan MS, Zhang L, Burnside BA, Finkel KJ, Searleman AC, Selvidge JA et al. Anesthesia awareness and the bispectral index. N Engl J Med. 2008; 358(11):1097-108.

[12]  Hughes  JR,  John  ER.  Conventional  and  quantitative  electroencephalography  in psychiatry. J Neuropsychiatry Clin Neurosci. 1999;11(2):190-208.

[13] Dahaba AA. Different conditions that could result in the bispectral index indicating an incorrect hypnotic state. Anesth Analg. 2005;101(3):765-73.

[14] Bennett C, Voss LJ, Barnard JP, Sleigh JW. Practical use of the raw electroencephalogram waveform during general anesthesia: the art and science. Anesth Analg. 2009;109(2):539-50.

[15] Renna M, Handy J, Shah A. Low baseline Bispectral Index of the electroencephalogram in patients with dementia. Anesth Analg. 2003;96(5):1380-5.

[16] Johansen JW, Sebel PS. Development and clinical application of electroencephalographic bispectrum monitoring. Anesthesiology. 2000;93(5):1336-44.

[17] Choudhry DK, Brenn BR. Bispectral index monitoring: a comparison between normal children and children with quadriplegic cerebral palsy. Anesth Analg. 2002;95(6):1582-5.

[18] Ponnudurai RN, Clarke-Moore A, Ekulide I, Sant M, Choi K, Stone J et al. A prospective study of bispectral index scoring in mentally retarded patients receiving general anesthesia. J Clin Anesth. 2010;22(6):432-6.

[19] Chinzei M, Sawamura S, Hayashida M, Kitamura T, Tamai H, Hanaoka K. Change in bispectral index during epileptiform electrical activity under sevoflurane anesthesia in a patient with epilepsy. Anesth Analg. 2004;98(6):1734-6

[20] Ohshima N, Chinzei M, Mizuno K, Hayashida M, Kitamura T, Shibuya H et al. Transient decreases in Bispectral Index without associated changes in the level of consciousness during photic stimulation in an epileptic patient. Br J Anaesth. 2007;98(1):100-4.

[21]  Tallach  RE,  Ball  DR,  Jefferson  P.  Monitoring  seizures  with  the  Bispectral  index. Anaesthesia. 2004;59(10): 1033-4.

[22] Schnider TW, Luginbuhl M, Petersen-Felix S, Mathis J. Unreasonably low bispectral index values in a volunteer with genetically determined low-voltage electroencephalographic signal. Anesthesiology. 1998;89(6):1607-8.

[23] Niedermeyer E. The normal EEG of the waking adult. In: Electroencephalography: basic principles, clinical applications, and related fields. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins Health; 2011. 1275 p.

Categories
Review Articles

Venous thromboembolism: a review for medical students and junior doctors

Venous thromboembolism, comprising deep vein thrombosis and pulmonary embolism, is a common disease process that accounts for significant morbidity and mortality in Australia. As the clinical features of venous thromboembolism can be non-specific, clinicians need to have a high index of suspicion for the disease. Diagnosis primarily relies on a combination of clinical assessment, D-dimer testing and radiological investigation. Following an evidence-based algorithm for the investigation of suspected venous thromboembolism aims to reduce over-investigation whilst minimising the potential of missing clinically significant disease. Multiple risk factors for venous thromboembolism (VTE) exist; significant risk factors such as recent surgery, malignancy, acute medical illness, prior VTE and thrombophilia are common amongst both hospitalised patients and those in the community. Management of VTE is primarily anticoagulation, traditionally with unfractionated or low molecular weight heparin and warfarin. The non-vitamin K antagonist oral anticoagulants, also known as the novel oral anticoagulants (NOACs), including rivaroxaban and dabigatran, represent an exciting alternative to traditional therapy for the prevention and management of VTE. The significant burden of venous thromboembolism is best reduced through a combination of prophylaxis, early diagnosis, rapid implementation of therapy and management of recurrence and potential sequelae. Junior doctors are in a position to identify patients at risk of VTE and prescribe thromboprophylaxis as necessary. Drawing on the significant body of evidence that guides the diagnosis and treatment of VTE, this article provides a concise summary of the pathophysiology, natural history, clinical features, diagnosis and management of the disease.


Introduction

Venous thromboembolism (VTE) is a disease process comprising deep vein thrombosis (DVT) and pulmonary embolism (PE). VTE is a common problem with an estimated incidence of one to two per 1,000 population each year [1,2], and approximately 2,000 Australians die each year from VTE. [3] PE represents one of the most common preventable causes of in-hospital death [4] and acutely has a 17% mortality rate. [5,6] VTE is also associated with a significant financial burden; the financial cost of VTE in Australia in 2008 was an estimated $1.72 billion. [7] Several important sequelae of VTE exist, including post-thrombotic syndrome, recurrent VTE, chronic thromboembolic pulmonary hypertension (CTEPH) and death. [8,9]

Due to the high incidence of VTE and the potential for significant sequelae, it is imperative that medical students and junior doctors have a sound understanding of its pathophysiology, diagnosis and management.

Pathophysiology and risk factors

The pathogenesis of venous thrombosis is complex and our understanding of the disease is constantly evolving. Although no published literature supports that Virchow ever distinctly described a triad for the formation of venous thrombosis [10], Virchow’s triad remains clinically relevant when considering the pathogenesis of venous thrombosis. The commonly cited triad consists of alteration in the constituents of blood, vascular endothelial injury and alterations in blood flow. Extrapolation of each component of Virchow’s triad provides a framework for important VTE risk factors. Risk factors form an integral part of the scoring systems used in risk stratification of suspected VTE. In the community, risk factors are present in over 75% of patients, with recent or current hospitalisation or residence in a nursing home reported by over 50% of patients with VTE. [11] Patients may have a combination of inherited and acquired thrombophilic defects. Combinations of risk factors have at least an additive effect on the risk of VTE. Risk factors for VTE are presented in Table 1.

[Table 1 (image in original): risk factors for VTE]

Thrombophilia

Thrombophilia refers to a predisposition to thrombosis, which may be inherited or acquired. [14] The prevalence of thrombophilia at first presentation of VTE is approximately 50%, with the highest prevalence found in younger patients and those with unprovoked VTE. [15] Inherited thrombophilias are common in the Caucasian Australian population. The birth prevalence of factor V Leiden heterozygosity and homozygosity, which confers resistance to activated protein C, is 9.5% and 0.7% respectively. Heterozygosity and homozygosity for the prothrombin gene mutation (G20210A) represent another common inherited thrombophilia, with a prevalence of 4.1% and 0.2% respectively. [16] Other significant thrombophilias include antithrombin deficiency, protein C deficiency, protein S deficiency and causes of hyperhomocysteinaemia. [16,17] Antiphospholipid syndrome is an acquired disorder characterised by antiphospholipid antibodies and arterial or venous thrombosis or obstetric-related morbidity, including recurrent spontaneous abortion. Antiphospholipid syndrome represents an important cause of VTE and may occur as a primary disorder or secondary to autoimmune or rheumatic diseases such as systemic lupus erythematosus. [18]

Testing for hereditary thrombophilia is generally not recommended, as it does not affect clinical management of most patients with VTE [19,20] and there is no evidence that such testing alters the risk of recurrent VTE. [21] There are a few exceptions, such as a fertile woman with a family history of thrombophilia, where a positive test may lead to the decision to avoid the oral contraceptive pill or to institute prophylaxis in the peripartum period. [22]

Natural history

Most DVT originate in the deep veins of the calf. Thrombi originating in the calf are often asymptomatic and confer a low risk of clinically significant PE. Approximately 25% of untreated calf DVT will extend into the proximal veins of the leg and 80% of patients with symptomatic DVT have involvement of the proximal veins. [9] Symptomatic PE occurs in a significant proportion of patients with untreated proximal DVT; however the exact risk of proximal embolisation is difficult to estimate. [9,23]

Pulmonary vascular remodelling may occur following PE and may result in CTEPH. [24] CTEPH is thought to be caused by unresolved pulmonary emboli and is associated with significant morbidity and mortality. CTEPH develops in approximately 1-4% of patients with treated PE. [25,26]

Post-thrombotic syndrome is an important potential long-term consequence of DVT, which is characterised by leg pain, oedema, venous ectasia and venous ulceration. Within 2 years of symptomatic DVT, post-thrombotic syndrome develops in 23-60% of patients [27] and is associated with poorer quality of life and significant economic burden. [28]

Diagnosis

Signs and symptoms of VTE are often non-specific and may mimic many other common clinical conditions (Table 2). In the primary care setting, less than 30% of patients with signs and symptoms suggestive of DVT have a sonographically proven thrombus. [29] Some of the clinical features of superficial thrombophlebitis overlap with those of DVT. Superficial thrombophlebitis carries a small risk of DVT or PE and contiguous extension of the thrombus. Treatment may be recommended with low-dose anticoagulant therapy or NSAIDs. [30]

[Table 2 (image in original): common clinical conditions that may mimic VTE]

Deep vein thrombosis

Clinical features

Symptoms of DVT include pain, cramping and heaviness in the lower extremity, swelling and a cyanotic or blue-red discolouration of the limb. [31] Signs may include superficial vein dilation, warmth and unilateral oedema. [31,32] Pain in the calf on forceful dorsiflexion of the foot was described as a sign of DVT by the American surgeon John Homans in 1944. [33] Homans’ sign is non-specific and is an unreliable sign of DVT. [34]

Investigations

Several scoring tools have been evaluated for assessing the pre-test probability of DVT. One such commonly used validated tool is the Modified Wells score, presented in Table 3. [35] The Modified Wells score categorises patients as either likely or unlikely to have a DVT.

[Table 3 (image in original): the Modified Wells score for DVT]

D-dimer is the recommended investigation in patients considered unlikely to have a DVT, as a negative D-dimer effectively rules out DVT in this patient group. [36] D-dimer measurements have several important limitations with most studies of its use in DVT being performed in outpatients and non-pregnant patients. As D-dimer represents a fibrin degradation product, it is likely to be raised in any inflammatory response. This limits its use in post-operative patients and many hospitalised patients.

Venous compression ultrasound with Doppler flow is indicated as the initial investigation in patients who are considered likely to have DVT (Modified Wells ≥ 2) or in patients with a positive D-dimer. Compression ultrasonography is the most widely used imaging modality due to its high sensitivity and specificity, non-invasive nature and low cost. Limitations include operator-dependent accuracy and reduction of sensitivity and specificity in DVT of pelvic veins, small calf veins or in obese patients. [32]

Pulmonary embolism

Clinical features

Ten percent of symptomatic PE are fatal within one hour of the onset of symptoms [9], and delay in diagnosis remains common due to non-specific presentation. [37] Clinical presentation depends on several factors, including the size of the embolus, the rapidity of obstruction of the pulmonary vascular bed and the patient’s haemodynamic reserve. Symptoms may include sudden or gradual onset dyspnoea, chest pain, cough, haemoptysis, palpitations and syncope. Signs may include tachycardia, tachypnoea, fever, cyanosis and the clinical features of DVT. Signs of pulmonary infarction may develop later and include a pleural friction rub and reduced breath sounds. [31] Patients may also present with systemic arterial hypotension, with or without clinical features of obstructive shock. [5,38]

Investigations

The first step in the diagnosis of suspected PE is the calculation of the clinical pre-test probability using a validated tool such as the Wells or Geneva score. Clinician gestalt may be used in place of a validated scoring tool; however, it may be associated with a lower specificity and therefore increased unnecessary pulmonary imaging. [39] Neither clinician gestalt nor a clinical decision rule can accurately exclude PE on its own. An electrocardiogram (ECG) will often be performed early in the presentation of a patient with suspected PE. A variety of electrocardiographic changes associated with acute PE have been described. Changes consistent with right heart strain and atrial enlargement reflect mechanical pulmonary artery outflow tract obstruction. [40] Other ECG changes include sinus tachycardia, ST segment or T wave abnormalities, QRS axis alteration (left or right), right bundle branch block and a number of others. [40] The S1Q3T3 abnormality, described as a prominent S wave in lead I with a Q wave and inverted T wave in lead III, is a sign of acute cor pulmonale. It is not pathognomonic for PE and occurs in less than 25% of patients with acute PE. [40]

In patients with a low pre-test probability, a negative quantitative D-dimer effectively excludes PE. [39] The conventional D-dimer cut-off value (500 µg/L) is associated with reduced specificity in older patients, leading to false positive results. [41] A recent meta-analysis has found that the use of an age-specific D-dimer cut-off value (age × 10 µg/L) increases the specificity of the D-dimer test with little effect on sensitivity. [42] The pulmonary embolism rule-out criteria (PERC), as outlined in Table 4, may be applied to patients with a low pre-test probability to reduce the number of patients undergoing D-dimer testing. [43] A recent meta-analysis demonstrated that in the emergency department the combination of a low pre-test probability and a negative PERC rule results in such a low probability of PE that the risk-benefit ratio of further investigation is not favourable. [44]
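The age-adjusted cut-off is simple arithmetic. As a minimal sketch of the rule (the function name is hypothetical and this is an illustration of the formula only, not a clinical tool), assuming the adjustment applies to patients over 50 years of age, as the conventional 500 µg/L threshold otherwise remains the floor:

```python
def d_dimer_cutoff(age_years: int) -> float:
    """Illustrative D-dimer cut-off in µg/L for ruling out PE.

    Conventional threshold: 500 µg/L. The age-adjusted rule
    (age x 10 µg/L) raises the threshold for older patients,
    improving specificity with little effect on sensitivity.
    Hypothetical helper for illustration -- not clinical software.
    """
    return max(500.0, age_years * 10.0)

# A 78-year-old with a D-dimer of 700 µg/L exceeds the conventional
# cut-off (500) but falls below the age-adjusted one (780).
assert d_dimer_cutoff(30) == 500.0
assert d_dimer_cutoff(78) == 780.0
```

Note that the cut-off only ever increases with age under this rule; for patients aged 50 and under, the conventional 500 µg/L threshold is unchanged.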

[Table 4 (image in original): the pulmonary embolism rule-out criteria (PERC)]

Patients with a high pre-test probability or with a positive D-dimer test should undergo pulmonary imaging. Multidetector computed tomography pulmonary angiography (CTPA) is largely considered the imaging modality of choice for PE given its high sensitivity and specificity and its ability to identify alternative diagnoses. [45,46] CTPA must be used only with a clear indication due to significant radiation exposure, risk of allergic reactions and contrast-induced nephropathy. [47] Concerns have also been raised about overdiagnosis of PE with detection of small subsegmental emboli. [48]

Ventilation-perfusion (V/Q) lung scintigraphy is an alternative pulmonary imaging modality to CTPA. A normal V/Q scan excludes PE; however, a significant proportion of patients will have a ‘non-diagnostic’ result, thus requiring further imaging. [49] Non-diagnostic scans are more common in patients with pre-existing respiratory disease or an abnormal chest radiograph and are less likely in younger and pregnant patients. [48,50] Compared with CTPA, V/Q scanning is associated with fewer adverse effects and less radiation exposure and is often employed when a contraindication to CTPA exists. [49,50]

Bedside echocardiography is a useful investigation if CT is not immediately available or if the patient is too unstable for transfer to radiology. [51,52] Echocardiography may reveal right ventricular dysfunction which guides prognosis and the potential for thrombolytic therapy in massive and sub-massive PE. [51]

The diagnosis of PE during pregnancy is an area of controversy. [54] The diagnostic value of D-dimer during pregnancy using the conventional threshold is limited. With both V/Q scans and CTPA the foetal radiation dose is minimal, although slightly higher with the former. CTPA is associated with a much higher dose of radiation to maternal breast tissue and thus an increased risk of breast cancer. [53,54] In light of these risks, some experts advocate bilateral compression Doppler ultrasound for suspected PE in pregnancy. [54] However, if this is negative and a high clinical suspicion remains, pulmonary imaging is still required.

Prophylaxis

Multiple guidelines exist to direct clinicians on the use of thromboprophylaxis in both medical and surgical patients. [3,55-58] Implementation of thromboprophylaxis involves assessment of the patient’s risk of VTE, assessment of the risk of adverse effects of thromboprophylaxis (including bleeding) and identification of any contraindications.

Patients at high risk include those undergoing any surgical procedure, especially abdominal, pelvic or orthopaedic surgery. Medical patients at high risk include those with myocardial infarction, malignancy, heart failure, ischaemic stroke and inflammatory bowel disease.[3]

Mechanical options for thromboprophylaxis include encouragement of mobility, graduated compression stockings, intermittent pneumatic compression devices and venous foot pumps. Mechanical prophylactic measures are often combined with pharmacological thromboprophylaxis. The strength of evidence for each of the anticoagulants varies depending on the surgical procedure or medical condition in question; however, unfractionated heparin (UFH) and low molecular weight heparin (LMWH) remain the mainstay of VTE prophylaxis. [3] The NOACs, also referred to as direct oral anticoagulants, notably rivaroxaban, apixaban and dabigatran, have been studied most amongst hip and knee arthroplasty patient groups, where they have been shown to be both efficacious and safe. [59-61] The use of aspirin for the prevention of VTE following orthopaedic surgery remains controversial, despite being recommended by recent guidelines. [62] It is recommended that pharmacological prophylaxis be continued until the patient is fully mobile. In certain circumstances, such as following total hip or knee arthroplasty and hip fracture surgery, extended-duration prophylaxis for up to 35 days post-operatively is recommended. [3,62]

Management

The aim of treatment is to relieve current symptoms, prevent progression of the disease, reduce the potential for sequelae and prevent recurrence. Anticoagulation remains the cornerstone of management of VTE.

Patients with PE and haemodynamic instability (hypotension, persistent bradycardia, pulselessness), so-called ‘massive PE’, may require urgent treatment with thrombolytic therapy. Thrombolysis reduces mortality in haemodynamically unstable patients; however, it is associated with a risk of major bleeding. [63] Surgical thrombectomy and catheter-based interventions represent an alternative to thrombolysis in patients with massive PE where contraindications exist. [64] The use of thrombolytic therapy in patients with evidence of right ventricular dysfunction and myocardial injury without hypotension and haemodynamic instability remains controversial. A recent study revealed that fibrinolysis in this intermediate-risk group reduces rates of haemodynamic compromise while significantly increasing the risk of intracranial and other major bleeding. [65]

For the majority of patients with VTE anticoagulation is the mainstay of treatment. Acute treatment involves UFH, LMWH or fondaparinux. [50]

UFH binds to antithrombin III, increasing its ability to inactivate thrombin, factor Xa and other coagulation factors. [66] UFH is usually given as an intravenous bolus initially, followed by a continuous infusion. UFH therapy requires monitoring of the activated partial thromboplastin time (aPTT) and is associated with a risk of heparin-induced thrombocytopenia. [66] The therapeutic aPTT range and dosing regimen vary between institutions. The use of UFH is usually preferred if there is severe renal impairment, in cases where there may be a requirement to rapidly reverse anticoagulation therapy and in obstructive shock where thrombolysis is being considered. [50]

LMWH is administered subcutaneously in a weight adjusted dosing regimen once or twice daily. [51] When compared with UFH, LMWH has a more predictable anticoagulant response and does not usually require monitoring. [67] In obese patients and those with significant renal dysfunction, LMWH may require dose adjustment or monitoring of factor Xa activity. [67]

Therapy with a vitamin K antagonist, most commonly warfarin, should be commenced at the same time as parenteral anticoagulation. Therapy with the parenteral anticoagulant should be discontinued when the international normalised ratio (INR) has reached at least 2.0 on two consecutive measurements and there has been an overlap of treatment with a parenteral anticoagulant for at least five days. [68] This overlap is required as the use of warfarin alone may be associated with an initial transient prothrombotic state due to warfarin mediated rapid depletion of the natural anticoagulant protein C, whilst depletion of coagulation factors II and X takes several days. [69]
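The switch-over rule described here is essentially a small decision check: two consecutive therapeutic INR readings and at least five days of overlap. A minimal sketch (function and parameter names are hypothetical, for illustration of the rule only, not clinical software):

```python
def can_stop_parenteral(inr_readings: list[float], overlap_days: int) -> bool:
    """Illustrative check of the warfarin switch-over rule:
    the parenteral anticoagulant may be stopped only once the INR
    has been >= 2.0 on two consecutive measurements AND the overlap
    has lasted at least five days. Hypothetical helper only.
    """
    # Scan adjacent pairs of readings for two consecutive values >= 2.0.
    consecutive_therapeutic = any(
        earlier >= 2.0 and later >= 2.0
        for earlier, later in zip(inr_readings, inr_readings[1:])
    )
    return consecutive_therapeutic and overlap_days >= 5

# INR therapeutic on the last two readings, six days of overlap: stop.
assert can_stop_parenteral([1.4, 1.8, 2.1, 2.3], overlap_days=6) is True
# A single therapeutic reading is not enough.
assert can_stop_parenteral([1.4, 2.1, 1.9], overlap_days=6) is False
# Therapeutic INR but insufficient overlap: continue the parenteral agent.
assert can_stop_parenteral([2.1, 2.3], overlap_days=4) is False
```

Both conditions must hold simultaneously, reflecting the transient prothrombotic state that warfarin alone can produce before factors II and X are depleted.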

The NOACs represent an attractive alternative to traditional anticoagulants for the prevention and management of VTE.

Rivaroxaban is a direct oral anticoagulant that directly inhibits factor Xa.[66] Rivaroxaban has been shown to be as efficacious as standard therapy (parenteral anticoagulation and warfarin) for the treatment of proximal DVT and symptomatic PE. [70,71] When compared with conventional  therapy,  rivaroxaban  may  be  associated  with  lower risks of major bleeding. [70] Rivaroxaban represents an attractive alternative to the standard therapy mentioned above as it does not require parenteral administration, is given as a fixed daily dose, does not require laboratory monitoring and has few drug-drug and food interactions. [70,71]

Dabigatran etexilate is an orally administered direct thrombin inhibitor. Dabigatran is non-inferior to warfarin for the treatment of PE and proximal DVT after a period of parenteral anticoagulation. [72] The safety profile is similar; however, dabigatran requires no laboratory monitoring. [72]

The lack of a requirement for monitoring is a significant benefit of the NOACs over warfarin. The role of monitoring the anticoagulant activity of these agents, and the clinical relevance of such monitoring, is a subject of ongoing research and debate. [73] Anti-factor Xa based assays may be used to determine the concentration of the factor Xa inhibitors in specific clinical circumstances. [74,75] The relative intensity of anticoagulation due to dabigatran can be estimated by the aPTT, and that due to rivaroxaban by the PT or aPTT. [76] There is, however, significant variation in the results based on the reagent the laboratory uses. Routine monitoring of the NOACs is not currently recommended.

The major studies evaluating the NOACs carried exclusion criteria that included those at high risk of bleeding, those with a creatinine clearance of <30 mL/min, pregnancy and those with liver disease [70-72], thus caution must be applied with their use in these patient groups. The NOACs are renally eliminated to variable degrees. Warfarin or dose-adjusted LMWH is preferred for those with reduced renal function (creatinine clearance <30 mL/min) who require long-term anticoagulation.

Concern exists regarding a lack of a specific reversal agent for the NOACs. [77,78] Consultation with haematology is recommended if significant bleeding  occurs  during  therapy  with  a  NOAC.  Evidence for the use of agents such as tranexamic acid, recombinant factor VIIa and prothrombin complex concentrate is very limited. [77,78] Haemodialysis may significantly reduce plasma levels of dabigatran, as the drug displays relatively low protein binding. [77,78]

Inferior vena cava (IVC) filters may be placed in patients with VTE and a contraindication to anticoagulation. IVC filters prevent PE however they may increase the risk of DVT and vena cava thrombosis. The use of IVC filters remains controversial due to a lack of evidence. [79]

Recurrence

The risk of recurrence differs significantly depending on whether the initial VTE event was unprovoked or associated with a transient risk factor. [9] Patients with idiopathic VTE have a significantly higher risk of recurrence than those with transient risk factors. Isolated calf DVT carries a lower risk of recurrence than proximal DVT or PE. The risk of recurrence after the cessation of anticoagulant therapy is as high as 10% per year in some patient groups. [9]

Duration of anticoagulation therapy should be based on patient preference and a risk-benefit analysis of the risk of recurrence versus the risk of complications from therapy. Generally, anticoagulation should be continued for a minimum of three months, and the decision to continue anticoagulation should be re-assessed on a regular basis. Recommendations for duration of anticoagulation therapy are presented in Table 5. The currently published guidelines recommend extended anticoagulation therapy with a vitamin K antagonist such as warfarin. Evaluation of the new direct oral anticoagulants for therapy and prevention of recurrence is ongoing. Recent evidence supports the use of dabigatran and rivaroxaban for the secondary prevention of venous thromboembolism, with similar efficacy to standard therapy and reduced rates of major bleeding. [70,71,80]

[Table 5 (image in original): recommendations for duration of anticoagulation therapy]

Aspirin has been shown to be effective in reducing the recurrence of VTE in patients with previous unprovoked VTE. After up to 18 months of therapy, aspirin reduces the rate of VTE recurrence by 40% compared with placebo. [81]

Acknowledgements

I would like to thank Marianne Turner for her help with editing of the manuscript.

Conflict of interest

None declared.

Correspondence

R Pow: richardeamonpow@gmail.com

Conclusion

VTE is a commonly encountered problem and is associated with significant short- and long-term morbidity. A sound understanding of the pathogenesis of VTE guides clinical assessment, diagnosis and management. The prevention and management of VTE continue to evolve with the ongoing evaluation of the NOACs. Anticoagulation remains the mainstay of therapy for VTE, with additional measures including thrombolysis used in select cases. This article has provided medical students with an evidence-based review of the current diagnostic and management strategies for venous thromboembolic disease.

References

[1] Oger E. Incidence of venous thromboembolism: a community-based study in Western France. Thromb Haemost. 2000;83(5):657-60.

[2] Cushman M, Tsai AW, White RH, Heckbert SR, Rosamond WD, Enright P, et al. Deep vein thrombosis and pulmonary embolism in two cohorts: the longitudinal investigation of thromboembolism etiology. Am J Med. 2004;117(1):19-25.

[3] National Health and Medical Research Council. Clinical Practice Guideline for the Prevention of Venous Thromboembolism in Patients Admitted to Australian Hospitals. Melbourne: National Health and Medical Research Council; 2009. 157 p.

[4] National Institute of Clinical Studies. Evidence-Practice Gaps Report Volume 1. Melbourne: National Institute of Clinical Studies; 2003. 38 p.

[5] Goldhaber SZ, Visani L, De Rosa M. Acute pulmonary embolism: clinical outcomes in   the   International   Cooperative   Pulmonary   Embolism   Registry   (ICOPER).   Lancet. 1999;353(9162):1386-9.

[6] Huang CM, Lin YC, Lin YJ, Chang SL, Lo LW, Hu YF, et al. Risk stratification and clinical outcomes in patients with acute pulmonary embolism. Clin Biochem. 2011;44(13):1110-5.

[7]  Access  Economics  Pty  Ltd.  The  burden  of  venous  thromboembolism  in  Australia. Canberra: Access Economics Pty Ltd; 2008. 50 p.

[8] Lang IM, Klepetko W. Chronic thromboembolic pulmonary hypertension: an updated review. Curr Opin Cardiol. 2008;23(6):555-9.

[9] Kearon C. Natural history of venous thromboembolism. Circulation. 2003;107(23 Suppl 1):22-30.

[10] Dickinson BC. Venous thrombosis: On the History of Virchow’s Triad. Univ Toronto Med J. 2004;81(3):6.

[11] Heit JA, O’Fallon WM, Petterson TM, Lohse CM, Silverstein MD, Mohr DN, et al. Relative impact of risk factors for deep vein thrombosis and pulmonary embolism: a population-based study. Arch Intern Med. 2002;162(11):1245-8.

[12] Anderson FA, Jr., Spencer FA. Risk factors for venous thromboembolism. Circulation. 2003;107(23 Suppl 1):9-16.

[13] Ho WK, Hankey GJ, Lee CH, Eikelboom JW. Venous thromboembolism: diagnosis and management of deep venous thrombosis. Med J Aust. 2005;182(9):476-81.

[14] Heit JA. Thrombophilia: common questions on laboratory assessment and management. Hematology Am Soc Hematol Educ Program. 2007;1:127-35.

[15] Weingarz L, Schwonberg J, Schindewolf M, Hecking C, Wolf Z, Erbe M, et al. Prevalence of thrombophilia according to age at the first manifestation of venous thromboembolism: results from the MAISTHRO registry. Br J Haematol. 2013;163(5):655-65.

[16] Gibson CS, MacLennan AH, Rudzki Z, Hague WM, Haan EA, Sharpe P, et al. The prevalence of inherited thrombophilias in a Caucasian Australian population. Pathology. 2005;37(2):160-3.

[17] Genetics Education in Medicine Consortium. Genetics in Family Medicine: The Australian Handbook for General Practitioners. Canberra: The Australian Government Agency Biotechnology Australia; 2007. 349 p.

[18] Giannakopoulos B, Krilis SA. The pathogenesis of the antiphospholipid syndrome. N Engl J Med. 2013;368(11):1033-44.

[19] Ho WK, Hankey GJ, Eikelboom JW. Should adult patients be routinely tested for heritable  thrombophilia  after  an  episode  of  venous thromboembolism?  Med  J  Aust. 2011;195(3):139-42.

[20] Middeldorp S. Evidence-based approach to thrombophilia testing. J Thromb Thrombolysis. 2011;31(3):275-81.

[21] Cohn DM, Vansenne F, de Borgie CA, Middeldorp S. Thrombophilia testing for prevention of recurrent venous thromboembolism. Cochrane Database Syst Rev[Internet] 2012 [Cited 2014 Sep 10]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23235639

[22] Middeldorp S, van Hylckama Vlieg A. Does thrombophilia testing help in the clinical management of patients? Br J Haematol. 2008;143(3):321-35.

[23] Markel A. Origin and natural history of deep vein thrombosis of the legs. Semin Vasc Med. 2005;5(1):65-74.

[24] Hoeper MM, Mayer E, Simonneau G, Rubin LJ. Chronic thromboembolic pulmonary hypertension. Circulation. 2006;113(16):2011-20.

[25] Pengo V, Lensing AW, Prins MH, Marchiori A, Davidson BL, Tiozzo F, et al. Incidence of chronic thromboembolic pulmonary hypertension after pulmonary embolism. N Engl J Med. 2004;350(22):2257-64.

[26]  Poli  D,  Grifoni  E,  Antonucci  E,  Arcangeli  C,  Prisco  D,  Abbate  R,  et  al.  Incidence of  recurrent  venous  thromboembolism  and  of  chronic  thromboembolic  pulmonary hypertension  in  patients  after  a  first  episode  of  pulmonary  embolism.  J  Thromb Thrombolysis. 2010;30(3):294-9.

[27] Ashrani AA, Heit JA. Incidence and cost burden of post-thrombotic syndrome. J Thromb Thrombolysis. 2009;28(4):465-76.

[28] Kachroo S, Boyd D, Bookhart BK, LaMori J, Schein JR, Rosenberg DJ, et al. Quality of life and economic costs associated with postthrombotic syndrome. Am J Health Syst Pharm. 2012;69(7):567-72.

[29]  Oudega  R,  Moons  KG,  Hoes  AW.  Limited  value  of  patient history  and  physical examination in diagnosing deep vein thrombosis in primary care. Fam Pract. 2005;22(1):86-91.

[30] Di Nisio M, Wichers IM, Middeldorp S. Treatment for superficial thrombophlebitis of the leg. Cochrane Database Syst Rev [Internet] 2013 [Cited 2014 Sep 10]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23633322

[31]  Bauersachs  RM.  Clinical  presentation  of  deep  vein  thrombosis  and  pulmonary embolism. Best Pract Res Clin Haematol. 2012;25(3):243-51.

[32] Ramzi DW, Leeper KV. DVT and pulmonary embolism: Part I. Diagnosis. Am Fam physician. 2004;69(12):2829-36.

[33] Homans J. Diseases of the veins. N Engl J Med. 1946;235(5):163-7.

[34] Cranley JJ, Canos AJ, Sull WJ. The diagnosis of deep venous thrombosis. Fallibility of clinical symptoms and signs. Arch Surg. 1976;111(1):34-6.

[35]  Wells  PS,  Anderson  DR,  Bormanis  J,  Guy  F,  Mitchell  M,  Gray  L,  et  al.  Value  of assessment of pretest probability of deep-vein thrombosis in clinical management. Lancet. 1997;350(9094):1795-8.

[36] Wells PS, Anderson DR, Rodger M, Forgie M, Kearon C, Dreyer J, et al. Evaluation of  D-dimer  in  the  diagnosis  of  suspected  deep-vein thrombosis.  N  Engl  J  Med. 2003;349(13):1227-35.

[37] Torres-Macho J, Mancebo-Plaza AB, Crespo-Gimenez A, Sanz de Barros MR, Bibiano-Guillen C, Fallos-Marti R, et al. Clinical features of patients inappropriately undiagnosed of pulmonary embolism. Am J Emerg Med. 2013;31(12):1646-50.

[38] Grifoni S, Olivotto I, Cecchini P, Pieralli F, Camaiti A, Santoro G, et al. Short-term clinical outcome of patients with acute pulmonary embolism, normal blood pressure, and echocardiographic right ventricular dysfunction. Circulation. 2000;101(24):2817-22.

[39] Lucassen W, Geersing GJ, Erkens PM, Reitsma JB, Moons KG, Buller H, et al. Clinical decision  rules  for  excluding  pulmonary  embolism:  a meta-analysis.  Ann  Intern  Med. 2011;155(7):448-60.

[40] Ullman E, Brady WJ, Perron AD, Chan T, Mattu A. Electrocardiographic manifestations of pulmonary embolism. Am J Emerg Med. 2001;19(6):514-9.

[41] Douma RA, Tan M, Schutgens RE, Bates SM, Perrier A, Legnani C, et al. Using an age-dependent D-dimer cut-off value increases the number of older patients in whom deep vein thrombosis can be safely excluded. Haematologica. 2012;97(10):1507-13.

[42] Schouten HJ, Geersing GJ, Koek HL, Zuithoff NP, Janssen KJ, Douma RA, et al. Diagnostic accuracy of conventional or age adjusted D-dimer cut-off values in older patients with suspected venous thromboembolism: systematic review and meta-analysis. BMJ. 2013;346(2492):1-13.

[43]  Kline  JA,  Mitchell  AM,  Kabrhel  C,  Richman  PB,  Courtney  DM.  Clinical  criteria  to prevent unnecessary diagnostic testing in emergency department patients with suspected pulmonary embolism. J Thromb Haemost. 2004;2(8):1247-55.

[44] Singh B, Mommer SK, Erwin PJ, Mascarenhas SS, Parsaik AK. Pulmonary embolism rule-out criteria (PERC) in pulmonary embolism–revisited: a systematic review and meta-analysis. Emerg Med J. 2013;30(9):701-6.

[45] den Exter PL, Klok FA, Huisman MV. Diagnosis of pulmonary embolism: Advances and pitfalls. Best Pract Res Clin Haematol. 2012;25(3):295-302.

[46] Stein PD, Fowler SE, Goodman LR, Gottschalk A, Hales CA, Hull RD, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006;354(22):2317-27.

[47] Mitchell AM, Jones AE, Tumlin JA, Kline JA. Prospective study of the incidence of contrast-induced  nephropathy  among  patients  evaluated  for  pulmonary  embolism  by contrast-enhanced computed tomography. Acad Emerg Med. 2012;19(6):618-25.

[48] Huisman MV, Klok FA. How I diagnose acute pulmonary embolism. Blood. 2013;121(22):4443-8.

[49] Anderson DR, Kahn SR, Rodger MA, Kovacs MJ, Morris T, Hirsch A, et al. Computed tomographic pulmonary angiography vs ventilation-perfusion lung scanning in patients with suspected pulmonary embolism: a randomized controlled trial. J Am Med Assoc. 2007;298(23):2743-53.

[50] Takach Lapner S, Kearon C. Diagnosis and management of pulmonary embolism. BMJ. 2013 Feb 20;346(757):1-9.

[51] Agnelli G, Becattini C. Acute pulmonary embolism. N Engl J Med. 2010;363(3):266-74.

[52]  Konstantinides  S.  Clinical  practice.  Acute  pulmonary  embolism.  N  Engl  J  Med. 2008;359(26):2804-13.

[53] Duran-Mendicuti A, Sodickson A. Imaging evaluation of the pregnant patient with suspected pulmonary embolism. Int J Obstet Anesth. 2011;20(1):51-9.

[54]  Arya  R.  How  I  manage  venous  thromboembolism  in  pregnancy. Br  J  Haematol. 2011;153(6):698-708.

[55] National Health and Medical Research Council. Prevention of Venous Thromboembolism in Patients Admitted to Australian Hospitals: Guideline Summary. Melbourne: National Health and Medical Research Council; 2009. 2 p.

[56] Australian Orthopaedic Association. Guidelines for VTE Prophylaxis for Hip and Knee Arthroplasty. Sydney: Australian Orthopaedic Association; 2010. 8 p.

[57] The Australia & New Zealand Working Party on the Management and Prevention of Venous Thromboembolism. Prevention of Venous Thromboembolism: Best practice guidelines for Australia & New Zealand, 4th edition. South Australia: Health Education and Management Innovations; 2007. 11 p.

[58] Geerts WH, Bergqvist D, Pineo GF, Heit JA, Samama CM, Lassen MR, et al. Prevention of venous thromboembolism: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines (8th Edition). Chest. 2008;133(6 Suppl):381-453.

[59] Eriksson BI, Borris LC, Friedman RJ, Haas S, Huisman MV, Kakkar AK, et al. Rivaroxaban versus   enoxaparin   for   thromboprophylaxis after  hip   arthroplasty.   N   Engl   J   Med. 2008;358(26):2765-75.

[60]  Lassen  MR,  Ageno  W,  Borris  LC,  Lieberman  JR,  Rosencher  N, Bandel  TJ,  et  al. Rivaroxaban versus enoxaparin for thromboprophylaxis after total knee arthroplasty. N Engl J Med. 2008;358(26):2776-86.

[61]  Eriksson  BI,  Dahl  OE,  Rosencher  N,  Kurth  AA,  van  Dijk  CN, Frostick  SP,  et  al. Dabigatran  etexilate  versus  enoxaparin  for prevention  of  venous  thromboembolism after  total  hip  replacement:  a  randomised,  double-blind,  non-inferiority  trial.  Lancet. 2007;370(9591):949-56.

[62] Falck-Ytter Y, Francis CW, Johanson NA, Curley C, Dahl OE, Schulman S, et al. Prevention of  VTE  in  orthopedic  surgery  patients: Antithrombotic  Therapy  and  Prevention  of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2 Suppl):278-325.

[63] Wan S, Quinlan DJ, Agnelli G, Eikelboom JW. Thrombolysis compared with heparin for the initial treatment of pulmonary embolism: a meta-analysis of the randomized controlled trials. Circulation. 2004;110(6):744-9.

[64] Jaff MR, McMurtry MS, Archer SL, Cushman M, Goldenberg N, Goldhaber SZ, et al. Management  of  massive and submassive pulmonary embolism, iliofemoral deep vein thrombosis, and chronic thromboembolic pulmonary hypertension: a scientific statement from the American Heart Association. Circulation. 2011;123(16):1788-830.

[65]  Meyer  G,  Vicaut  E,  Danays  T,  Agnelli  G,  Becattini C,  Beyer-Westendorf  J,  et  al. Fibrinolysis  for  patients  with  intermediate-risk  pulmonary  embolism.  N  Engl  J  Med. 2014;370(15):1402-11.

[66] Mavrakanas T, Bounameaux H. The potential role of new oral anticoagulants in the prevention and treatment of thromboembolism. Pharmacol. Ther. 2011 Dec 24;130(1):46-58.

[67] Bounameaux H, de Moerloose P. Is laboratory monitoring of low-molecular-weight heparin therapy necessary? No. J Thromb Haemost. 2004 Apr;2(4):551-4.

[68]  Goldhaber  SZ,  Bounameaux  H.  Pulmonary  embolism  and  deep vein  thrombosis. Lancet. 2012 Apr 10;379(9828):1835-46.

[69] Choueiri T, Deitcher SR. Why shouldn’t we use warfarin alone to treat acute venous thrombosis? Cleve Clin J Med. 2002;69(7):546-8.

[70]  Buller  HR,  Prins  MH,  Lensin  AW,  Decousus  H,  Jacobson  BF, Minar  E,  et al.  Oral rivaroxaban  for  the  treatment  of  symptomatic  pulmonary  embolism.  N  Engl  J  Med. 2012;366(14):1287-97.

[71] Landman GW, Gans RO. Oral rivaroxaban for symptomatic venous thromboembolism. N Engl J Med. 2011;364(12):1178.

[72] Schulman S, Kearon C, Kakkar AK, Mismetti P, Schellong S, Eriksson H, et al. Dabigatran versus warfarin in the treatment of acute venous thromboembolism. N Engl J Med. 2009;361(24):2342-52.

[73] Schmitz EM, Boonen K, van den Heuvel DJ, van Dongen JL, Schellings MW, Emmen JM, et al. Determination of dabigatran, rivaroxaban and apixaban by UPLC-MS/MS and coagulation assays for therapy monitoring of novel direct oral anticoagulants. J Thromb Haemost. Forthcoming 2014 Aug 20.

[74] Lippi G, Ardissino D, Quintavalla R, Cervellin G. Urgent monitoring of direct oral anticoagulants in patients with atrial fibrillation: a tentative approach based on routine laboratory tests. J Thromb Thrombolysis. 2014;38(2):269-74.

[75] Samama MM, Contant G, Spiro TE, Perzborn E, Le Flem L, Guinet C, et al. Laboratory assessment of rivaroxaban: a review. Thromb J. 2013;11(1):11.

[76] Baglin T, Keeling D, Kitchen S. Effects on routine coagulation screens and assessment of anticoagulant intensity in patients taking oral dabigatran or rivaroxaban: guidance from the British Committee for Standards in Haematology. Br J Haematol. 2012;159(4):427-9.

[77] Jackson LR, 2nd, Becker RC. Novel oral anticoagulants: pharmacology, coagulation measures, and considerations for reversal. J Thromb Thrombolysis. 2014;37(3):380-91.

[78] Wood P. New oral anticoagulants: an emergency department overview. Emerg Med Australas. 2013;25(6):503-14.

[79] Rajasekhar A, Streiff MB. Vena cava filters for management of venous thromboembolism: a clinical review. Blood Rev. 2013;27(5):225-41.

[80] Schulman S, Kearon C, Kakkar AK, Schellong S, Eriksson H, Baanstra D, et al. Extended use  of  dabigatran,  warfarin,  or  placebo  in  venous  thromboembolism.  N  Engl  J  Med. 2013;368(8):709-18.

[81] Becattini C, Agnelli G, Schenone A, Eichinger S, Bucherini E, Silingardi M, et al. Aspirin for preventing the recurrence of venous thromboembolism. N Engl J Med. 2012;366(21):1959-67.

[82] Kearon C, Akl EA, Comerota AJ, Prandoni P, Bounameaux H, Goldhaber SZ, et al. Antithrombotic therapy for VTE disease: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2 Suppl):419s-94s.

 

Review Articles

Glass micro-particulate contamination of intravenous drugs – should we be using filter needles?

The universal use of filter needles in the aspiration of all medications from glass ampoules for intravenous administration has been recommended due to safety concerns surrounding the possible inadvertent injection of glass micro-particulate created by snapping open ampoules. Implementing this would involve significant costs. This article aims to review the relevant literature to evaluate whether sufficient evidence of patient harm due to glass micro-particulate contamination exists to justify the universal introduction of filter needles for the aspiration of medications from glass ampoules for intravenous administration. Methods: A search of OVID Medline, TRIP, Embase and Google Scholar databases was conducted using a wide variety of terms, with no limitation on publication date. Papers addressing the research question were included in the review. Results: Contamination of drugs by glass micro-particulates does occur with aspiration from glass ampoules. Pathological changes such as granuloma formation and embolic or thrombotic events may occur if these are injected intravenously. There is, however, a lack of evidence of consequent clinical harm in humans. Conclusion: A recommendation for the universal introduction of filter needles for aspiration of drugs from glass ampoules for intravenous administration cannot be justified, given the paucity of available evidence showing harm and the significant cost of this recommendation. Concerns regarding the lack of studies demonstrating that particle contamination poses no threat remain valid from a perspective of total patient safety.


Introduction

Glass ampoules are common containers for many drugs. The ampoules are usually broken open by hand and the drugs are then drawn up for administration. Over the last 60 years, many questions have been raised over the potential patient safety issues related to glass micro-particulate contamination of drugs from glass ampoules, particularly for intravenous administration. [1-6] There have been few conclusive answers; however, there are suggestions that it may lead to complications including pulmonary thrombi, micro-emboli, and end-organ granuloma formation. [6]

It has been recommended that filter needles should be used in the aspiration of all medications from glass ampoules. [7] This is not yet standard practice, but follows recommendations made for over forty years that practice should err on the side of caution until further studies can demonstrate that any type of particle contamination poses no threat. [5,6,8,9] This must be balanced, however, against the significant cost that the universal use of filter needles would incur. The cost of a 5 μm 18 g filter needle ($0.315) is approximately ten times that of a standard 18 g drawing-up needle ($0.029). [10]
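
The scale of this cost gap can be made concrete with a simple sketch. The unit prices are those quoted above; the annual ampoule volume is a hypothetical figure chosen purely for illustration:

```python
# Unit prices quoted in the article (AUD)
filter_needle = 0.315    # 5 micron, 18 g filter needle
standard_needle = 0.029  # standard 18 g drawing-up needle

ratio = filter_needle / standard_needle              # ~10.9x more expensive
extra_per_ampoule = filter_needle - standard_needle  # $0.286 extra per draw

# Hypothetical institution drawing up 500,000 ampoules per year
# (assumed figure, not from the article)
annual_extra_cost = extra_per_ampoule * 500_000
print(f"{ratio:.1f}x, extra ${annual_extra_cost:,.0f}/year")  # prints "10.9x, extra $143,000/year"
```

Even at modest per-needle prices, universal adoption multiplies into a substantial recurring expense, which is why the article weighs the evidence of harm against cost before endorsing the recommendation.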

For the universal implementation of filter needles to be justified in the light of this expense, three important questions should be answered. Firstly, does micro-particulate contamination occur when drugs are aspirated from glass ampoules? Secondly, if so, is this particulate contamination of clinical significance and a threat to patient safety? Thirdly, are filter needles effective in preventing contamination of medications by glass particles? This article reviews the relevant literature and, through answering these questions, attempts to evaluate whether sufficient evidence exists to warrant the universal introduction of filter needles for the aspiration of medications from glass ampoules for intravenous administration.

Methods

A search of OVID Medline, TRIP, Embase and Google Scholar databases was conducted. In order to capture all possible evidence and relevant background history on this topic, there was no restriction on date of publication and a wide range of search terms was used. Terms used included (but were not limited to): ‘glass’, ‘ampoule’, ‘drug contamination’, ‘intravenous’, ‘filter needles’, ‘filter straws’, ‘filtration’ and ‘needles’. This search was supplemented with additional papers sourced from reference lists to ensure completeness. Both human and animal studies were included. Papers addressing the research question were included in the review as decided by the author, including papers addressing other micro-particulate contamination. Explicitly defined criteria were not used in the selection of papers.

Definitions

A filter needle or filter straw is a needle, attached to a syringe in place of a drawing-up needle, designed to filter particulates out of a contaminated fluid. Generally, these contain a 5 micron filter.

Glass ampoules are widely used in the production of parenteral medications. Glass is an attractive material to industry as it can be vacuum-sealed and sterilised, is easy to clean, chemically inert, difficult to tamper with, and possibly recyclable. [2,4] To access the drug, the top of the ampoule is snapped off by applying manual force at a pre-weakened point. [2]

Results

Does particulate contamination occur?

There is clear evidence that the action of snapping off the top of an ampoule can lead to contamination of ampoule contents, primarily with glass micro-particles. [11-15] Glass micro-particles are primarily composed of inorganic compounds (SiO2, Na2CO3, CaCO3) and metallic oxides. [2] They have a sharp microscopic appearance. [16] Particle size ranges from 8 to 172 microns. [15] The amount of particulate matter varies slightly amongst different manufacturers, and more particles are found in transparent metal-etched ampoules than in coloured chemically-etched ampoules. [17]

 

Is glass particulate contamination of clinical significance?

Brewer and Dunning (1947) demonstrated that massive micro-particulate infusions in rabbits can cause foreign body reactions which result in pulmonary granulomas, pulmonary silicosis, and nodular fibrosis of the liver, spleen and lymph nodes. [1] These were reported as chronic rather than acute changes. Notably, a dose equivalent to a total human dose of 14 g of glass over a month, given in daily doses, was required to produce these effects. Animals receiving small doses, equivalent to those that a human might receive in normal clinical practice, exhibited no pathological changes and no glass was found in the lungs. No animals died during the investigation period of up to a year, after which they were euthanised for pathological examination. The authors concluded that “occasional particle contamination of ampoule preparations produces no significant pathology in animals”. [1]

Garvan and Gunner (1964) conducted a similar small experiment, infusing saline from glass ampoules into an ear vein of three rabbits. [3] At autopsy, the animals showed formation of capillary and arterial granulomas, all containing cellulose fibres. The authors estimated that every half-litre of IV fluid injected into a rabbit caused the formation of 5000 granulomas scattered through the lungs. They also found similar lesions in the lungs of deceased patients who had received large volumes of IV fluid before death. In this study there was no specific reference to glass as the causative particle of the granulomas, nor was it associated with any morbidity or mortality apart from the histological changes.

Two case reports have been published recently regarding glass contamination. In the first, a patient was found on arthroscopy to have glass particles within the right knee joint, possibly due to recent steroid or local anaesthetic injection into the joint. [18] In the second, a single glass particle lodged within a cannula caused leakage out of the injection port of the cannula during an infusion. [19]

Contamination with other micro-particulates

Contamination of IV fluids by other materials such as rubber or cellulose has also been shown to occur, and these particulates may have similar effects to glass. A review of relevant work concluded, however, that although pathological changes had been associated with these various contaminants in both human and animal studies, it was not possible to correlate particular clinical manifestations with a specific contaminant, nor was there any association with mortality. [20]

Similarly, in an autopsy study, Puntis et al. (1992) found pulmonary granulomata in two of 41 parenterally fed infants who had died of unstated causes following stays in a neonatal intensive care unit, with a median duration of 14 days of parenteral feeding. These were compared to 32 control infants who died of Sudden Infant Death Syndrome (SIDS) within the same time period and who had not received any IV treatment. [21] No granulomata or foreign bodies were found in the controls. Of the two cases, some pulmonary granulomas contained cotton fragments or glass, but the majority exhibited no obvious foreign body. The authors point out that the parenteral nutrition solutions themselves contain many micro-particles that may also have pathological effects. Further to this, a recent study found that silicon particles (common contaminants in solutions stored in glass ampoules) caused suppression of macrophage and endothelial cell cytokine secretion in vitro, suggesting that micro-particle infusion could have immune-modulating effects in vivo. [22]

A recent Cochrane Review of the use of in-line filters for preventing morbidity and mortality in neonates attributable to particulate matter and bacterial contamination concluded that there is insufficient evidence to recommend the use of these devices. [23] Falchuck et al. found that in-line filtration significantly reduced the incidence of infusion-related phlebitis; however, a recent meta-analysis of trials investigating the benefit of in-line filters was inconclusive. [24,25]

There is further inconclusive evidence that epithelioid granulomas, containing macrophages and giant cells, can occur at the entry points of silicone-coated needles used for acupuncture (silicone being a polymer containing the element silicon), but these granulomas can also occur following venipuncture or at skin biopsy sites. [26]

Are filter needles effective in preventing contamination of medications by glass particles?

Sabon et al. (1989) found that control ampoules contained an average of 100.6 (SE ± 16.3) particles, ranging in size from 10 to 1000 μm. [17] Aspiration through an 18 g needle reduced particulate contamination to a mean of 65.6 (SE ± 18.7) particles with a maximum size of 400 μm, whereas aspiration through a 19 g 5 μm filter needle reduced the number of particles to 1.3 (SE ± 0.3), with a decrease in average particle size. More recently, Zabir et al. (2008) found that none of 120 ampoules aspirated using a 5 μm filter needle yielded fluid samples contaminated with glass, compared with 9.2% of 120 ampoules aspirated using an unfiltered 18 g needle. [27] The use of smaller gauge non-filter needles has also been found to reduce contamination when compared to large bore needles. [5,27]
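
Taken at face value, the Zabir et al. figures imply a simple "number needed to filter". This is an illustrative back-of-envelope calculation, not one reported by the study:

```python
import math

# Contamination rates reported by Zabir et al. (2008)
rate_unfiltered = 0.092  # 9.2% of 120 ampoules via unfiltered 18 g needle
rate_filtered = 0.0      # 0% of 120 ampoules via 5 micron filter needle

# Absolute risk reduction, and how many ampoules must be drawn through
# a filter needle to prevent one glass-contaminated aspirate
absolute_risk_reduction = rate_unfiltered - rate_filtered
needles_per_contamination_prevented = math.ceil(1 / absolute_risk_reduction)
print(needles_per_contamination_prevented)  # prints 11
```

That is, on these figures roughly one glass-contaminated aspirate would be avoided for every eleven filter needles used; whether that is worthwhile depends on the clinical significance of the contamination, which the rest of the article examines.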

In contrast, Carbone-Traber et al. (1986) found no difference between unfiltered and filter needles, or between different needle bore sizes. Using 3 mm tubing as a control, the contents of ten ampoules were aspirated for each group. The control group was contaminated with a mean of 12 (SD ± 5) glass particles, compared to 13 (SD ± 6) and 13 (SD ± 7) glass particles in the aspirates of the unfiltered 18 g and 5 μm filter needles respectively. [28] The authors suggested that the force of aspiration may cause glass particles to penetrate the filter.

Discussion

The clinical significance of the effects of glass particulates on the human body remains unclear. A number of historical investigations and case reports have been published; however, there are no recent systematic reviews or prospective studies relating directly to glass particulates. Perhaps not surprisingly, there are no relevant controlled human studies, and much of the data that forms the basis for the evidence of harm comes from animal studies. It is worth noting that while the findings of Brewer and Dunning are often cited as evidence of the harm caused by glass, their conclusion that glass causes no significant pathology in animals is often ignored. [1]

The lack of studies investigating the effects of glass particulate contamination is due to many factors, including the ethical difficulties associated with infusing contaminated fluids into human subjects, cost, and a lack of interest among pathologists. [29] The scarcity of recent, high-quality evidence is the significant limiting factor of this review.

In this light, recommendations have been made for over forty years that practice should err on the side of caution until further studies can demonstrate that any type of particle contamination poses no threat. [5,6,8,9] This is a valid perspective with a view to ensuring total patient safety.

In evaluating the introduction of any intervention, however, both the costs and consequences must be considered. With the current evidence, the efficacy or effectiveness of the universal introduction of filter needles cannot be evaluated, nor can cost-benefit be appraised. It is clear, however, that the large-scale introduction of filter needle use for all drugs aspirated from glass ampoules destined for intravascular injection would incur a significant cost.

Filter needle use in current practice

Injection of contaminants may occur via various pathways, including the intravenous, intramuscular, subcutaneous, intrathecal, epidural, and intraocular routes. There are no data describing the prevalence of filter needle use; perhaps the most accurate appraisal is that they are at least widely available. Anecdotally, their use seems favoured when drawing up drugs from glass ampoules prior to intrathecal, epidural and intraocular administration, likely due to fear of significant consequences of microbiological contamination of these sites. [30]

Alternatives to filter needles

Several alternative solutions have been considered to reduce glass contamination. A machine that cuts the ampoules and aspirates the contents using a vacuum produced less glass particulate contamination than opening by hand; however, this is impractical for everyday use. [16] Prefilled syringes showed far less contamination than aspirating glass ampoule contents into syringes; however, this is a very expensive option. [31,32] A commercial ampoule opener showed no difference in particulate contamination compared to hand-opened ampoules. [29] While no recommendations have been made, the use of smaller gauge needles may reduce contamination, as discussed above.

Conclusion

In conclusion, studies have shown evidence of glass particle contamination of injectable drugs drawn from glass ampoules, and have generally demonstrated that the use of filter needles would reduce patient exposure to these particulates. There is, however, a lack of definitive evidence of significant harm from the injection of these glass particle contaminants. Drugs administered intravenously containing glass fragments may potentially cause granuloma formation and embolic, thrombotic and other vascular events; however, this is not supported by any recent literature or conclusive studies. The paucity of evidence further limits economic evaluation of the efficacy, effectiveness and cost-benefit of an intervention that would incur substantial cost. Arguments that practice should err on the side of caution until studies can prove that contamination does not cause harm are valid; however, it is unlikely that such studies will ever be conducted. Considering the limited evidence of harm from glass particulate injection found in well over fifty years of observation, it would appear that the cost of filter needles outweighs the questionable benefits gained from their universal introduction for aspiration of intravenously administered drugs from glass ampoules.

Acknowledgements

I gratefully acknowledge the help of Associate Professor Simon Mitchell, Head of Department, Department of Anaesthesiology, University of Auckland, and Professor Ben Canny, Deputy Dean (MBBS), Faculty of Medicine, Nursing and Health Sciences, Monash University for their critical assessment and review of the manuscript.

Conflict of interest

None declared.

Correspondence

L Fry: lefry3@student.monash.edu

References

[1] Brewer JH, Dunning JH. An in vitro and in vivo study of glass particles in ampules. J Am Pharm Assoc. 1947 Oct;36(10):289-93.

[2] Carraretto AR, Curi EF, de Almeida CE, Abatti RE. Glass ampoules: risks and benefits. Rev Bras Anestesiol. 2011 Jul-Aug;61(4):513-21.

[3] Garvan JM, Gunner BW. The harmful effects of particles in intravenous fluids. Med J Aust. 1964 Jul 4;2:1-6.

[4] Lye ST, Hwang NC. Glass particle contamination: is it here to stay? Anaesthesia. 2003 Jan;58(1):93-4.

[5] Preston ST, Hegadoren K. Glass contamination in parenterally administered medication. J Adv Nurs. 2004 Nov;48(3):266-70.

[6] Stein HG. Glass ampules and filter needles: an example of implementing the sixth ‘r’ in medication administration. Medsurg Nurs. 2006 Oct;15(5):290-4.

[7] Provisional Infusion Standards of Practice. Intravenous Nursing New Zealand Incorporated Society; 2012.

[8] Heiss-Harris GM, Verklan MT. Maximizing patient safety: filter needle use with glass ampules. J Perinat Neonatal Nurs. 2005 Jan-Mar;19(1):74-81.

[9] Davis NM, Turco S. A study of particulate matter in I.V. infusion fluids–phase 2. Am J Hosp Pharm. 1971 Aug;28(8):620-3.

[10] Personal communication with hospital procurement department (Monash Medical Centre). 2013.

[11] Turco S, Davis NM. Glass particles in intravenous injections. N Engl J Med. 1972 Dec 7;287(23):1204-5.

[12] Bohrer D, do Nascimento PC, Binotto R, Pomblum SC. Influence of the glass packing on the contamination of pharmaceutical products by aluminium. Part I: salts, glucose, heparin and albumin. J Trace Elem Med Biol. 2001;15(2-3):95-101.

[13] Glube ML, Littleford J. Paint chips and glass ampoules. Can J Anaesth. 2000 Jun;47(6):601-2.

[14] Kawasaki Y. Study on insoluble microparticulate contamination at ampoule opening. Yakugaku Zasshi. 2009 Sep;129(9):1041-7.

[15] Unahalekhaka A, Nuthong P, Geater A. Glass particles contamination in single dose ampoules: patient safety concern. Am J Infect Control. 2009;37(5):E109-E10.

[16] Lee KR, Chae YJ, Cho SE, Chung SJ. A strategy for reducing particulate contamination on opening glass ampoules and development of evaluation methods for its application. Drug Dev Ind Pharm. 2011 Dec;37(12):1394-401.

[17] Sabon RL Jr, Cheng EY, Stommel KA, Hennen CR. Glass particle contamination: influence of aspiration methods and ampule types. Anesthesiology. 1989 May;70(5):859-62.

[18] Hafez MA, Al-Dars AM. Glass foreign bodies inside the knee joint following intra-articular injection. Am J Case Rep. 2012;13:238-40.

[19] Mathioudakis D. One drip too much: contamination in intravenous injectate [BJA Out Of The Blue]. BJA. 2012 Mar 19.

[20] Thomas WH, Lee YK. Particles in intravenous solutions: a review. N Z Med J. 1974 Aug 28;80(522):170-8.

[21] Puntis JW, Wilkins KM, Ball PA, Rushton DI, Booth IW. Hazards of parenteral treatment: do particles count? Arch Dis Child. 1992 Dec;67(12):1475-7.

[22] Jack T, Brent BE, Boehne M, Muller M, Sewald K, Braun A, et al. Analysis of particulate contaminations of infusion solutions in a pediatric intensive care unit. Intensive Care Med. 2010 Apr;36(4):707-11.

[23] Foster J, Richards R, Showell M. Intravenous in-line filters for preventing morbidity and mortality in neonates. Cochrane Database Syst Rev. 2006(2):CD005248.

[24] Falchuk KH, Peterson L, McNeil BJ. Microparticulate-induced phlebitis. Its prevention by in-line filtration. N Engl J Med. 1985 Jan 10;312(2):78-82.

[25] Niel-Weise BS, Stijnen T, van den Broek PJ. Should in-line filters be used in peripheral intravenous catheters to prevent infusion-related phlebitis? A systematic review of randomized controlled trials. Anesth Analg. 2010 Jun 1;110(6):1624-9.

[26] Yanagihara M, Fujii T, Wakamatu N, Ishizaki H, Takehara T, Nawate K. Silicone granuloma on the entry points of acupuncture, venepuncture and surgical needles. J Cutan Pathol. 2000 Jul;27(6):301-5.

[27] Zabir AF, Choy YC. Glass particle contamination of parenteral preparations of intravenous drugs in anaesthetic practice. Southern African Journal of Anaesthesia and Analgesia. 2008;14(3):17-9.

[28] Carbone-Traber KB, Shanks CA. Glass particle contamination in single-dose ampules. Anesth Analg. 1986 Dec;65(12):1361-3.

[29] Giambrone AJ. Two methods of single-dose ampule opening and their influence upon glass particulate contamination. AANA J. 1991 Jun;59(3):225-8.

[30] Pinnock CA. Particulate contamination of solutions for intrathecal use. Ann R Coll Surg Engl. 1984 Nov;66(6):423.

[31] Eriksen S. Particulate contamination in spinal analgesia. Acta Anaesthesiol Scand. 1988 Oct;32(7):545-8.

[32] Yorioka K, Oie S, Oomaki M, Imamura A, Kamiya A. Particulate and microbial contamination in in-use admixed intravenous infusions. Biol Pharm Bull. 2006 Nov;29(11):2321-3.

Categories
Review Articles

Insights into the mechanism of ‘chemobrain’: deriving a multi-factorial model of pathogenesis

Chemotherapy-related cognitive impairment, commonly called ‘chemobrain’, is a potentially debilitating condition that is slowly being recognised. It encompasses a wide range of cognitive domains and can persist for years after the cessation of chemotherapy. What initially appears to be a straightforward example of neurotoxicity may be a complex interplay between individual susceptibilities and treatment characteristics, the effects of which are perpetuated through mechanisms such as oxidative stress and telomere shortening, mediated by cytokines. This article will attempt to propose a multi-factorial model of pathogenesis which may clarify the relationship between these factors and ultimately improve the lives of cancer patients through informed decisions during the chemotherapy process.


Introduction

Chemotherapy is a mainstay of modern oncological treatment. Chemotherapeutic drugs are often cytotoxic, which allows cancer cells to be destroyed effectively. However, the systemic nature of chemotherapy means that normal cells are damaged too. If cells in the central nervous system are affected, neurological effects manifesting as cognitive deficits may become evident. The link between chemotherapy and cognitive impairment was first reported by Silberfarb and colleagues in the 1980s. [1] In the past 10-20 years, research in this area has developed further owing to fairly high rates of cognitive decline in cancer patients receiving chemotherapy. The cognitive sequelae arising from chemotherapy are commonly referred to as ‘chemobrain’.

It is estimated that up to 70-75% of cancer patients have cognitive deficits during and post-chemotherapy, and up to half of these patients will have impairment lasting months or years after treatment. [2,3] Transient cognitive impairment during chemotherapy is usually tolerated, but persistence of these symptoms can cause significant psychological stress and affect activities of daily living such as work, education, and social interaction.

Understanding chemotherapy-related cognitive impairment can help guide the choice and dosage/duration of chemotherapeutic drugs and ultimately enable us to improve the quality of life of cancer patients undergoing treatment. This article will briefly examine what is known about ‘chemobrain’ and attempt to propose a multi-factorial model of pathogenesis.

What is ‘chemobrain’?

The cognitive domains involved in ‘chemobrain’ are not fully defined, but they are thought to relate to structural and functional changes in the frontal lobes and hippocampus of the brain. [4] Domains affected often include executive functioning, processing speed, attention/concentration, as well as verbal and visuospatial memory. [5] While the degree of cognitive decline can be subtle in high-functioning individuals, whose resultant cognition remains within the normal range, even a small decline in cognitive function can significantly reduce the quality of life (QOL) of a cancer patient. This is particularly true for those who experience persistent cognitive deficits. ‘Chemobrain’ can refer to cognitive dysfunction within any time period, but recent studies assess cognitive dysfunction in the long term (i.e. months or years), as immediate cognitive changes are often transient and resolve spontaneously. [6]

Cognitive outcomes in patients undergoing chemotherapy appear to be affected by treatment characteristics. Van Dam and colleagues compared cognitive function in women receiving high-dose versus standard-dose adjuvant chemotherapy for high-risk breast cancer. The results indicated a dose-related effect whereby a higher proportion of breast cancer patients receiving high-dose chemotherapy had cognitive impairment as compared to patients receiving standard-dose chemotherapy (32% versus 17%). [7] A more recent study by the same team also showed a greater degree of cognitive impairment in breast cancer patients receiving high-dose chemotherapy. [8] However, other studies, such as Mehnert et al. and Scherwath et al., did not find any significant difference in post-chemotherapy cognitive function between high-dose and standard-dose groups. [9,10] These inconsistencies are probably due to methodological differences, such as the choice of chemotherapeutic agent and the timing of cognitive testing.

The duration and type of regimen have also been implicated as possible treatment factors. In early breast cancer patients, the duration of chemotherapy was positively correlated with the degree of cognitive decline. [11] The previously commonplace cyclophosphamide, methotrexate, and 5-fluorouracil (CMF) regimen was also shown to increase the incidence of cognitive dysfunction when compared to published test norms for healthy people. [11] In particular, methotrexate is a known neurotoxic agent which affects cell proliferation and blood vessel density in the hippocampus. [12,13] However, similar regimens substituting methotrexate with etoposide or adriamycin also seem to cause cognitive impairment. [14] This brings into question whether a single agent or a combination of chemotherapeutic agents is largely responsible for the cognitive effects.

Are some individuals more susceptible to ‘chemobrain’?

Individual cognitive characteristics

Since ‘chemobrain’ only occurs in a subset of cancer patients, many researchers have postulated that some individuals may be more susceptible than others. Cognitive decline prior to treatment can contribute indirectly to ‘chemobrain’ by establishing a lower baseline cognitive function. Individual characteristics such as poor education, reduced cognitive stimulation, old age, and stress are possible risk factors for developing ‘chemobrain’. Ahles et al. and Adams-Price et al. showed that older patients with low cognitive reserve have a lower processing speed as compared to younger patients. [15,16] This is not unexpected, as processing speed decreases with age and cognitive disorders in older patients are generally under-diagnosed.

For example, in the United States, about 20% of elderly cancer patients screen positive for cognitive disorders, and dementia is clinically diagnosed in one in two cancer patients above the age of 80. [17,18] Earlier studies that did not show an association between age and cognitive decline often included younger and more highly-educated individuals, and this could have affected the statistical significance of their results. [19]

Most studies have failed to find an association between psychological stress and cognitive dysfunction. This is because many neuropsychological tools measure objective cognitive impairment (i.e. cognitive function) rather than subjective cognitive impairment (i.e. cognitive symptoms). The latter is, however, equally important, and Jenkins et al. showed that psychological distress can cause subjective cognitive impairment with a consequent significant reduction in QOL. [20] It is difficult to attribute specific proportions of cognitive decline to chemotherapy or emotional distress, but any declines due to stress/grief are likely to be secondary to chemotherapy.

Genetic susceptibility

The apolipoprotein E (APOE) and catechol-O-methyltransferase (COMT) genes are involved in neural repair and neurotransmission. [21,22] The human E4 allele of APOE is associated with cognitive disorders such as Alzheimer’s disease, as well as poor prognosis in brain injury and stroke patients. [23,24] One study found that cancer patients with the E4 allele also tend to have poor executive functioning and visuospatial memory irrespective of chemotherapy status. [21]

Interestingly, brain-derived neurotrophic factor (BDNF) has also been implicated as a possible genetic susceptibility factor. BDNF is involved in neural repair and is preferentially expressed in the frontal lobe and hippocampus. [2] A valine (Val)-to-methionine (Met) amino acid substitution at codon 66 of the BDNF gene confers similar cognitive deficits to those found in APOE E4 carriers. [2,25]

Cognitive performance depends on efficient neurotransmission. COMT is required for the metabolism of catecholamines, and this function is especially important in brain regions with low expression of the presynaptic dopamine transporter, such as the prefrontal cortex. [26] Reduced dopamine levels in the prefrontal cortex are associated with a significant decline in executive functioning. COMT-Val allele carriers are rapid metabolisers of dopamine (four times the rate of COMT-Met allele carriers) and, predictably, individuals in the general population with this allele variation were shown to perform poorly in cognitive assessments. [27]

It is therefore plausible that chemotherapy may exacerbate cognitive changes in individuals with these specific variations in APOE, BDNF, or COMT.

The current evidence for hormones and cytokines

The fact that cognitive impairment has been shown in diverse types of cancer (breast, CNS, and lymphoma), and even in the presence of the protective blood-brain barrier (BBB), suggests that direct neurotoxicity of chemotherapeutic agents is only partially responsible for ‘chemobrain’. It is believed that a reduction in hormones such as oestrogen and testosterone is associated with cognitive decline. Studies have shown that post-menopausal women undergoing chemotherapy have poorer cognitive performance compared to pre-menopausal women. Moreover, despite conflicting results in some studies, pre-menopausal breast cancer patients receiving tamoxifen and chemotherapy are often more cognitively impaired (especially in verbal memory and processing speed) than those receiving chemotherapy alone. [28,29] Similar results were also found in males undergoing androgen deprivation therapy (ADT) for prostate cancer. One study found that almost half of the prostate cancer patients undergoing ADT scored 1.5 standard deviations below the mean in more than two neuropsychological (NP) measurements. [30] These observations suggest that oestrogen and testosterone may have neuro-protective roles (such as antioxidant activity or telomere length maintenance) which are vital to cognitive function. [2]

Cytokine imbalance may also be involved in cognitive decline. Cytokines are responsible for maintaining normal neuronal and glial cell function. They also regulate levels of neurotransmitters, such as dopamine and serotonin, which are necessary for cognition. [31] Increased levels of pro-inflammatory cytokines, such as interleukin-1β (IL-1β) and interleukin-6 (IL-6), were found in patients receiving chemotherapy for Hodgkin’s disease and breast cancer respectively. [32,33] In particular, an elevated level of IL-6 was associated with a decline in executive functioning. [34] Longitudinal studies of patients receiving immunotherapies consisting of IL-2 or interferon-alpha also found that these therapies result in cognitive decline across a range of domains, such as processing speed, spatial ability, and executive functioning. [35] Paradoxically, an elevated level of IL-8 was found to correlate with memory enhancement in acute myelogenous leukaemia and myelodysplastic syndrome patients. [34] It is still unclear which cytokines are involved and how they work. Moreover, most studies to date have focused on acute rather than long-term cognitive changes in cancer patients. Possible roles for hormones and cytokines in chemotherapy-induced cognitive changes are elaborated in the ‘multi-factorial model’ section.

Is anaemia related to cognitive function?

In anaemic cancer patients, it is hypothesised that low levels of haemoglobin result in ischaemic damage to the brain. Since many chemotherapeutic agents are cardiotoxic, cerebrovascular changes could also further aggravate the hypoxic condition. [36] Both Vearncombe et al. and Jacobsen et al. showed that a decline in haemoglobin (Hb) levels is a significant predictor of multiple cognitive impairments (such as attention and visual memory) in patients receiving chemotherapy. [37,38] However, Iconomou et al. found no association between Hb levels and cognitive function, although higher Hb levels were significantly correlated with a better QOL. [39] This conflicting result may be attributed to the use of the Mini-Mental State Examination (MMSE), which is too brief and not a very sensitive measure of subtle cognitive impairment. [3] Conversely, Vearncombe et al. used a battery of comprehensive neuropsychological assessments to measure different cognitive domains.

Establishing a multi-factorial model of ‘chemobrain’

Despite all the research so far, there is still no consensus on how ‘chemobrain’ develops. It is well recognised that oxidative stress is one of the commonest causes of DNA damage in neuronal cells, and a number of cognitive disorders, such as Alzheimer’s disease and Parkinson’s disease, are associated with it. [40,41] Chemotherapeutic drugs such as Adriamycin are also known to increase production of reactive oxygen species (ROS) and contribute to reduced anti-oxidant capacity. [42] In addition, chemotherapy has often been associated with telomere shortening in patients with breast cancer and haematological malignancies. [43,44] Telomere shortening can result in adverse cell outcomes such as senescence and apoptosis, and although most CNS cell types are post-mitotic, some, such as glial cells, are actively dividing and are vulnerable to this process. [45] Based on these observations, it is conceivable that oxidative DNA damage and telomere shortening could form the basis of a model of CNS dysfunction to explain ‘chemobrain’.

As mentioned previously, a lower baseline cognitive function due to individual cognitive characteristics and genetic predisposition can precipitate cognitive difficulties when certain treatment conditions are present. These conditions are not fully understood but may relate to the use of neurotoxic agents, prolonged high-dosage regimens, or simply any therapeutic situation which causes hormonal and/or cytokine imbalances. Cytokines are likely to play a crucial intermediary role linking the neurotoxic effects of chemotherapy to oxidative DNA damage in the CNS, as the BBB will limit the entry of most chemotherapeutic agents. [2] Although some animal studies show that a minute dose of these agents can cause cognitive symptoms, such occurrences are typically rare, and drug effects may instead follow a dose-dependent pattern. [46]

In contrast, cytokines can pass through the BBB and mediate their effects freely. Aluise and colleagues proposed a mechanism of pathogenesis whereby Adriamycin causes the release of peripheral tumour necrosis factor-alpha (TNF-α) via cell injury. These cytokines pass through the BBB and induce glial cells to produce more TNF-α, especially in the hippocampus and frontal cortex. Elevated levels of central TNF-α then damage brain cell mitochondria and stimulate production of ROS, resulting in oxidative stress and DNA damage. [47]

By extrapolation, other pro-inflammatory cytokines such as IL-6 may play similar roles, and different chemotherapeutic agents could induce distinct cytokine profiles with varying CNS effects. It is also worth postulating that the same oxidative stress could lead to telomere shortening and subsequently cell apoptosis/senescence. In patients who are post-menopausal or undergoing hormonal therapy, the effects of telomere shortening would predictably be more pronounced. As changes in oestrogen status (such as in the transition from pre-menopause to post-menopause) have been linked to fluctuations in levels of cytokines such as IL-6, and alterations in cortisol rhythm are known to elevate pro-inflammatory cytokine levels, it is possible that interplay between cytokines and hormones could be significant in the pathogenesis of ‘chemobrain’. [48,49]

How, then, does cognitive impairment translate to a diminished QOL? Quantifying cognitive impairment in terms of QOL is difficult due to its objective (assessed by neuropsychological tools) and subjective (assessed by self-reporting) components. In some patients, psychological stress coupled with anaemia (and possibly other side effects of chemotherapy) could reduce the subjective component of QOL to such an extent that the effects of cognitive difficulties are amplified. This could explain the apparent paradox whereby a subtle change in cognitive function often results in a significant impact on a patient’s quality of life.

Lastly, how do we reconcile the delayed effects of ‘chemobrain’? The immediate effects of chemotherapy are well established as a result of acute CNS damage, but the cause of the persistence of cognitive changes has always remained unclear. A study by Han et al. found that systemic administration of the commonly used chemotherapy agent 5-fluorouracil results in progressively worsening delayed demyelination of CNS white matter tracts, with consequent cognitive impairment. Although this is unlikely to be the only chemotherapy-related mechanism of delayed CNS change, it adds to the existing knowledge of prolonged inflammation and vascular damage to the CNS noted in radiotherapy. [50]

A possible multi-factorial model of ‘chemobrain’ is summarised in Figure 1.

Figure 1. Chemotherapy-related cognitive impairment can be affected by a number of possible determinants, such as treatment characteristics, genetic susceptibility, cytokine imbalance, and hormonal factors. Mechanisms such as oxidative stress and telomere shortening have been implicated, and studies suggest a mediating role for cytokines. The primary outcome is commonly called ‘chemobrain’, which encompasses a wide range of cognitive domains including executive functioning, processing speed, attention/concentration, as well as verbal and visuospatial memory. The effects of ‘chemobrain’ are both acute and delayed, with the latter thought to involve demyelination of CNS tracts. While ‘chemobrain’ can be subtle, amplifying factors such as psychological stress and anaemia may have a significant impact on the quality of life of a patient, in terms of reduced work, education, and social interaction opportunities.

Discussion and conclusion

While good progress has been made in understanding ‘chemobrain’, further research is required for clinical interventions to be effective. A multi-pronged treatment approach is widely viewed as necessary to manage this condition, given the complexity of the phenomenon. Pharmacological approaches proposed by researchers revolve around reducing oxidative DNA damage and improving neurotransmission. Examples of drugs considered include antioxidants such as zinc sulfate and N-acetyl cysteine, as well as modulators of the catecholaminergic system such as methylphenidate and modafinil. [3] Furthermore, cognitive rehabilitation has shown promise in restoring an acceptable baseline level of cognition. [6] However, these interventions remain largely speculative, and certain mechanistic questions still need to be addressed.

Firstly, it is important to identify further risk factors which could help us predict the cognitive effects of chemotherapy more precisely. This may involve extending our study beyond purely neurological-related genes such as APOE and COMT. Ahles and Saykin have suggested that genes involved in regulating drug transport across the BBB could be involved in ‘chemobrain’. [2] P-glycoprotein, encoded by the multi-drug resistance 1 (MDR1) gene, is expressed by endothelial cells in the BBB and protects neuronal cells by promoting efflux of drug metabolites. A C3435T polymorphism in exon 26 of the MDR1 gene is associated with reduced efflux capacity of P-glycoprotein and could precipitate a build-up of high concentrations of toxic chemotherapy agents. [51] Positron emission tomography (PET) studies allow monitoring of these concentration changes and may help us understand which drug transporters are involved and how drug doses can affect cognitive function. [52] Evidence of direct chemotherapy neurotoxicity may also be further pinpointed through neuroimaging studies which compare changes in brain integrity on MRI in women treated with chemotherapy against cancer patients who did not receive chemotherapy. An example is the study by Deprez et al., which assessed microstructural changes of cerebral white matter in non-CNS cancer patients. [53]

Secondly, methodological differences between studies pose a serious limitation, which precludes strong conclusions from being drawn. Some studies utilise brief assessments, such as the MMSE, which are poor at detecting subtle cognitive changes. There needs to be a battery of NP assessments which is comprehensive yet practical enough to be used in clinical trials (refer to Vardy et al.). [54] In addition, many studies exclude patients with pre-existing conditions (such as neurological disorders or learning disabilities) for fear of aggravating post-chemotherapy cognitive impairment. [19] This means that high-risk patients are left out of the analysis and, consequently, the actual proportion of patients experiencing ‘chemobrain’ might be underestimated. It is also essential for studies to establish the baseline cognitive level prior to chemotherapy, as those which recruit individuals regardless of cognitive status tend to yield conflicting results. [3] Moreover, studies should endeavour to compare cognitive impairment in the short term versus the long term in order to ascertain that cognitive difficulties are persistent and not transient in nature.

The practical implications of understanding ‘chemobrain’ are foreseeable. Chemotherapy regimens can be individualised to fit the physical and psychological constitution of the patient. This helps to improve compliance rates and reduce drop-outs due to adverse treatment-related effects. In addition, the existence of ‘chemobrain’ may favour the diversification of treatment modalities instead of focusing on chemotherapy alone. For example, immunotherapy can be trialled as an adjuvant to chemotherapy with the aim of reducing the latter’s side effects and potentiating the overall therapeutic gain, as in the case of indoximod (an IDO inhibitor) and chemotherapy in metastatic breast cancer.

In conclusion, ‘chemobrain’ is a phenomenon which needs to be studied in depth. Current observations favour a framework whereby individuals experience cognitive difficulties due to a combination of inherent vulnerabilities and chemotherapy-related side effects. There is also increasing recognition that cytokines might play a crucial supporting role in pathogenesis. Emphasis should be placed on identifying further chemotherapy-related risk factors, as well as improving the sensitivity of methodological approaches with the aim of improving the design of chemotherapy regimens to provide a better quality of life.

Acknowledgements

None.

Conflict of interest

None declared.

Correspondence

K Ho: koho2292@uni.sydney.edu.au

References

[1] Silberfarb P. Chemotherapy and cognitive defects in cancer patients. Ann Rev Med 1983;34:35-46.

[2] Ahles TA, Saykin AJ. Candidate mechanisms for chemotherapy-induced cognitive changes. Nat Rev Cancer 2007;7(3):192-201.

[3] Fardell JE, Vardy J, Johnston IN, Winocur G. Chemotherapy and cognitive impairment: treatment options. Clin Pharmacol Ther 2011;90(3):366-76.

[4] Janelsins MC, Kohli S, Mohile SG, Usuki K, Ahles TA, Morrow GR. An update on cancer- and chemotherapy-related cognitive dysfunction: current status. Semin Oncol 2011;38(3):431-8.

[5] Vardy J, Tannock I. Cognitive function after chemotherapy in adults with solid tumours. Crit Rev Oncol Hematol 2007;63:183-202.

[6] Poppelreuter M, Weis J, Bartsch HH. Effects of specific neuropsychological training programs  for  breast  cancer  patients after adjuvant  chemotherapy.  J  Psychosoc  Oncol 2009;27(2):274-96.

[7] Van Dam FS, Schagen SB, Muller MJ, Boogerd W, vd Wall E, Droogleever Fortuyn ME, et al. Impairment of cognitive function in women receiving adjuvant treatment for high-risk  breast  cancer:  high  dose  versus  standard-dose  chemotherapy.  J  Natl  Cancer  Inst 1998;90(3):210-8.

[8] Schagen SB, Muller MJ, Boogerd W, Mellenbergh GJ, van Dam FS. Change in cognitive function after chemotherapy: a prospective longitudinal study in breast cancer patients. J Natl Cancer Inst 2006;98:1742-5.

[9] Mehnert A, Scherwath A, Schirmer L, Schleimer B, Petersen C, Schulz-Kindermann F, et al. The association between neuropsychological impairment, self-perceived cognitive deficits, fatigue and health related quality of life in breast cancer survivors following standard adjuvant versus high-dose chemotherapy. Patient Educ Couns 2007;66(1):108-18.

[10] Scherwath A, Mehnert A, Schleimer B, Schirmer L, Fehlauer F, Kreienberg R, et al. Neuropsychological function in high-risk breast cancer survivors after stem-cell supported high-dose therapy versus standard-dose chemotherapy: evaluation of long-term treatment effects. Ann Oncol 2006;17(3):415-23.

[11] Wieneke MH, Dienst ER. Neuropsychological assessment of cognitive functioning following chemotherapy for breast cancer. Psychooncology 1995;4:61-6.

[12] Seigers R, Timmermans J, van der Horn HJ, de Vries EF, Dierckx RA, Visser L, et al. Methotrexate reduces hippocampal blood vessel density and activates microglia in rats but does not elevate central cytokine release. Behav Brain Res 2010;207(2):265-72.

[13] Seigers R, Schagen SB, Beerling W, Boogerd W, van Tellingen O, van Dam FS, et al. Long-lasting suppression of hippocampal cell proliferation and impaired cognitive performance by methotrexate in the rat. Behav Brain Res 2008;186(2):168-75.

[14] Raffa RB, Duong PV, Finney J, Garber DA, Lam LM, Matthew SS, et al. Is ‘chemo-fog’/’chemo-brain’ caused by cancer chemotherapy? J Clin Pharm Ther 2006;31(2):129-38.

[15] Ahles TA, Saykin AJ, McDonald BC, Li Y, Furstenberg CT, Hanscom BS, et al. Longitudinal assessment of cognitive changes associated with adjuvant treatment for breast cancer: impact of age and cognitive reserve. J Clin Oncol 2010;28(29):4434-40.

[16] Adams-Price CE, Morse LW, Cross GW, Williams M, Wells-Parker E. The effects of chemotherapy on useful field of view (UFOV) in younger and older breast cancer patients. Exp Aging Res 2009;35:220-34.

[17] Plassman BL, Langa KM, Fisher GG, Heeringa SG, Weir DR, Ofstedal MB, et al. Prevalence of dementia in the United States: the aging, demographics, and memory study. Neuroepidemiology 2007;29(1-2):125-32.

[18] Rodin MB, Mohile SG. A practical approach to geriatric assessment in oncology. J Clin Oncol 2007;25(14):1936-44.

[19] Rodin G. Accumulating evidence for the effect of chemotherapy on cognition. J Clin Oncol 2012;30(29):3568-9.

[20] Jenkins V, Shillings V, Deutsch G, Bloomfield D, Morris R, Allan S, et al. A 3-year prospective study of the effects of adjuvant treatments on cognition in women with early stage breast cancer. Br J Cancer 2006;94(6):828-34.

[21] Ahles TA, Saykin AJ, Noll WW, Furstenberg CT, Guerin S, Cole B, et al. The relationship of APOE genotype to neuropsychological performance in long-term cancer survivors treated with standard dose chemotherapy. Psychooncology 2003;12(6):612-9.

[22]  Small  BJ,  Rawson  KS,  Walsh  E,  Jim  HS,  Hughes  TF,  Iser  L,  et al.  Catechol-O-methyltransferase  genotype  modulates  cancer  treatment-related  cognitive  deficits  in breast cancer survivors. Cancer 2011;117(7):1369-76.

[23] Laws SM, Clarnette RM, Taddei K, Martins G, Paton A, Hallmayer J, et al. APOE-epilson4 and APOE-491A polymorphisms in individuals with subjective memory loss. Mol Psychiatry 2002;7(7):768-75.

[24]  Nathoo  N,  Chetty R,  van  Dellen  JR,  Barnett  GH.  Genetic  vulnerability  following traumatic brain injury: the role of apolipoprotein E. Mol Pathol 2003;56(3):132-6.

[25] Pezawas L, Verchinski BA, Mattay VS, Callicott JH, Kolachana BS, Straub RE, et al. The brain-derived neutrophic factor val66met polymorphism and variation in human cortical morphology. J Neurosci 2004;24(45):10099-102.

[26] Matsumoto M, Weickert CS, Akil M, Lipska BK, Hyde TM, Herman MM, et al. Catechol O-methyltransferase mRNA expression in human and rat brain: evidence for a role in cortical neuronal function. Neuroscience 2003;116(1):127-37.

[27]  McAllister  TW,  Ahles  TA,  Saykin  AJ,  Ferguson  RJ,  McDonald  BC,  Lewis  LD,  et  al. Cognitive effects of cytotoxic cancer chemotherapy: predisposing risk factors and potential treatments. Curr Psychiatry Rep 2004;6(5):364-71.

[28]  Palmer  JL,  Trotter T,  Joy  AA,  Carlson  LE.  Cognitive  effects  of tamoxifen  in  pre-menopausal women with breast cancer compared to healthy controls. J Cancer Surviv 2008;2(4):275-82.

[29] Zec RF, Trivedi MA. The effects of estrogen replacement therapy on neuropsychological functioning  in  postmenopausal  women  with  and  without  dementia:  a  critical  and theoretical review. Neuropsychol Rev 2002;12(2):65-109.

[30] Mohile SG, Lacy M, Rodin M, Bylow K, Dale W, Meager MR, et al. Cognitive effects of androgen deprivation therapy in an older cohort of men with prostate cancer. Crit Rev Oncol Hematol 2010;75(2):152-9.

[31] Wilson CJ, Finch CE, Cohen HJ. Cytokines and cognition—the case for a head-to-toe inflammatory paradigm. J Am Geriatr Soc 2002;50(12):2041-56.

[32] Pusztai L, Mendoza TR, Reuben JM, Martinez MM, Willey JS, Lara J, et al. Changes in plasma levels of inflammatory cytokines in response to paclitaxel chemotherapy. Cytokine 2004;25(3):94-102.

[33] Villani F, Busia A, Villani M, Vismara C, Viviani S, Bonfante V. Serum cytokine in response to chemo-radiotherapy for Hodgkin’s disease. Tumori 2008;94(6):803-8.

[34] Meyers CA, Albitar M, Estey E. Cognitive impairment, fatigue, and cytokine levels in  patients  with  acute  myelogenous  leukemia  or myelodysplastic  syndrome.  Cancer 2005;104(4):788-93.

[35] Capuron L, Ravaud A, Dantzer R. Timing and specificity of the cognitive changes induced by interleukin-2 and interferon-alpha treatments in cancer patients. Psychosom Med 2001;63(3):376-86.

[36] Theodoulou M, Seidman D. Cardiac effects of adjuvant therapy for early breast cancer. Semin Oncol 2003;30(6):730-9.

[37] Vearncombe KJ, Rolfe M, Wright M, Pachana NA, Andrew B, Beadle G. Predictors of cognitive decline after chemotherapy in breast cancer patients. J Int Neuropsychol Soc 2009;15(6):951-62.

[38] Jacobsen PB, Garland LL, Booth-Jones M, Donovan KA, Thors CL, Winters E, et al. Relationship of  hemoglobin  levels  to  fatigue and cognitive functioning among  cancer patients receiving chemotherapy. J Pain Symptom Manage 2004;28(1):7-18.

[39] Iconomou G, Koutras A, Karaivazoglou K, Kalliolias GD, Assimakopoulos K, Argyriou AA, et al. Effect of epoetin alpha therapy on cognitive function in anaemic patients with solid tumours undergoing chemotherapy. Eur J Cancer Care 2008;17(6):535-41.

[40] Fishel ML, Vasko MR, Kelley MR. DNA repair in neurons: so if they don’t divide what’s to repair? Mutat Res 2007;614(1-2):24-36.

[41]  Mariani  E,  Polidori  MC,  Cherubini  A,  Mecocci  P.  Oxidative stress  in  brain  aging, neurodegenerative and vascular disease: an overview. J Chromatogr B Analyt Technol Biomed Life Sci 2005;827(1):65-75.

[42] Tsang WP, Chau SP, Kong SK, Fung KP, Kwok TT. Reactive oxygen species mediate doxorubicin induced p53-independent apoptosis. Life Sci 2003;73(16):2047-58.

[43] Schroder CP, Wisman GBA, de Jong S, van der Graaf WTA, Ruiters MHJ, Mulder NH, et al. Telomere length in breast cancer patients before and after chemotherapy with or without stem cell transplantation. Br J Cancer 2001;84(10):1348-53.

[44] Lahav M, Uziel O, Kestenbaum M, Fraser A, Shapiro H, Radnay J, et al. Nonmyeloablative conditioning   does   not   prevent   telomere shortening   after   allogeneic   stem   cell transplantation. Transplantation 2005;80(7):969-76.

[45] Flanary BE, Streit WJ. Progressive telomere shortening occurs in cultured rat microglia, but not astrocytes. Glia 2004;45(1):75-88.

[46]  Verstappen  CC,  Heimans JJ, Hoekman K, Postma TJ. Neurotoxic complications of chemotherapy in patients with cancer: clinical signs and optimal management. Drugs 2003;63(15):1549-63.

[47] Aluise CD, Sultana R, Tangpong J, Vore M, St Clair D, Moscow JA, et al. Chemo brain (chemo fog) as a potential side effect of doxorubicin administration: role of cytokine- induced,   oxidative/nitrosative   stress   in   cognitive   dysfunction.   Adv   Exp   Med   Biol 2010;678:147-56.

[48] Pfeilschifter J, Koditz R, Pfohl M, Schatz H. Changes in proinflammatory cytokine activity after menopause. Endocr Rev 2002;23(1):90-119.

[49]  Bower JE, Ganz PA, Dickerson SS, Petersen L, Aziz N, Fahey JL, et al. Diurnal cortisol rhythm and fatigue in breast cancer survivors. Psychoneuroendocrinology 2005;30(1):92-100.

[50]  Han  R,  Yang  YM,  Dietrich  J,  Luebke  A,  Mayer-Proschel  M,  Noble  M.  Systemic 5-fluorouracil treatment causes a syndrome of delayed myelin destruction in the central nervous system. J Biol 2008;7(4):12.

[51] Hoffmeyer S, Burk O, von Richter O, Arnold HP, Brockmoller J, Johne A, et al. Functional polymorphisms of the human multidrug-resistance gene: multiple sequence variations and correlation of one allele with P-glycoprotein expression and activity in vivo. Proc Natl Acad Sci USA 2000;97(7):3473-8.

[52]   Kerb   R.   Implications   of   genetic   polymorphisms   in   drug transporters   for pharmacotherapy. Cancer Lett 2006;234(1):4-33.

[53] Deprez S, Billiet T, Sunaert S, Leemans A. Diffusion tensor MRI of chemotherapy-induced cognitive impairment in non-CNS cancer patients: a review. Brain Imaging Behav 2013;7(4):409-35.

[54]  Vardy  J,  Rourke  S,  Tannock  IF.  Evaluation  of  cognitive  function associated  with chemotherapy: a review of published studies and recommendations for future research. J Clin Oncol 2008;25(17):2455-63.

Categories
Review Articles

HIV/AIDS: let’s see how far we’ve come

Now more than ever, HIV-positive people are living longer and healthier lives because of access to antiretroviral therapy. Healthcare organisations are working to ensure that HIV-positive people all over the world have access to the medical care they need to stay healthy. In the last few years, research into vaccine development, genetics-based approaches and novel therapies has achieved some progress, and drug therapy regimens have become more effective. Public health strategies have aimed to reduce transmission, and early access to treatment has dramatically improved quality of life. With resources and funding aimed in the right directions, it will be possible to continue making significant progress towards better prevention, improved treatment options and perhaps even a cure for HIV and the elimination of the global AIDS epidemic. This article reviews some of the successes and difficulties in the scientific, research and treatment arms of combating the HIV epidemic. There is still much work to be done, but for now, let’s see how far we’ve come.


Introduction

The 2011 UNAIDS World AIDS Day report concisely outlines the aims of global health efforts against HIV/AIDS: “Zero new infections. Zero discrimination. Zero AIDS-related deaths”. [1] As global citizens and future medical practitioners, it is our duty to engage with the medical issues that are of importance to the world, and we should work to make the eradication of HIV/AIDS one of those key issues. This paper discusses, from the scientific perspective, some of our triumphs and tribulations in combating the complex, evasive and resourceful opponent that is HIV.

The Basics

There are two main types of HIV: HIV-1 and HIV-2, with HIV-1 being the more common of the two. [2,3] HIV is a retrovirus with a high degree of variability, attributable to the error-prone nature of its reverse transcriptase enzyme and its high recombination rate. [4] The HIV genome consists of a number of genes (see figure 1), including gag, pol, env and nef, all of which have been used as potential antigens for the generation of vaccines. [5]

[Figure 1: organisation of the HIV genome, showing the gag, pol, env and nef genes.]

HIV primarily infects cluster-of-differentiation-4 (CD4) cells, including CD4 T-cells, macrophages, monocytes and dendritic cells. [6] Initially, the immune response to viral infection (including CD8 T-cell mediated killing, complement activation and antibody production) is effective at removing HIV-infected cells [7], but continued immune activation and antigen presentation spread the virus to the lymph nodes and the rest of the body. [8,9] Continued immune response to the replicating virus drives the development of escape mutations and ultimately overwhelms the immune system’s ability to respond. [10] As the infection progresses, the rate of destruction of infected CD4 T-cells exceeds the rate of synthesis and the CD4 count declines.

The 2008 HIV infection case definition replaces past criteria for HIV infection progression to AIDS, and divides the infection into stages reflecting the decline in immune function. [11] The stages of infection are:

Stage 1: CD4 T-cells ≥ 500 cells/mm3 (≥ 29% of total lymphocytes) with no AIDS-defining conditions.

Stage 2: CD4 T-cells 200-499 cells/mm3  (14-28% of total lymphocytes) with no AIDS-defining conditions.

Stage 3 (Progression to AIDS): CD4 T-cells <200 cells/mm3 (<14% of total lymphocytes) or the emergence of an AIDS-defining condition (regardless of the CD4 T-cell count).

AIDS-related conditions include a range of infections, such as oesophageal candidiasis, Cryptococcus, Kaposi sarcoma, Mycobacterium avium/tuberculosis and Pneumocystis jirovecii pneumonia. [11] AIDS-related deaths are usually due to severe opportunistic infection, as the immune system is no longer able to fight basic infections. [1]
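The staging criteria above amount to a simple decision procedure. As an illustrative sketch only (not a clinical tool; the function name and interface are my own), they can be expressed as:

```python
def hiv_stage(cd4_count, aids_defining_condition):
    """Stage HIV infection per the 2008 surveillance case definition.

    cd4_count: CD4 T-cell count in cells/mm3.
    aids_defining_condition: True if any AIDS-defining condition is present.
    """
    # An AIDS-defining condition means stage 3 (AIDS) regardless of CD4 count.
    if aids_defining_condition or cd4_count < 200:
        return 3
    if cd4_count < 500:
        return 2  # 200-499 cells/mm3, no AIDS-defining condition
    return 1      # >= 500 cells/mm3, no AIDS-defining condition
```

Note how the AIDS-defining condition overrides the count, mirroring the "regardless of the CD4 T-cell count" clause of stage 3.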

HIV is primarily transmitted via bodily fluids, including blood, vaginal secretions and semen, and across the placenta; however, it is not readily found in saliva unless there are cuts or ulcers providing access to the bloodstream. [12] Although sexual contact is the most common method of transmission, other mechanisms such as needle-stick injury, sharing needles and transfusion of HIV-infected blood are far more likely to result in infection. [12] The infectivity of a person with HIV is proportional to their viral load (the number of copies per mL of blood). [12]

HAART Therapy

According to UNAIDS estimates, 34 million people around the world were living with HIV in 2010, with two-thirds of the global total in sub-Saharan Africa. [1] By the end of 2012, this had increased to 35.3 million people. [13] Now more than ever, HIV-positive people are living longer and healthier lives. In part, this is due to the provision of effective treatment in the form of Highly Active Anti-Retroviral Therapy (HAART).

The first antiretroviral drug described and approved for clinical use was AZT (3’-azido-3’-deoxythymidine), a thymidine analogue. Now, there are seven categories of antiretroviral drugs, with more than 25 unique drugs. [14] HAART involves taking a combination of at least three drugs from at least two classes of antiretroviral drugs, with the aim of reconstituting lost CD4 T-cells, minimising viral load and reducing viral evolution. [15-17] The efficacy of HAART has changed HIV from a disease of significant morbidity and mortality to a manageable chronic condition. Further, HAART can minimise transmission of the virus and prolong the healthy lifetime of the individual by reducing viral load to undetectable levels. [18]
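The combination rule described above (at least three drugs drawn from at least two antiretroviral classes) can be sketched as a simple check. This is illustrative only; the function name, data layout and example drug names are my own choices, not a prescribing tool:

```python
def is_haart_combination(regimen):
    """Return True if a regimen meets the basic HAART combination rule:
    at least three drugs from at least two antiretroviral classes.

    regimen: list of (drug_name, drug_class) pairs.
    """
    classes = {drug_class for _, drug_class in regimen}
    return len(regimen) >= 3 and len(classes) >= 2

# Example: the classic pattern of two NRTIs plus an NNRTI qualifies.
example = [("zidovudine", "NRTI"), ("lamivudine", "NRTI"), ("efavirenz", "NNRTI")]
```

Two drugs, or three drugs all from a single class, would fail the check, reflecting why combination therapy across classes is required to suppress viral evolution.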

At present, much research is focused on the timing of initiation of HAART in HIV+ individuals, and on the most effective combinations of drugs. [19,20] In general, those who initiate HAART earlier are more likely to die at older ages and of non-AIDS causes. [21] Studies have found conflicting results when tracking the disease outcomes of patients commenced on HAART at different stages of disease. For example, a large collaborative study found that patients commenced on HAART with CD4 T-cell counts of 351-450 cells/mm3 had a lower rate of AIDS and death than patients whose commencement on HAART was deferred until there was further CD4 T-cell decline. [18,22] Another study showed that patients commenced on HAART at CD4 counts <500 cells/mm3 had slower disease progression than those in whom treatment was deferred, but this benefit was not seen when commencing HAART at CD4 T-cell counts of 500-799 cells/mm3. [19] Recent evidence has shown that early treatment enhances recovery of CD4 T-cells to normal levels. [20,23] The World Health Organisation’s HIV treatment guidelines issued in June 2013 now recommend commencing treatment when an individual’s CD4 T-cell count falls below 500 cells/mm3, or immediately on diagnosis for pregnant women, children under 5, those with HIV-associated comorbidities such as tuberculosis and hepatitis B, and for HIV+ individuals in a serodiscordant relationship with an HIV-negative partner. [13]

HAART requires strict adherence, and side effects can impact on the patient’s quality of life. Long-term use is associated with toxicity, particularly to the liver, kidneys, bone marrow, brain, cardiovascular system and gastrointestinal tract. [15,24] Further, it has been noted that illnesses that typically occur with ageing appear prematurely in HIV+ patients on HAART. This is thought to be only partially due to the infection, and partly a side effect of the drugs involved in treatment. [25] Torres and Lewis (2014) provide an overview of what is known about premature ageing in the HIV+ patient and how this relates to HAART drugs. [25]

Non-compliance with HAART leads to the resurgence of viral replication, with increased viral load and the potential for development of drug-resistant mutant strains of HIV. [15,24] This can make the patient more difficult to treat, as new drug choices may be limited, and increases in viral load raise the risk of further spread of the virus. HAART also does not purge reservoirs of latently infected cells [26], and as such, there is currently no cure for HIV.

Integration, latency and treatment challenges

There are a number of characteristics of the HIV virus that make it challenging to eradicate. HIV exhibits significant genetic diversity, both within an individual patient and within a population. As a retrovirus, HIV has an inherent ability to establish early latency within the DNA of host cells, where it remains for the lifetime of the cells and is unable to be removed by the immune system. [5]

One major development was made in 2013, when Hauber et al. reported the successful use of an antiretroviral gene therapy. A site specific recombinase (Tre) targeted to the HIV-1 long terminal repeat (LTR) was used to excise the HIV provirus from infected CD4 T-cells, functionally curing infected cells of their HIV infection, without cytopathic effects. [27] This study provides promising evidence to suggest that in future, genetically oriented antiretroviral technologies may have the potential to provide a cure for HIV infection.

Development of a prophylactic vaccine for HIV

Although it was initially believed that HIV infection was a simple illness of immunosuppression, we now know that HIV sufferers do mount strong immune responses to the HIV virus, although these responses are insufficient to control the virus or eradicate the infection. [28] One of the major challenges in producing a prophylactic vaccine is the high variability of the HIV virus. There is 25-30% variation between subtypes of HIV-1, and 15-20% variation within any subtype; furthermore, viral quasi-species within any infected individual can vary by up to 10%. [29] The problem this variation causes is illustrated in natural infection, where the antibodies present in infected individuals are functional but do not eliminate the infection, owing to the genetic variety seen in mutants created under the pressure of the immune system. [28,30] This variability makes it difficult to know which antigens to use to generate the required immune response to control the infection. Strategies being explored to combat this include the use of consensus sequences (fusing the most conserved portions of the virus and trying to produce immunity to such a sequence), conserved region antigens (specifically choosing the most conserved antigens to generate immune responses) and multiple antigen cocktails (vaccination with multiple variants of one immunogen, or several different immunogens in combination). [31]

The first prototype HIV vaccine tested utilised purified monomeric env gp120 immunogens (a component of the virus’s surface envelope protein) in an attempt to generate virus-specific antibody responses. Unfortunately, early trials showed that this vaccine was unable to induce the production of neutralising antibodies, and did not prevent infection with HIV-1 in humans. [31,32] Since then, attempts have been made at prophylactic vaccination using a range of differing immunogens, including Tat and Gag. In most cases, these vaccinations have proven safe and well tolerated, and resulted in the production of anti-HIV antibodies that may not be seen in natural infection. [33] Promising results have come from a range of trials, including the control of infected CD4 T-cells by Gag-specific CD8 T-cells, proportional to the number of Gag epitopes recognised. [34] The success at eliciting immune responses demonstrated thus far with Gag may be due to its relatively well-conserved sequence. [34]

The STEP Study and Phambili HIV vaccine trials both used an Adenovirus 5 (Ad5) vector carrying gag, pol and nef immunogens, an approach shown to induce a good CD8 T-cell response. [35] Subsequent analyses demonstrated that the vaccine provided no protection from infection, nor a reduced early viral level. [36] Further, there was an increased rate of HIV-1 acquisition in groups of vaccinated individuals from the STEP study, particularly in men who were already Ad5 seropositive and uncircumcised. [37]

The primary challenge in the use of viral vectors to deliver a HIV vaccine into cells is the pre-existing immunity of humans to viral vectors, leading to the neutralisation and removal of vectors from the circulation before the transfer of the immunogen to cells. In addition, vectors induce mucosal homing in T-cells, making them more susceptible to infection [38] and explaining the increased susceptibility to infection observed in the Step Study. [37,39]

At present, there is much debate over the necessary aims for a successful HIV vaccine: for example, whether to focus on the development of anti-HIV antibodies, or the induction of a CTL response. [30] Recent papers have described the ability of combinations of broadly neutralising antibodies to successfully neutralise HIV. [40,41]

MiRNAs

A new area of interest is the use of microRNAs (miRNA) as potential next generation therapeutic agents for the treatment of HIV infection and management of viremia. MiRNAs are small, noncoding RNA fragments responsible for fine-tuning and negatively regulating gene expression. Roles for microRNAs have been found in metabolism, development and growth, and dysregulation of miRNAs has been implicated in loss of tumour suppression and the development of cancer. [42,43]

The utility of miRNA-oriented technology has already been illustrated in the context of hepatitis C infection. In 2005, Jopling et al. reported that miR-122 was highly abundant in human hepatocytes, and that its presence may facilitate replication of the viral RNA and encourage survival of the virus in the liver. [44] Since this pivotal paper, the first drug targeting a miRNA has been developed: miravirsen, a miR-122 inhibitor [45], which in human trials has exhibited dose-dependent antiviral activity [46], with no dose-limiting adverse events or escape mutations observed. [47]

The first attempt to find human cellular miRNAs directly targeting the HIV genome was by Hariharan et al. (2005). [48] It was then later shown that one of the miRNAs identified was capable of inhibiting HIV nef expression, and decreasing HIV replication. [49] Further research demonstrates that cellular miRNAs potently inhibit HIV-1 production in resting primary CD4 T-cells, suggesting that cellular miRNAs are pivotal in HIV-1 latency. [50]

In 2007, Triboulet et al. reported that HIV-1 infection caused a down-regulation of specific miRNAs in mononuclear blood cells, and that this was necessary for effective viral replication. [51] Witwer et al. (2012) subsequently showed that miRNA profiling of infected cells could be used to distinguish elite suppressors, viraemic HIV-1 patients and uninfected controls from one another [52], indicating significant changes to the cellular miRNA profiles of cells infected by HIV. From these studies it is evident that different cellular miRNAs modulate, and are modulated by, HIV infection, with different miRNAs implicated in different cells, contexts and environments. More research in this area is required, and will hopefully give rise to a new generation of therapeutic agents for HIV. The interested reader is referred to more specific reviews [53-55] for more detailed information.

Where to from here?

HIV has proven itself a formidable opponent to our aims at a global improvement in healthcare and quality of life. However, recent research gives hope that advanced treatments, better prevention and even a cure may one day be possible. It is clear that the best way to tackle HIV is by a coordinated approach, where global health strategies, clinical medicine and research work together to help eradicate this epidemic.

We have had some success in the use of novel approaches like targeting cellular miRNAs and excising HIV DNA from the human genome, and there have also been some promising results in the generation of immunity to infection through a HIV vaccine. We are constantly learning more about how HIV interacts with the host immune system, and how to overcome it. However, progress on the development of a HIV vaccine has stalled somewhat after the findings of the STEP and Phambili studies.

It is important to acknowledge the significant achievements that have been made worldwide through HIV public health campaigns. In 2012, a record 9.7 million people were receiving antiretroviral therapy, and the incidence of HIV is falling each year, with a 33% decline from 2001 to 2012. [56] However, there is still much work to be done to ensure all people are able to access HIV testing and treatment. One of the aims of the Millennium Development Goals is to provide universal access to treatment for HIV/AIDS to all those who need it, although this is yet to be achieved. [56,57]

Medicine has come a long way in the understanding and treatment of the complex and multifaceted problem that is the global AIDS epidemic. I urge medical students to be informed and interested in the HIV epidemic, and to be involved in the clinical, research and community groups tackling this problem. With continued efforts and dedication, there is hope that in our lifetime we may see the realisation of the ambitious aims of the 2011 UNAIDS World AIDS Day report: “Zero new infections. Zero discrimination. Zero AIDS-related deaths”.

Acknowledgements

Many thanks to my family and friends who make life fun and the effort worthwhile. This article is in acknowledgement and appreciation of the hard work and dedication of HIV/AIDS researchers everywhere.

Conflict of interest

None declared.

Correspondence

L Fowler: lauren.fowler@griffithuni.edu.au

References

[1] UNAIDS. World AIDS Day Report 2011: How to get to zero: Faster. Smarter. Better. 2011.

[2] Barre-Sinoussi F, Chermann JC, Rey F, Nugeyere MT, Chamaret S, Gruest J. Isolation of a T-lymphotropic retrovirus from a patient at risk for acquired immune deficiency syndrome (AIDS). Science. 1983; 220(4599):868-71.

[3] Clavel F, Guetard D, Brun-Vezinet F, Chamaret S, Rey M, Santos-Ferreira M, et al. Isolation  of  a  new  human  retrovirus  from  West African  patients  with  AIDS.  Science. 1986;223(4761):343-6.

[4] Preston B, Poiesz B, Loeb L. Fidelity of HIV-1 Reverse Transcriptase. Science. 1988; 242(4882):1168-71.

[5] Chhatbar C, Mishra R, Kumar A, Singh SK. HIV vaccine: hopes and hurdles. Drug Discov Today. 2011; (21-22):948-56.

[6] Cohen M, Shaw G, McMichael A, Haynes B. Acute HIV-1 infection. New Eng J Med. 2011; 364(20):1943-54.

[7] Borrow P, Lewicki H, Hahn BH, Shaw GM, Oldstone MBA. Virus-specific CD8+ cytotoxic T-lymphocyte activity associated with control of viremia in primary human immunodeficiency virus Type 1 infection. J Virol. 1994; 68(9):6103-10.

[8] Zhang Z. Sexual transmission and propagation of SIV and HIV in resting and activated CD4+ T cells. Science. 1999; 286(5443):1353-7.

[9] Cameron P, Pope M, Granelli-Piperno A, Steinman RM. Dendritic cells and the replication of HIV-1. J Leukoc Biol. 1996; 59(2):158.

[10] Richman DD, Wrin T, Little SJ, Petropoulos CJ. Rapid evolution of the neutralizing antibody response to HIV type 1 infection. P Natl Acad Sci 2003; 100(7):4144-9.

[11] Schneider E, Whitmore S, Glynn MK, Dominguez K, Mitsch A, McKenna M. Revised surveillance case definitions for HIV infection among adults, adolescents, and children aged <18 months and for HIV infection and AIDS among children aged 18 months to <13 years. Morb Mortal Wkly Rep. 2008; 57(10):1-8.

[12] Galvin SR, Cohen MS. The role of sexually transmitted diseases in HIV transmission. Nat Rev Microbiol. 2004; 2(1):33-42.

[13] UNAIDS. GLOBAL REPORT: UNAIDS report on the global AIDS epidemic. 2013.

[14] De Clercq E. Antiretroviral drugs. Curr Opin Pharmacol. 2010 Oct;10(5):507-15.

[15] Yeni P. Update on HAART in HIV. J Hepatol. 2006; 44(1 Suppl):S100-3.

[16] Hammer SM, Saag MS, Schechter M, Montaner JSG, Schooley RT, Jacobsen DM, et al. Treatment for adult HIV infection: 2006 Recommendations of the International AIDS Society USA Panel. JAMA. 2006; 296(7):827-43.

[17] Lee GQ, Dong W, Mo T, Knapp DJ, Brumme CJ, Woods CK, et al. Limited evolution of inferred HIV-1 tropism while viremia is undetectable during standard HAART therapy. PloS one. 2014; 9(6):e99000.

[18] Cohen M, Chen YQ, McCauley M, Gamble T, Hosseinipour M, Kumarasamy N, et al. Prevention of HIV-1 infection with early antiretroviral therapy. New Eng J Med. 2011; 365(6):493-505.

[19] Writing Committee for the Cascade Collaboration. Timing of HAART initiation and clinical outcomes in human immunodeficiency virus type 1 seroconverters. Arch Internal Med. 2011; 171(17):1560-9.

[20] US Department of Health and Human Services. Panel on antiretroviral guidelines for adults and adolescents: guidelines for the use of antiretroviral agents in HIV-1-infected adults and adolescents. 2009; 1-161.

[21] Wada N, Jacobson LP, Cohen M, French A, Phair J, Munoz A. Cause-specific mortality among HIV-infected individuals, by CD4(+) cell count at HAART initiation, compared with HIV-uninfected individuals. AIDS. 2014; 28(2):257-65.

[22] Sterne JA, May M, Costagliola D, de Wolf F, Phillips AN, Harris R, et al. Timing of initiation of antiretroviral therapy in aids-free hiv-1-infected patients: A collaborative analysis of 18 HIV cohort studies. Lancet. 2009; 373(9672):1352-63.

[23] Le T, Wright EJ, Smith DM, He W, Catano G, Okulicz JF, et al. Enhanced CD4+ T-cell recovery with earlier HIV-1 antiretroviral therapy. New Eng J Med. 2013; 368(3):218-30.

[24] Gallant J. Initial therapy of HIV infection. J Clin Virol. 2002; 25:317-33.

[25] Torres RA, Lewis W. Aging and HIV/AIDS: pathogenetic role of therapeutic side effects. Lab Invest. 2014; 94(2):120-8.

[26] Le Douce V, Janossy A, Hallay H, Ali S, Riclet R, Rohr O, et al. Achieving a cure for HIV infection: do we have reasons to be optimistic? J Antimicrob Chemoth. 2012; 67(5):1063-74.

[27] Hauber I, Hofmann-Sieber H, Chemnitz J, Dubrau D, Chusainow J, Stucka R, et al. Highly significant antiviral activity of HIV-1 LTR-specific tre-recombinase in humanized mice. PLoS pathog. 2013 9(9):e1003587.

[28] Burton DR, Stanfield RL, Wilson IA. Antibody vs. HIV in a clash of evolutionary titans. P Natl Acad Sci USA. 2005; 102(42):14943-8.

[29] Thomson MM, Pérez-Álvarez L, Nájera R. Molecular epidemiology of HIV-1 genetic forms and its significance for vaccine development and therapy. Lancet Infect Dis. 2002; 2(8):461-71.

[30] Schiffner T, Sattentau QJ, Dorrell L. Development of prophylactic vaccines against HIV-1. Retrovirology. 2013; 10:72.

[31] Barouch DH, Korber B. HIV-1 vaccine development after STEP. Annu Rev Med. 2010; 61:153-67.

[32] Flynn NM, Forthal DN, Harro CD, Judson FN, Mayer KH, Para MF, rgp120 HIV Vaccine Study Group. Placebo-controlled phase 3 trial of a recombinant glycoprotein 120 vaccine to prevent HIV-1 infection. J Infect Dis. 2005; 191(5):654-65.

[33] Ensoli B, Fiorelli V, Ensoli F, Lazzarin A, Visintini R, Narciso P, et al. The preventive phase I trial with the HIV-1 Tat-based vaccine. Vaccine. 2009; 28(2):371-8.

[34] Sacha JB, Chung C, Rakasz EG, Spencer SP, Jonas AK, Bean AT, et al. Gag-specific CD8+ T lymphocytes recognize infected cells before AIDS-virus integration and viral protein expression. J Immunol. 2007; 178(5):2746-54.

[35] McElrath MJ, DeRosa SC, Moodie Z, Dubey S, Kierstead L, Janes H, et al. HIV-1 vaccine-induced immunity in the test-of-concept Step Study: a case-cohort analysis. Lancet. 2008; 372(9653):1894-905.

[36] Buchbinder SP, Mehrotra DV, Duerr A, Fitzgerald DW, Mogg R, Li D, et al. Efficacy assessment of a cell-mediated immunity HIV-1 vaccine (the Step Study): a double-blind, randomised, placebo-controlled, test-of-concept trial. Lancet. 2008; 372(9653):1881-93.

[37] Gray G, Buchbinder S, Duerr A. Overview of STEP and Phambili trial results: two phase IIb test-of-concept studies investigating the efficacy of MRK adenovirus type 5 gag/pol/nef subtype B HIV vaccine. Curr Opin HIV AIDS. 2010; 5(5):357-61.

[38] Benlahrech A, Harris J, Meiser A, Papagatsias T, Hornig J, Hayes P, et al. Adenovirus vector vaccination induces expansion of memory CD4 T cells with a mucosal homing phenotype that are readily susceptible to HIV-1. PNAS 2009; 106(47):19940-5.

[39] Robb ML. Failure of the Merck HIV vaccine: an uncertain step forward. Lancet. 2008; 372(9653):1857-8.

[40]  Walker  LM,  Huber  M,  Doores  KJ,  Falkowska  E,  Pejchal  R, Julien  JP,  et al.  Broad neutralization  coverage  of  HIV  by  multiple highly  potent  antibodies.  Nature.  2011; 477(7365):466-70.

[41] Wu X, Yang ZY, Li Y, Hogerkorp CM, Schief WR, Seaman MS, et al. Rational design of envelope identifies broadly neutralizing human monoclonal antibodies to HIV-1. Science. 2010; 329(5993):856-61.

[42] Houzet L, Jeang KT. MicroRNAs and human retroviruses. Biochim Biophys Acta. 2011; 1809(11-12):686-93.

[43] Croce CM. Causes and consequences of microRNA dysregulation in cancer. Nat Rev Genet. 2009 (10):704-14.

[44] Jopling C, Yi M, Lancaster A, Lemon S, Sarnow P. Modulation of Hepatitis C virus RNA abundance by a liver-specific microRNA. Science. 2005; 309(5740):1577-81.

[45] Santaris Pharma A/S advances Miraversen, the first microRNA targeted drug to enter clinical trials, into phase 2 to treat patients infected with hepatitis C virus. Clinical Trials Week. 2010; 309.

[46] Janssen H, Reesink H, Zeuzem S, Lawitz E, Rodriguez-Torres M, Chen A, et al. A randomized, double-blind, placebo (plb) controlled safety and anti-viral proof of concept study of miravirsen (MIR), an oligonucleotide targeting miR-122, in treatment naïve patients with genotype 1 (gt1) chronic HCV infection. Hepatology. 2011; 54(1430).

[47] Janssen HL, Reesink HW, Lawitz EJ, Zeuzem S, Rodriguez-Torres M, Patel K, et al. Treatment of HCV infection by targeting microRNA. New Eng J Med. 2013; 368(18):1685-94. [48] Hariharan M, Scaria V, Pillai B, Brahmachari SK. Targets for human encoded microRNAs in HIV genes. Biochem Bioph Res Co. 2005 Dec 2;337(4):1214-8.

[49] Ahluwalia JK, Khan SZ, Soni K, Rawat P, Gupta A, Hariharan M, et al. Human cellular microRNA hsa-miR-29a interferes with viral nef protein expression and HIV-1 replication. Retrovirology. 2008; 5:117.

[50] Huang J, Wang F, Argyris E, Chen K, Liang Z, Tian H, et al. Cellular microRNAs contribute to HIV-1 latency in resting primary CD4+ T lymphocytes. Nat Med. 2007; (10):1241-7.

[51]  Triboulet  R,  Mari  B,  Lin  YL,  Chable-Bessia  C,  Bennasser  Y,  Lebrigand  K,  et  al. Suppression of microRNA-silencing pathway by HIV-1 during virus replication. Science. 2007; 315(5818):1579-82.

[52] Witwer KW, Watson AK, Blankson JN, Clements JE. Relationships of PBMC microRNA expression, plasma viral load, and CD4+ T-cell count in HIV-1-infected elite suppressors and viremic patients. Retrovirology. 2012; 9:5.

[53] Klase Z, Houzet L, Jeang KT. MicroRNAs and HIV-1: complex interactions. J Biol Chem. 2012; 287(49):40884-90.

[54] Fowler L, Saksena N. Micro-RNA: new players in HIV-pathogenesis, diagnosis, prognosis and antiviral therapy. AIDS Rev. 2013; 15:3-14.

[55] Gupta P, Saksena N. MiRNAs: small molecules with a big impact on HIV infection and pathogenesis. Future Virol. 2013; 8(8):769-81.

[56] UN Department of Public Information. We can end poverty: Millennium development goals and beyond 2015. Fact Sheet Goal 6: Combat HIV/AIDS, malaria and other diseases. [Internet] 2013. Available from: http://www.un.org/millenniumgoals/pdf/Goal_6_fs.pdf

[57]  UN  Department  of  Public  Information.  Goal  6:  Combat  HIV/AIDS,  Malaria and Other Diseases [Internet] 2013 [cited 2014 Sept 5]. Available from: http://www.un.org/millenniumgoals/aids.shtml

Categories
Review Articles

Complementary medicine and hypertension: garlic and its implications for patient centred care and clinical practice

This review aims to explore the impact that patient attitudes, values and beliefs have on healing and the relevant implications these have for clinical practice and patient centred care. Using a Cochrane review as a platform, garlic as a complementary medicine was evaluated based on current societal trends and pertinent clinical practice points. The study found that when engaging with a patient using complementary medicine it is important to consider not only the efficacy of the proposed treatment, but also variation in preparations, any possible interactions and side effects, and the effect of patient beliefs and the placebo effect on clinical outcomes. The use of garlic in the treatment of hypertension could serve to enhance the therapeutic alliance between clinician and patient and potentially improve clinical outcomes.

Introduction

Hypertension is the most common cardiovascular disease in Australia, affecting approximately eleven percent of the population (2.1 million people). [1] The prevalence is twice as high in the Indigenous population, affecting 22 percent of those aged 35 or older. [1] Hypertension is a significant risk factor for transient ischaemic attack, stroke, coronary heart disease and congestive heart failure, increasing the risk of these conditions two- to three-fold. [2] Cardiovascular disease accounts for 47,637 deaths in Australia each year (36 percent of all deaths) and costs the economy a total of $14.2 billion AUD per annum, or 1.7 percent of GDP. [3,4] Hypertension also accounts for six percent of all general practice consultations, making it the most commonly managed condition. [5] Given the significant effect hypertension has on society, it is imperative to evaluate potential therapies to combat it.

Hippocrates is quoted as saying "let food be thy medicine and medicine be thy food". [6] A considerable number of complementary therapies, including cocoa, acupuncture, coenzyme Q10 and garlic, are believed by the general public to be effective in the treatment of hypertension. [7] Medical texts from the ancient civilisations of India, China, Egypt, Rome and Greece all reference the consumption of garlic as having numerous healing properties. [8] Garlic (Allium sativum) was selected as the medicine of choice for this review because it is one of the most widely used and better studied complementary therapies in the management of hypertension. [9]

In addition to the effect of garlic on blood pressure, it is worth considering the implications of using this complementary medicine for patient centred care and clinical practice. Medical students and clinicians are strongly encouraged to recognise a patient's cultural attitudes, values and beliefs and to incorporate them into clinical decision-making, for example by negotiating the use of garlic as a complementary medicine alongside a recognised antihypertensive drug. This review therefore uses a Cochrane review as a platform to explore the findings and implications of controlled studies of garlic for the prevention of cardiovascular morbidity and mortality in hypertensive patients, and to discuss this treatment in the context of patient centred care and good clinical practice.

Methods

The review focused on recent literature surrounding the use of garlic as an antihypertensive. A Cochrane review was used as an exemplar to discuss the broader implications of using garlic as a therapy for hypertension. Use of garlic was explored through the framework of current societal trends, clinical practice and patient centred care. Selected publications present both qualitative and quantitative data.

Results

While the literature search retrieved a number of randomised controlled studies suggesting a beneficial effect of garlic on blood pressure, [5,10] the most recent Cochrane review by Stabler et al. identified only two controlled studies that assessed the benefit of garlic for the prevention of cardiovascular morbidity and mortality in hypertensive patients. [5,11,12] Of the two studies, Kandziora did not report the number of people randomised to each treatment group, meaning the data could not be meta-analysed. [5] The study did report, however, that 200 mg of garlic powder in addition to hydrochlorothiazide-triamterene baseline therapy produced mean reductions of 10-11 mmHg in systolic and 6-8 mmHg in diastolic pressure compared to placebo. [5] Auer's 1990 study randomised 47 patients to receive either 200 mg of garlic powder three times daily or placebo, and found that garlic reduced mean systolic blood pressure by 12 mmHg and diastolic blood pressure by approximately 6-9 mmHg in comparison to placebo. [5] Ried's meta-analysis revealed a mean systolic decrease of 8.4 ± 2.6 mmHg (P≤0.001) and a mean diastolic reduction of 7.3 ± 1.5 mmHg (P≤0.001) in hypertensive patients. [10]
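For readers unfamiliar with how pooled estimates such as Ried's are produced, the standard fixed-effect (inverse-variance) method can be sketched as follows. The trial values below are hypothetical, chosen purely for illustration; they are not the data from the studies discussed above.

```python
import math

def pooled_mean_difference(trials):
    """Fixed-effect (inverse-variance) pooling of mean differences.

    Each trial is a (mean_difference, standard_error) pair; trials with
    smaller standard errors receive proportionally larger weights.
    """
    weights = [1.0 / se ** 2 for _, se in trials]
    pooled = sum(w * md for (md, _), w in zip(trials, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical systolic mean differences (mmHg, garlic minus placebo)
# and their standard errors -- illustrative values only.
trials = [(-10.5, 3.0), (-12.0, 4.0), (-6.0, 2.5)]
md, se = pooled_mean_difference(trials)
lo, hi = md - 1.96 * se, md + 1.96 * se  # approximate 95% confidence interval
```

The pooled estimate always lies within the range of the individual trial effects, and its standard error is smaller than that of any single trial, which is why a meta-analysis can report a tighter confidence interval than its component studies.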

Given that these reductions fall within the normal range of blood pressure measurement variability, the efficacy of garlic as an antihypertensive remains inconclusive. It is also difficult to ascertain the implications of the Cochrane review for morbidity and mortality, as neither trial reported on clinical outcomes for patients using garlic as a hypertension treatment, and insufficient data were provided on adverse events. As such, garlic cannot be recommended as a monotherapy for the reduction of hypertension. [13] Despite this, there are other potential uses for garlic in the treatment of hypertension which encompass both patient centred care (PCC) and evidence based practice.

Different garlic preparations

Several garlic preparations are available for the treatment of hypertension, including garlic powder (as used in the Cochrane studies), garlic oil, raw garlic, cooked garlic and aged garlic extract. [5,14] Ried and colleagues suggest that aged garlic extract is the best preparation for the treatment of hypertension, and may reduce mean systolic blood pressure by 11.8 ± 5.4 mmHg over 12 weeks compared to placebo (P=0.006). Ried also noted that aged garlic extract did not interact with any other medications, particularly warfarin. [14]

Drug interactions

A number of drug interactions may occur when using garlic. Edwards et al. noted an increased risk of bleeding in patients who take garlic together with blood thinning agents such as aspirin and warfarin. The same study also noted that garlic may reduce the efficacy of HIV medications such as saquinavir, and that some patients are allergic to garlic. [15]

Patient beliefs and the placebo effect

Patient beliefs must be incorporated into clinical practice not only for adherence to the principles of PCC but also because belief itself can be therapeutic. Numerous studies have suggested that placebo-treated control groups in pharmacological investigations of hypertension frequently experience a clinically relevant decrease in blood pressure. [16]

Discussion

The findings of the Cochrane review are useful in making evidence based decisions regarding patient care, yet it is important to reflect on hypertension holistically and to consider what the review may have overlooked. Given that the Cochrane review provided insufficient data on the potential adverse effects of garlic consumption, including drug interactions, prescribing garlic as a therapy for hypertension at this stage would fail to uphold best evidence based practice and would breach ethical principles such as non-maleficence.

Different types of garlic preparation are available, and if a patient wishes to use this complementary therapy they should be guided to the most appropriate type. On a biochemical level, aged garlic extract has two main benefits for clinical practice. It contains the active and stable component S-allylcysteine, which is measurable and may allow for standardisation of dosage. [14] Aged garlic extract is also reportedly safer than other preparations and does not cause the bleeding problems associated with concurrent use of blood thinning medications such as warfarin. [15]

Patient centred care is particularly important, as patient centred approaches have numerous influences on clinical outcomes. Bauman et al. propose that PCC reduces patient anxiety and morbidity, and improves quality of life, patient engagement, and both patient and doctor satisfaction. [17] Evidence also suggests PCC increases treatment adherence and results in fewer diagnostic tests and unnecessary referrals, which is important to consider given the burden of hypertension on the health care system. [17,18] Particularly significant for all stakeholders (patients, clinicians and financiers) is the use of PCC as a dimension of preventative care. For the primary prevention of disease, clinicians should discuss risk and lifestyle factors with patients and the detrimental effects these can have on a patient's health. [2,5] Given the effect of PCC on treatment adherence, open communication and discussion with patients should be considered not only as part of treatment, but also as part of preventative medicine. Further, if a patient is willing to take garlic for hypertension, it may be a tool for further discussion between clinician and patient, especially if the treatment sees some success. Such success may open a window for the clinician to discuss the effects of lifestyle modification on health more broadly. [7]

Being a multifaceted dimension of health, PCC recognises that each patient is a unique individual with different life experiences, cultural attitudes, values and beliefs. Capraz et al. found that a proportion of patients use garlic in preference to antihypertensive drugs, whilst others use it as a complementary medicine in combination with another antihypertensive drug. [19] This affirms the potential for disparity in patient ideals. A patient may prefer garlic because of concern over the addictive potential of drugs (including antihypertensives). [19] Such concerns should be explored to ensure patients can make informed decisions about their healthcare. Other viewpoints may be more complex, for example mistrust of pharmaceutical companies, or simply a preference for natural therapies. [19] Again, these somewhat concerning perceptions are worthy of discussion with a willing patient.

Amongst all the information provided, it is worth taking the time to appreciate the role of demographic and religious factors. The social context of a patient's health may influence how a patient considers the findings of the review. [20] It may also provide an indicator of the likelihood of complementary medicine use. [20] Xue et al. suggest that females aged 18-34 who had a higher-than-average income, were well educated and had private health cover were more likely to use a complementary or alternative medicine such as garlic for hypertension. [20] Religion is also a significant determinant in patient centred care. Adherents to Jainism are unlikely to be concerned with the findings of the review, as they do not consume garlic, believing it to be an unnecessary sexual stimulant. [21] Similarly, some Hindus have been noted to avoid garlic during holy times for the same reason. [21] A clinical decision regarding garlic as a complementary medicine would have to consider these factors in consultation with the patient.

When making decisions about the course of clinical practice in consultation with a patient, it is important to remember that patients have a right to make a well-informed decision. [22] It would be appropriate to disclose the findings of this review to patients considering the use of garlic so that they can make an informed decision regarding treatment options. It is essential that patients seeking treatment for hypertension understand the true extent of the efficacy of garlic: that it has minimal (if any) blood pressure lowering effects. Patients should be advised against garlic as a monotherapy for hypertension until there is sufficient evidence to support its use, but should also be informed of their right to use garlic as a complementary medicine if they wish to do so. [13,19] Given the potential detrimental effects of some garlic preparations, the implications of these effects should also be discussed with patients. If there is a discrepancy between the views of the patient and the clinician, the clinician must remain professional, upholding codes of ethics that necessitate respect for the needs, values and culture of their patients. [23] The clinician must also provide the best clinical advice, and negotiate an outcome agreeable to both parties. [23]

Conclusion

Hypertension is the most commonly managed condition in general practice. A Cochrane review assessing the benefit of garlic for the prevention of cardiovascular morbidity and mortality in hypertensive patients found insufficient evidence to determine any effect on morbidity and mortality. [5] The review did not report on clinical outcomes for patients, and did not discuss the different garlic preparations used in the included studies, the potential effect these differences may have had on patient outcomes, or any pertinent side effects. More studies are recommended on the clinical effectiveness and side effects of different garlic preparations, particularly aged garlic extract. Patient centred care is important for achieving the best clinical outcomes and for disease prevention. [17,18] Regardless of the efficacy of garlic, clinicians are strongly encouraged to recognise a patient's cultural attitudes, values and beliefs and to incorporate them into clinical practice. This may be done by negotiating the use of garlic as a complementary medicine alongside a prescribed, recognised antihypertensive drug if the patient desires a complementary medicine. The significant effect that patient values have on healing should be recognised and utilised by clinicians and students alike. Ultimately, the use of garlic in the treatment of hypertension could serve to enhance the therapeutic alliance between clinician and patient and potentially improve clinical outcomes.

Acknowledgements

Jacob Bonanno for his assistance in proof-reading this article.

Conflict of interest: None declared.

Correspondence

A S Lane: angus.lane@my.jcu.edu.au

References

[1] Australian Bureau of Statistics. Cardiovascular disease in Australia: a snapshot, 2004-05. Canberra: Australian Bureau of Statistics; 2006. p. 1-3.

[2] Kannel WB. Blood pressure as a cardiovascular risk factor: prevention and treatment. JAMA. 1996;275:1571-6.

[3] Australian Bureau of Statistics. Causes of death, Australia, 2011. Canberra: Australian Bureau of Statistics; 2013. p. 100-9.

[4] Abernethy A, et al. The Shifting Burden of Cardiovascular Disease in Australia. The Heart Foundation. 2005.

[5] Stabler SN, Tejani AM, Huynh F, Fowkes C. Garlic for the prevention of cardiovascular morbidity and mortality in hypertensive patients. Cochrane Database of Systematic Reviews [Internet]. 2012; (8). Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD007653.pub2/abstract

[6] Smith R. “Let food be thy medicine…”. BMJ. 2004;328(7433):211.

[7] Nahas R. Complementary and alternative medicine approaches to blood pressure reduction. Can Fam Physician. 2008;54:1529-33.

[8] Rivlin RS. Historical perspective on the use of garlic. Journal Nutr 2001;131:951s-4s.

[9] NPS annual consumer surveys: findings about complementary medicines use 2008. [cited 2014 Nov 2]. Available from: http://www.nps.org.au/about-us/what-we-do/our-research/complementary-medicines/nps-consumer-survey-cms-use-findings

[10] Ried K, Frank OR, Stocks NP, Fakler P, Sullivan T. Effect of garlic on blood pressure: a systematic review and meta-analysis. BMC Cardiovascular Disorders. 2008;8:1-12. [DOI: 10.1186/1471-2261-8-13]

[11] Auer W, Eiber A, Hertkorn E, Koehrle U, Lorenz A, Mader F, Merx W, Otto G, Schmidt-Otto B, Taubenheim H. Hypertension and hyperlipidaemia: garlic helps in mild cases. Br J Clin Pract 1990;Supplement 69:3-6.

[12] Kandziora J. Blood pressure and lipid lowering effect of garlic preparations in combination with a diuretic [Blutdruck- und lipidsenkende Wirkung eines Knoblauch-Präparates in Kombination mit einem Diuretikum]. Ärztliche Forschung 1988;35:1-8.

[13] Qian X. Garlic for the prevention of cardiovascular morbidity and mortality in hypertensive patients. Int J Evid Based Healthc. 2013;11:83.

[14] Ried K, Frank OR, Stocks NP. Aged garlic extract reduces blood pressure in hypertensives: a dose-response trial. Eur J Clin Nutr 2013;67:64-70.

[15] Edwards QT, Colquist S, Maradiegue A. What’s cooking with garlic: is this complementary and alternative medicine for hypertension? J Am Acad Nurs Pract. 2005;17:381-5.

[16] Deter HC. Placebo effects on blood pressure. Berlin: Charité University; 2007 [cited 2013 Apr 19]. Available from: http://clinicaltrials.gov/show/NCT00570271.

[17] Bauman AE, Fardy HJ, Harris PG. Getting it right: why bother with patient-centred care? Med J Aust. 2003;179:253-6.

[18] Roumie CL, Greevy R, Wallston KA, Elasy TA, Kaltenbach L, Kotter K, et al. Patient centered primary care is associated with patient hypertension medication adherence. J Behav Med. 2011;34:244-53.

[19] Capraz M, Dilek M, Akpolat T. Garlic, hypertension and patient education. Int J Cardiol. 2007;121:130-1.

[20] Xue CC, Zhang AL, Lin V, Da Costa C, Story DF. Complementary and alternative medicine use in Australia: a national population-based survey. J Altern Complement Med 2007;13:643-50.

[21] Mehta N. Faith and Food: Jainism. 2009 [cited 2013 Apr 19]. Available from: http://www.faithandfood.com/Jainism.php.

[22] Faden RR, Becker C, Lewis C, Freeman J, Faden AI. Disclosure of information to patients in medical care. Medical Care. 1981;19:718-33.

[23] Australian Medical Students' Association. Australian Medical Students' Association: Code of Ethics. Australian Medical Students' Association; 2003. Available from: http://media.amsa.org.au/internal/official_documents/internal_policies/code_of_ethics_2003.pdf.