
Dealing with futile treatment: A medical student’s perspective

A 76 year old man with metastatic liver cancer lies feebly in his hospital bed surrounded by family. He’s in cardiac and respiratory failure. Attached to him are multiple lines, cannulas and monitors. There are more machines present than people. Despite this, his breathing is laboured, he’s gaunt, and he is clearly suffering. In a rare moment of lucidity, he gestures for his son to come closer and whispers: “No more.” His obviously grief-stricken son turns to the rest of the family, gestures, and heads outside to make one of the most difficult decisions he will ever make.

Confused and anxious, a fifteen year old boy sits and listens as the doctors and the family discuss the pros and cons of stopping his grandfather’s treatment. Questions keep popping up in his head: “Why is he giving up? How could they consider withdrawing treatment, the same treatment that was obviously keeping this man alive? How could anyone live with that decision?”

How do I know this? Because I was that fifteen year old boy.

It is, perhaps, ironic that modern advances in medicine have made it feasible to sustain life, and sometimes suffering, for an indefinite period. [1] The dramatic improvement in technology for life preservation has created ambiguity and dehumanised the dying process. As a result, very difficult legal and moral decisions must now be made about the transition from aggressive treatment to palliative care. [2] At times, the very existence of this technology creates a moral obligation to use it, especially when the societal belief is that to treat is to care. [3]

It was all too much back then for a teenage boy, but now, ten years down the line, is it still too much for a medical student? After all, what can we as mere fledgling trainees do to help ease those heavy burdens? Reflecting on these experiences helps address the powerlessness we feel in morally and ethically challenging cases, and serves as a reminder that even as mere ‘students’, we are capable of playing a vital therapeutic role in the care of patients whose treatments have been deemed futile.

Defining futility

Looking back at that period of time now, it is difficult to justify the last few weeks of futile treatment that my grandfather received.

How does one decide when treatment is futile? Some have defined it quantitatively, as treatment with less than a 10% chance of success, [4] while others have tried to express it qualitatively, as “treatment which provides no chance of meaningful prolongation of survival or may only briefly delay the inevitable death of the patient.” [5] The majority of physicians will deem this poor outcome unsatisfactory and thus the treatment futile; however, most families will not. [6] Whatever the definition, futile treatment is not a black-and-white concept, but must be considered as a complex composite of quality-of-life issues that need to be discussed either with the patient early in their diagnosis, or with their legal next of kin. [5]

Ethical decisions

This choice is difficult enough for clinicians with years of health-care experience, let alone medically untrained families under stress, grieving for the imminent loss of a loved one.

How are these decisions made? There are no protocols or parameters that set out when treatment should be withdrawn. While students are often taught to use the four principles of bioethics (beneficence, non-maleficence, autonomy and justice) to guide them through ethically challenging cases, [7] the general public often places a special emphasis on beneficence, and thus considers continuing treatment to be the only option. This was demonstrated in a questionnaire study by Rydvall and Lynoe (2008), which asked both physicians and the general public when they believed treatment should be withdrawn from terminally ill patients. While the majority of physicians chose to withdraw treatment early on to prevent further suffering, the majority of the general public chose to continue aggressive treatment until the very end, stating that the first task of health care professionals is to save lives. [8]

This highlights the higher expectations that the general public may have of what the health care system should achieve, [8] which can lead to points of contention and miscommunication when it comes to making critical care decisions. The role of the medical student in these cases is often that of a moderator: to listen, to discuss, and to bridge the communication gap between two understandably apprehensive parties.

The therapeutic use of self

The feeling of helplessness was overwhelming; none of the doctors paid me any attention. I was just a child after all, not worthy of their time. But he was my grandfather, not just their patient.

The ‘therapeutic use of self’ refers to the use of oneself as a therapeutic agent, integrating and empathising with the patient and their family. This can serve to alleviate fear or anxiety, provide reassurance, and obtain or provide necessary information in an attempt to relieve suffering. [9] It is particularly relevant in circumstances where treatments have a limited effect on the disease process, and where suffering is prolonged rather than prevented.

Medical school does not always formally teach the importance of connecting with patients and the therapeutic role that students play. [10] Many young aspiring doctors seek to emulate the ‘professional’ and sometimes detached demeanour of their more senior counterparts, as getting too close to patients is often seen as a one-way street towards emotional burnout. Therapeutically, however, the importance of being physically near patients and their families during their illness and distress cannot be overstated. [9]

While many students may claim never to have enough time in their schedules, they are often the most time-rich personnel. For this reason, they are often the only ones who have the opportunity to sit down with the family and the patient. This is not to take away, explain or understand the pain, but rather to act as a symbol of support, so that they know we are witnesses to their suffering and that they have not been abandoned. [10]

Withdrawing versus withholding

The debate went on throughout the night: “We’re abandoning him?”

“No, it’s for the best, he doesn’t need to go through any more of this, the doctor said there’s no way he’s going to get better.”

“You want to stop all treatments? We should be trying new things not stopping old treatments!”

Traditional medical training places an emphasis on the acquisition of skills and expertise to help ‘fix’ the patients or their diseases. Interestingly, many clinicians are more comfortable withholding treatment – that is, not beginning new aggressive treatments – than stopping currently initiated treatments. [1,11,12] This may be because withdrawing attaches a feeling of responsibility and culpability for the death. [3,13] To avoid this, clinicians will often only withdraw support when it becomes clear that death will occur regardless of further treatment. In this way, a “causative link between non-treatment and death is avoided.” [14]

Increasingly in today’s medical system, a simple ‘fix’ does not exist for many patients and their diseases. For these patients, success is judged not on the amelioration of the pathological process, but instead on whether a good quality of life can be achieved in spite of the presence of chronic disease. Various religions and cultures have differing views on quality-of-life arguments, adding a further layer of complexity to the decision-making process. Therefore, it is important to take the background of the patient and their relatives into consideration. [3]

Similarly, individual variation exists between physicians: although each will use the most current evidence available to plan for the best outcome, each is influenced by their own ethical, social, moral and religious views. [3] This, perhaps, is why the modern curriculum has incorporated elements of personal reflection, professionalism and the social foundations of medicine, guiding students to think more reflectively and sensitively and allowing for a more holistic, patient-centred approach.

Moral decisions

“He’s not going to get better,” I was told. “The doctors said we should stop the treatments because all they’re doing is causing him to suffer.” Even I could understand that decision when it was justified to me like that. Unfortunately, others do not necessarily see it that way.

Moral dilemmas often arise when clinicians tell relatives that they believe treatment will not help the patient recover, and the option is given to withdraw aggressive treatment in favour of palliative care. Many perceive continued treatment as not only life-sustaining but also potentially curative, and thus moving on to palliative care is often interpreted as a choice to end their loved one’s life. [5] Some feel it is better to watch their relative die while undergoing treatment than to live with the belief that they consented to death. [3,5] Unsurprisingly, relatives will often demand that “everything be done” to preserve life. [5,15,16]

It is important to remind the family that withdrawing futile treatment does not mean withdrawing all treatment: palliative management, including analgesia, respect for dignity, and support, will always be provided throughout the ordeal. [2] We must be mindful that, in this era of medical advancement, it is quite common for caring for a chronically ill loved one to become the sole purpose of the carer’s life. The health care system has generated a ‘patient support system’ in which the carer has one role and is deprived of energy and time for anything else, forgoing career, friends and hobbies. It is perhaps unsurprising that, towards the end of a patient’s life, the carer may be unwilling to let go of the only remaining source of meaning in their life. [6]

These difficult decisions often do not need to be made at all if adequate preparation has occurred beforehand: documenting advance care directives and arranging a durable power of attorney before the patient’s condition declines can make a world of difference for both the family and the health care staff. [13]

Final thoughts

Would I have done anything differently if I had the maturity and the training that I have now?

Medical students often feel that completing a full history and examination is the extent of what they can offer patients; [16] however, this is often not the case. Their support and knowledge are invaluable to patients and their families. Students play a vital therapeutic role in assisting the patient and family to come to terms with the limitations of modern medicine, and to recognise that extension of the dying process undermines what both the medical team and the family ultimately want – a dignified and peaceful death.

It is easy to look objectively at a patient with whom we have had no past relationship and decide what the right choice is. But for families, it will never be that straightforward when a decision has to be made about a loved one. During these times, as medical students, we need more than the ability to communicate effectively; we need the mental fortitude to step into that dark and difficult place with the patient and their family, to truly connect, and to be there for them not only with our book smarts, but as figures of support and strength.

Never underestimate the therapeutic potential of who we are. While we may lack the mountains of factual knowledge of our senior colleagues, we have the potential to excel in the more humanistic aspects of patient care. By learning to approach these cases with compassion and humility, we can hope that our presence and understanding will render healing in situations that cannot be cured by our medical knowledge. [10]

As he requested, treatment was withdrawn and palliative care was started. The 76 year old grandfather, father, and husband returned home and passed away in a dignified and peaceful way, surrounded by family.

Acknowledgements

The author would like to thank his grandfather, who gave him the world by teaching him how to learn.

The author would also like to acknowledge the fantastic feedback provided by both reviewers, which allowed him to gain a greater understanding of this fascinating topic.

Conflict of interest

None declared.

Correspondence

M Li: michael.li@anu.edu.au

References
[1] Slomka J. The negotiation of death: clinical decision making at the end of life. Soc Sci Med. 1992; 35(3): 251-9.
[2] Kasman DL. When is medical treatment futile? A guide for students, residents, and physicians. J Gen Intern Med. 2004; 19(10): 1053-6.
[3] Reynolds S, Cooper AB, McKneally M. Withdrawing life-sustaining treatment: ethical considerations. Surg Clin North Am. 2007; 87(4): 919-36, viii.
[4] Howard DS, Pawlik TM. Withdrawing medically futile treatment. J Oncol Pract. 2009; 5(4): 193-5.
[5] Murphy BF. What has happened to clinical leadership in futile care discussions? Med J Aust. 2008; 188(7): 418-9.
[6] Hardwig J. Families and futility: forestalling demands for futile treatment. J Clin Ethics. 2005; 16(4): 335-44.
[7] Beauchamp TL, Childress JF. Principles of biomedical ethics. New York: Oxford University Press; 1994.
[8] Rydvall A, Lynoe N. Withholding and withdrawing life-sustaining treatment: a comparative study of the ethical reasoning of physicians and the general public. Crit Care. 2008; 12(1): R13.
[9] Bartholomai S. Therapeutic Use of Self/Building a Therapeutic Alliance. In: Hospital I, editor; 2008.
[10] Kearsley JH. Therapeutic Use of Self and the Relief of Suffering. CancerForum. 2010; 34(2).
[11] Pawlik TM, Curley SA. Ethical issues in surgical palliative care: am I killing the patient by “letting him go”? Surg Clin North Am. 2005; 85(2): 273-86, vii.
[12] Iserson KV. Withholding and withdrawing medical treatment: an emergency medicine perspective. Ann Emerg Med. 1996; 28(1): 51-4.
[13] Scanlon C. Ethical concerns in end-of-life care. Am J Nurs. 2003; 103(1): 48-55; quiz 6.
[14] Seymour JE. Negotiating natural death in intensive care. Soc Sci Med. 2000; 51(8): 1241-52.
[15] Foster LW, McLellan LJ. Translating Psychosocial Insight into Ethical Discussions Supportive of Families in End-of-Life Decision Making. Social Work in Health Care. 2002; 35(3): 37-51.
[16] Frank J. Refusal: deciding to pull the tube. J Am Board Fam Med. 2010; 23(5): 671-3.


Health care to meet the future needs of New South Wales

When thinking about innovation and ways to transform how health care is delivered to the patients of today and tomorrow, the importance and growing potential of e-Health spring to mind.

In Opposition, and now as Minister for Health and Minister for Medical Research, I am absolutely convinced of the enormous gains to be made using e-Health technology, whether through electronic patient records, telehealth connecting clinicians in remote settings, or the management of assets, other important clinical information and governance arrangements.

My years in Opposition did much to foster my knowledge of, understanding of, and passion for the health system, and I had the rewarding opportunity to meet with leaders in health care to discuss both challenges and future possibilities.

In 2007, as NSW Shadow Minister for Health, I was humbled to be invited to speak at Hewlett Packard’s Health and Life Sciences Symposium. Hosted in San Diego, the conference connected specialists in the e-Health field from across the globe to discuss its impacts and contribution to the wider health agenda.

The conference was incredibly inspiring for an aspiring health minister. While many there were caught up in the gadgetry, I found myself thinking a lot about the experiences of patients back home and how those experiences could be improved by advances in technology.

Now, the NSW health system boasts one of the largest information and communication technology (ICT) portfolios of any government agency or corporate organisation in this country.

ICT has an important role to play in the delivery of health services, whether in acute hospital care, preventative health, patient self-care or treatments provided in a range of health care settings – in a patient’s home, in the community, in a private or not-for-profit facility, or through the public health system.

And these services will be delivered by a range of health professionals, including those of you reading who hope to enter these fields.

For many years, I have been committed to enhancing e-Health services in this state as it is these very services that put the patient at the forefront while boosting contemporary methods of care.

NSW Health will spend more than $1.5 billion over the next 10 years on ICT to improve both care and patient outcomes across the state.

We’ve achieved a lot in this space in the past 12-18 months and are setting the foundations to do great things for the benefit of patients in the future.

We have developed numerous innovative e-Health programs across the health system, including:

  • We are using telehealth to link patients in rural and regional NSW with face-to-face specialist care in tertiary hospitals, making services available anywhere, anytime.
  • We are collaborating with clinicians by activating voice recognition software in emergency departments to free-up precious time for patient care.
  • We have established real-time emergency department waiting time data for the community, published online and updated every fifteen minutes.
  • We have technology that now provides instant digital images, which can be reviewed and reported by specialist doctors even before the patient is back in the ward. This slashes waiting times for results and is seeing treatments delivered earlier than ever before.
  • We are developing and introducing apps and tablet technology to provide instant access to clinical research and digital medical libraries for better information sharing between clinicians and their colleagues.
  • We are supporting trials where electronic health records have revolutionised the speed and accuracy of medical information sharing between hospital wards and between patients and their general practitioners.
  • We are using technology to better track financial and performance management, not only in clinical incident monitoring, but in preparations for an Activity Based Funding model and to ensure value for money for every tax-payer dollar spent.

These are not future ambitions. These are services being utilised today to ensure our patients are not just well-cared for but well-informed and connected to health services.

As the Minister for Health, I want to see these initiatives driving better performance in our state’s hospitals – leading to better outcomes for patients and their families.

Telehealth remains a particular passion of mine. From what I am seeing utilised by clinicians on the ground, it is an impressive tool and one that sees patients receiving the best possible care with treatment transcending geographical barriers.

Recently, I was in Canberra to launch the Improving Critical Care Outreach and Training in the ACT and South East NSW project.

This pilot telehealth system will connect Canberra Hospital emergency department and helicopter base with hospitals at Queanbeyan, Moruya, Batemans Bay and Cooma.

It utilises overbed cameras, microphones and speakers and has viewing monitors positioned in the resuscitation area of the Emergency Department in the NSW spoke sites. The system uses the ACT and NSW videoconferencing networks to transmit images and vital signs to the referral or hub site in the ACT.

Telehealth initiatives have become a key component of clinical care and improving access to services in NSW.

We currently have more than 600 videoconferencing locations across the state, which are used for a range of services in the areas of mental health, critical and emergency care, oncology, radiology, diabetic foot care, genetic services and chronic disease management.

The NSW Government is committed to supporting innovative projects such as this for the benefit of patients across NSW. We currently oversee the provision of telehealth technology in a variety of health facilities across regional NSW including Goulburn, Queanbeyan, Yass, Braidwood, Crookwell, Moruya, Bega, Batemans Bay, Cooma, Pambula and Bombala.

Telehealth affords patients the opportunity to be treated locally with the support, guidance and expertise of clinicians at tertiary teaching hospitals.

Do medical students have a role to play in the state’s e-Health agenda? Absolutely.

When I started out in politics almost two decades ago, e-Health was considered the stuff of science fiction. Now, we’re seeing its use move from the bench to the bedside for the benefit of patients.

As our tech-savvy workforce increases, we will get smarter and more innovative.

I want a resilient system but one that is flexible and able to innovate to achieve greater efficiencies.

Above all, I want a health system that can deliver the highest quality care to patients.

Health care often gets lost in statistics, but technology does not substitute for the high quality care provided by clinicians; rather, it enhances it.

By providing both current and future clinicians with the modern tools and information they need, we are going a long way to empowering them to achieve much more for their patients.


Blood culture negative endocarditis – a suggested diagnostic approach

This case report describes a previously healthy male patient with a subacute presentation of severe constitutional symptoms, progressing to acute pulmonary oedema, and a subsequent diagnosis of blood culture negative endocarditis with severe aortic regurgitation. Blood culture negative endocarditis represents an epidemiologically varying subset of endocarditis patients, as well as a unique diagnostic dilemma. The cornerstones of diagnosis lie in careful clinical assessment and exposure history, as well as knowledge of common aetiologies and appropriate investigations. The issues of clinically informed judgement and having a systematic approach to the diagnosis of these patients, especially within an Australian context, are discussed. Aetiological diagnosis of these patients modifies and directs treatment, which is fundamental in minimising the high morbidity and mortality associated with endocarditis.

Case

Mr NP was a previously healthy, 47 year old Caucasian male who presented to a small metropolitan emergency department with two days of severe, progressive dyspnoea which was subsequently diagnosed as acute pulmonary oedema (APO). This occurred on a three month background of dry cough, malaise, lethargy and an unintentional weight loss of 10 kilograms.

History

Apart from the aforementioned, Mr NP’s history of the presenting complaint was unremarkable. In the preceding three months he had been treated in the community for pertussis and atypical pneumonia, with no significant improvement. Notably, this therapy included two courses of antibiotics (the patient could not recall the specifics), with the latest course completed the week prior to admission. He had no relevant past medical or family history, specifically denying a history of tuberculosis, malignancy, and heart and lung disease. He was on no current medications and had no known allergies; he denied intravenous or other recreational drug use, reported minimal alcohol use, and had never smoked.

Mr NP lived in suburban Melbourne with his wife and children. He kept two healthy dogs at home. There had been no sick contacts and no obvious animal or occupational exposures, although he noted that he occasionally stopped cattle trucks on the highway as part of his occupation, but had no direct contact with the cattle. He had travelled to Auckland, New Zealand for two weeks, two months prior to presentation, with no stopovers, notable exposures or travel elsewhere in the country.

During the initial assessment of Mr NP’s acute pulmonary oedema, blood cultures were drawn, with a note made of the oral antibiotics taken during the preceding week. A transthoracic echocardiogram (TTE) found moderate aortic regurgitation with left ventricular (LV) dilatation. A subsequent transoesophageal echocardiogram (TOE) noted severe aortic regurgitation, a one centimetre vegetation on the aortic valve with destruction of the coronary leaflet, and LV dilatation with a preserved ejection fraction of greater than 50%. Blood cultures, held for 21 days, revealed no growth.

Empirical antibiotics were started and Mr NP was transferred to a large quaternary hospital for further assessment and aortic valve replacement surgery.

Table 1. A suggested schema for assessing exposures to infectious diseases during the clinical history, illustrated using the commonly used CHOCOLATES mnemonic.

Exposure Assessment Schema: CHOCOLATES mnemonic

Country of origin
Household environment
Occupation
Contacts
Other: immunisations, intravenous drug use, immunosuppression, splenectomy, etc.
Leisure activities/hobbies
Animal exposures
Travel and prophylaxis prior
Eating and drinking
Sexual contact

Examination

Examination of Mr NP, after transfer and admission, showed an alert man, pale but with warm extremities, with no signs of shock or sepsis. Vital signs revealed a temperature of 36.2°C, heart rate of 88 beats per minute, blood pressure of 152/50 mmHg (wide pulse pressure of 102 mmHg) and respiratory rate of 18 breaths per minute, saturating at 99% on room air.

No peripheral stigmata of endocarditis were noted, and there was no lymphadenopathy. Examination of the heart and lungs noted a loud diastolic murmur through the entire precordium, which increased with full expiration, but was otherwise normal with no signs of pulmonary oedema. His abdomen was soft and non-tender with no organomegaly noted.

Workup and Progress

Table 2 shows relevant investigations and results from Mr NP.

Table 2. Table outlining the relevant investigation results for Mr NP performed for further assessment of blood culture negative endocarditis.


 

Investigation: Result

Blood Cultures
Repeat blood cultures x 3 (on antibiotics): no growth to date; held for 21 days

Autoimmune
Rheumatoid factor: weak positive – 16 [N <11]
ANA: negative
ENA: negative

Serology
Q fever: phase I negative; phase II negative
Bartonella: negative
Atypical organisms (Legionella, Mycoplasma): negative

Valve Tissue (post AVR)
Histopathology: non-specific chronic inflammation and fibrosis
Tissue microscopy and culture: Gram positive cocci seen; no growth to date
16S rRNA: Streptococcus mitis
18S rRNA: negative

AVR – Aortic valve replacement; ANA – Antinuclear antibodies; ENA – Extractable nuclear antigens

Empirical antibiotics for culture negative endocarditis were initiated during the initial presentation and were continued after transfer and admission:

Benzylpenicillin for streptococci and enterococci

Doxycycline for atypical organisms and zoonoses

Ceftriaxone for HACEK organisms

Vancomycin for staphylococcus and resistant gram positive bacteria.

During his admission, doxycycline was ceased after negative serology testing and microscopy identifying Gram positive cocci. Benzylpenicillin was changed to ampicillin after a possible allergic rash. Ceftriaxone, ampicillin and vancomycin were continued until the final 16S rRNA result from valvular tissue identified Streptococcus mitis, a viridans group streptococcus.

The patient underwent a successful aortic valve replacement (AVR) and was routinely admitted to the intensive care unit (ICU) post cardiac surgery. He developed acute renal failure, most likely due to acute tubular necrosis from a combination of bacteraemia, angiogram contrast, vancomycin, and the stresses of surgery and bypass. Renal function gradually returned after resolution of the contributing factors, without the need to withdraw vancomycin, and Mr NP was discharged to the ward on day six of his ICU stay.

Mr NP showed clinical improvement, reflected in a declining white cell count and a return to normal renal function. He was discharged successfully with Hospital in the Home for continued outpatient intravenous vancomycin, for a combined total duration of four weeks, and for follow-up review in clinic.

Discussion

There is an old medical adage that “persistent bacteraemia is the sine qua non of endovascular infection.” The corollary is that persistently positive blood cultures are a sign of infection within the vascular system. In most clinical situations this is either primary bacteraemia or infective endocarditis, although other interesting, but less common, differentials exist (e.g. septic thrombophlebitis/Lemierre’s syndrome, septic aneurysms, aortitis). Consequently, blood culture negative endocarditis (BCNE) is both an oxymoron and a unique clinical scenario.

BCNE can be strictly defined as endocarditis (as per Duke criteria) without known aetiology after three separate blood cultures with no growth after at least seven days, [1] although less rigid definitions have been used throughout the literature. The incidence is approximately 2-7% of endocarditis cases, although it can be as much as 31%, due to multiple factors such as regional epidemiology, the administration of prior antibiotics and the definition of BCNE used. [1-3] Importantly, the morbidity and mortality associated with endocarditis remains high despite multiple advances, and early diagnosis and treatment remains fundamental. [1,4,5]

The most common reason for BCNE is prior antibiotic treatment before blood culture collection, [1-3] as was the case with Mr NP. Additional associated factors for BCNE include exposure to zoonotic agents, underlying valvular disease, right-sided endocarditis and presence of a pacemaker. [1,3]

Figure 1 shows the aetiology of BCNE; Table 3 lists clinical associations and epidemiology of common organisms which may be identified during assessment. Notably, there is a high prevalence of zoonotic infections, as well as a large portion remaining unidentified. [2] Additionally, the incidence of the usual endocarditis organisms is comparatively high; in most of these cases the organisms have been suppressed by prior antibiotic use. [2]

Table 3. Common aetiologies in BCNE and associated clinical features and epidemiology. [1,2,5-9]

Aetiology – Clinical Associations and Epidemiology
Q Fever (Coxiella burnetii) – Zoonosis: contact with farm animals (commonly cattle, sheep, and goats). Farmers, abattoir workers, veterinarians, etc. Check for vaccination in aforementioned high risk groups.
Bartonella spp. – Zoonosis: contact with cats (B henselae); transmitted by lice, poor hygiene, homelessness (B quintana).
Mycoplasma spp. – Ubiquitous. Droplet spread from person to person, increased with crowding. Usually causes asymptomatic or respiratory illness, rarely endocarditis.
Legionella spp. – Usually L pneumophila; L longbeachae common in Australia. Environmental exposures through drinking/inhalation. Colonises warm water and soil sediments. Cooling towers, air conditioners, etc. help aerosolise bacteria. Urinary antigen only for L pneumophila serogroup 1. Usually respiratory illness, rarely endocarditis.
Tropheryma whipplei – Associations with soil, animal and sewerage exposures. Wide spectrum of clinical manifestations. Causative organism of Whipple’s Disease (malabsorptive diarrhoeal illness).
Fungi – Usually Candida spp. Normal GIT flora. Associated with candidaemia, HIV/immunosuppression, intravascular device infections, IVDU, prosthetic valves, ICU admission, parenteral feeding, broad spectrum antibiotic use. Associated with larger valvular vegetations.
HACEK organisms* – Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella spp. Fastidious Gram negative rods. Normal flora of mouth and upper GI. Associated with poor dentition and dental work. Associated with larger valvular vegetations.
Streptococcus viridans group* – Umbrella term for alpha haemolytic streptococci commonly found as mouth flora. Associated with poor dentition and dental work.
Streptococcus bovis* – Associated with breaches of colonic mucosa: colorectal carcinoma, inflammatory bowel disease and colonoscopies.
Staphylococcus aureus* – Normal skin flora. IVDU, intravascular device infections, post-operative valve infections.

IVDU – Intravenous drug user; GIT – Gastrointestinal tract.

* Traditional IE organisms. Most BCNE cases in which the usual IE bacteria are isolated occur where antibiotics were given before culture. [1-3]


 

The HACEK organisms (Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella) are fastidious (i.e. difficult to grow) Gram negative oral flora. Consequently (and as a general principle for other fastidious organisms), these slow-growing organisms tend to produce both more subacute presentations and larger vegetations at presentation. They have traditionally been associated with culture negative endocarditis, but advancements in microbiological techniques mean that the majority of these organisms can now be cultured within five days, and they now have a low incidence in true BCNE. [1]

Q fever is of particular importance as it is both the most commonly identified aetiology of BCNE and an important offender in Australia, given the large presence of primary industry and the consequent potential for exposure. [1-3,6] Q fever is caused by the Gram negative obligate intracellular bacterium Coxiella burnetii (named after the Australian Nobel laureate Sir Frank Macfarlane Burnet), and is associated in particular with various farm animal exposures (see Table 3). The manifestations of this condition are variable and nonspecific, and the key to diagnosis often lies in an appropriate index of suspicion and an exposure history. [6] In addition, Q fever is a very uncommon cause of BCNE in Northern Europe and the UK, and patient exposures in this region may be less significant. [1,2,6]

The clinical syndrome is separated into acute and chronic Q fever. This differentiation is important for two reasons: firstly, Q fever endocarditis is a manifestation of chronic, not acute, Q fever; and secondly, because of its implications for serological testing. [6] Q fever serology is the most common diagnostic method used, and is separated into phase II (acute Q fever) and phase I (chronic Q fever) serologies. Accordingly, to investigate Q fever endocarditis, phase I serology must be performed. [6]

Given the large incidence of zoonotic aetiologies, the modified Duke criteria suggest that a positive blood culture or serology for Q fever be classed as a major criterion for the diagnosis of endocarditis. [10] However, Lamas and Eykyn [3] found that, even with the modifications to the traditional Duke criteria, these remain a poor predictor of BCNE, identifying only 32% of their pathologically proven endocarditis patients. Consequently, they suggest the addition of minor criteria to improve sensitivity, making particular note of rapid onset splenomegaly or clubbing, which can occur especially in patients with zoonotic BCNE. [3]

Figure 2 outlines the suggested diagnostic approach, modified from the original detailed by Fournier et al. [2] The initial steps are aimed at high incidence aetiologies and to rule out non-infectious causes, with stepwise progression to less common causes. Additionally, testing of valvular tissue plays a valuable role in aiding diagnosis in situations where this is available. [1,2,11,12]

16S ribosomal RNA (rRNA) gene sequence analysis and 18S rRNA gene sequence analysis are broad range PCR tests which can be used to amplify genetic material that may be present within a sample. Specifically, they identify sections of the rRNA gene that are highly conserved against mutation and specific to a species of organism. When a genetic sequence has been identified, it is compared against a library of known sequences to identify the organism, if listed. 16S analysis identifies prokaryotic bacteria, while 18S is the eukaryotic (fungal) equivalent. These tests can play a fundamental role in identifying the aetiology where cultures are unsuccessful, although they must be interpreted with caution and clinical judgement, as their high sensitivity makes them susceptible to contamination and false positives. [11-13] Importantly, antibiotic sensitivity testing cannot be performed on these results, as no living microorganism is isolated. This may necessitate broader spectrum antibiotics to allow for potential unknown resistance – as was demonstrated by the choice of vancomycin in the case of Mr NP.

The best use of 16S and 18S rRNA testing in the diagnosis of BCNE is upon valvular tissue; testing of blood is not very effective and not widely performed. [2,11,13] Nevertheless, 18S rRNA testing on blood may be appropriate in certain situations where first line BCNE investigations are negative and fungal aetiologies become much more likely. [2] This can be prudent given that most empirical treatment regimens do not include antifungal cover.

Fournier et al. [2] suggested the use of a Septifast© multiplex PCR (F Hoffmann-La Roche Ltd, Switzerland) – a PCR kit designed to identify 25 common bacteria often implicated in sepsis – in patients who have had prior antibiotic administration. Although studies have shown its usefulness in this context, it has been excluded from Figure 2 because, to the best of the author’s knowledge, this is not a commonly used test in Australia. The original diagnostic approach from Fournier et al. [2] identified aetiology in 64.6% of cases, with the remainder being of unknown aetiology.

Conclusion

BCNE represents a unique and interesting, although uncommon, clinical scenario. Knowledge of the common aetiologies and appropriate testing underpins the timely and effective diagnosis of this condition, which in turn modifies and directs treatment. This is especially important due to the high morbidity and mortality rate of endocarditis and the unique spectrum of aetiological organisms which may not be covered by empirical treatment.

Acknowledgements

The author would like to thank Dr Adam Jenney and Dr Iain Abbott for their advice regarding this case.

Consent declaration

Informed consent was obtained from the patient for publication of this case report and accompanying figures.

Conflict of interest

None declared.

Correspondence

S Khan: sadid.khan@gmail.com

References

[1] Raoult D, Sexton DJ. Culture negative endocarditis. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[2] Fournier PE, Thuny F, Richet H, Lepidi H, Casalta JP, Arzouni JP, Maurin M, Célard M, Mainardi JL, Caus T, Collart F, Habib G, Raoult D. Comprehensive diagnostic strategy for blood culture negative endocarditis: a prospective study of 819 new cases. CID. 2010; 51(2):131-40.
[3] Lamas CC, Eykyn SJ. Blood culture negative endocarditis: analysis of 63 cases presenting over 25 years. Heart. 2003;89:258-62.
[4] Wallace SM, Walton BI, Kharbanda RK, Hardy R, Wilson AP, Swanton RH. Mortality from infective endocarditis: clinical predictors of outcome.
[5] Sexton DJ. Epidemiology, risk factors & microbiology of infective endocarditis. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[6] Fournier PE, Marrie TJ, Raoult D. Diagnosis of Q Fever. J Clin Microbiol. 1998; 36(7): 1823.
[7] Apstein MD, Schneider T. Whipple’s Disease. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[8] Baum SG. Mycoplasma pneumonia infection in adults. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[9] Pedro-Botet ML, Stout JE, Yu VL. Epidemiology and pathogenesis of Legionella infection. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[10] Li JS, Sexton DJ, Mick N, Nettles R, Fowler VG Jr, Ryan T, Bashore T, Corey GR. Proposed modifications to the Duke criteria for the diagnosis of infective endocarditis. CID. 2000; 30:633-38.
[11] Vondracek M, Sartipy U, Aufwerber E, Julander I, Lindblom D, Westling K. 16S rDNA sequencing of valve tissue improves microbiological diagnosis in surgically treated patients with infective endocarditis. J Infect. 2011; 62(6):472-78
[12] Houpikian P, Raoult D. Diagnostic methods: Current best practices and guidelines for identification of difficult-to-culture pathogens in infective endocarditis. Infect Dis Clin N Am. 2002; 16: 377-92.
[13] Muñoz P, Bouza E, Marín M, Alcalá L, Créixems MR, Valerio M, Pinto A. Heart Valves Should Not Be Routinely Cultured. J Clin Microbiol. 2008; 46(9):2897.


Metastatic melanoma: a series of novel therapeutic approaches

The following report documents the case of a 63 year old male with metastatic melanoma following a primary cutaneous lesion. Investigation into the molecular basis of melanoma has identified crucial regulators in melanoma cell proliferation and survival, leading to the inception of targeted treatment and a shift toward personalised cancer therapy. Recently, the human monoclonal antibody ipilimumab and the targeted BRAF inhibitor vemurafenib have demonstrated promising results in improving both progression-free and overall survival.

Introduction

A diagnosis of metastatic melanoma confers a poor prognosis, with a median overall survival of six to ten months. [1-3] This aggressive disease process is of particular relevance in Australia, owing to a range of adverse risk factors including a predominantly fair-skinned Caucasian population and high levels of ultra-violet radiation. [4-6] While improved awareness and detection have helped to stabilise melanoma incidence rates, Australia and New Zealand continue to display the highest incidence of melanoma worldwide. [4-7] Clinical trials have led to two breakthroughs in the treatment of melanoma: ipilimumab, a fully human monoclonal antibody, and vemurafenib, a targeted inhibitor of BRAF V600E.

Case Presentation

The patient, a 63 year old male, initially presented to his general practitioner ten years ago with an enlarging pigmented lesion in the centre of his back. Subsequent biopsy revealed a grade IV cutaneous melanoma with a Breslow thickness of 5mm. A wide local excision was performed, with primary closure of the wound site. Sentinel node biopsy was not carried out, and a follow-up scan six months later found no evidence of melanoma metastasis.

In mid-2010, the patient noticed a large swelling in his left axilla. A CT/PET scan demonstrated increased fluorodeoxyglucose avidity in this area, and an axillary dissection was performed to remove a tennis ball-sized mass that was histopathologically identified wholly as melanoma. A four week course of radiotherapy was commenced, followed by six weeks of interferon therapy. However, treatment was discontinued when he developed acute abdominal pain caused by pancreatitis.

CT/PET scans were implemented every three months; in early 2011 pancreatic metastases were detected.

The tumour was tested for a mutation in BRAF, a protein in the mitogen-activated protein kinase (MAPK) signaling pathway. BRAF mutations are found in approximately half of all cutaneous melanomas, and this is the target of a recently developed inhibitor, vemurafenib. [8-11] The patient’s test was negative, and he was commenced on a clinical trial of nanoparticle albumin bound (nab) paclitaxel. He completed a nine month course of nab-paclitaxel, experiencing many adverse side effects including extreme fatigue, nausea, and arthralgia. A CT/PET scan demonstrated almost complete remission of his pancreatic lesions. Despite this progress, three months after completing treatment a follow-up CT/PET scan revealed liver metastases, which were confirmed by biopsy.

In 2012 he was commenced on the novel immunotherapy agent ipilimumab, which involved a series of four infusions of 10 mg/kg given three weeks apart. One week after his second dose, he was admitted to hospital with a two day history of sustained high fevers above 40°C, rigors, sweats, and diffuse abdominal pain. These symptoms were preceded by a week-long mild coryzal illness. On investigation he had elevated liver enzymes, at more than double the reference range, and his blood cultures were negative. His symptoms settled within eight days, and he was discharged after an admission of two weeks in total.

The patient remains hopeful about his future, and is optimistic about the ‘fighting chance’ that this novel therapy has presented.

Discussion

The complexity of the melanoma pathogenome poses a major obstacle in developing efficacious treatments; however, the identification of novel signaling pathways and oncogenic mutations is challenging this paradigm. [12,13] The resultant development of targeted treatment strategies has clinical importance, with a number of new molecules targeting melanoma mutations and anomalies specifically. The promise of targeted treatments is evident for a number of other cancers, with agents such as trastuzumab in HER-2 positive breast cancer and imatinib in chronic myelogenous leukaemia now successfully employed as first-line options. [14,15]

This patient’s initial treatment with interferon alpha aimed to eradicate remaining micro-metastatic disease following tumour resection. While interferon-alpha has shown disease-free survival benefit, studies have failed to consistently demonstrate significant improvement in overall survival. [16-18]

Favourable outcomes in progression-free and median survival have been indicated for the taxane-based chemotherapy nab-paclitaxel that he next received; however, it has also been associated with concerning toxicity and side effect profiles. [19]

Ipilimumab is a promising development in immunotherapy for metastatic melanoma, with significant improvement in overall survival reported in two recent phase III randomised clinical trials. [20,21] This novel monoclonal antibody modulates the immune response by blocking cytotoxic T lymphocyte-associated antigen 4 (CTLA-4), which competitively binds with B7 on antigen presenting cells to prevent secondary signaling. When ipilimumab occupies CTLA-4, the immune response is upregulated and host versus tumour activity is improved. Native and tumour-specific immune response modification has led to a profile of adverse events associated with ipilimumab that is different from those seen with conventional chemotherapy. Immune-related dermatologic, gastrointestinal, and endocrine side effects have been observed, with the most common immune specific adverse events being diarrhoea, rash, and pruritus (see Table 1). [20,21] The resulting patterns of clinical response to ipilimumab also differ from conventional therapy. Clinical improvement and favourable outcomes may manifest as disease progression prior to response, durable stable disease, development of new lesions while the original tumours abate, or a reduction of baseline tumour burden without new lesions. [22]

Recently discovered clinical markers may offer predictive insight into ipilimumab benefit and toxicity, and their identification is a key goal in the development of personalised medicine. Pharmacodynamic effects on gene expression have been demonstrated, with baseline and post-treatment alterations in CD4+ and CD8+ T cells implicated in both the likelihood of relapse and the occurrence of adverse events. [23] Novel biomarkers that may be associated with a positive clinical response include immune-related tumour biomarkers at baseline and a post-therapy increase in tumour-infiltrating lymphocytes. [24]

Overall survival was reported as 10 and 11.2 months for the two phase III studies compared with 6.4 and 9.1 months in the control arms. [20,21] Furthermore, recently published data on the durability of response to ipilimumab has indicated five year survival rates of 13%, 23%, and 25% for three separate earlier trials. [25]

Somatic genetic alterations in the MAPK signaling cascade have been identified as key oncogenic mutations in melanoma, and research into independent BRAF driver mutations has resulted in the development of highly selective molecules such as vemurafenib. Vemurafenib inhibits the constitutive activation of mutated BRAF V600E, thereby preventing the upregulated downstream effects that lead to melanoma proliferation and survival. [26,27] A multicentre phase II trial demonstrated a median overall survival of 15.9 months, and a subsequent phase III randomised clinical trial was ended prematurely after pre-specified statistical significance criteria were attained at interim analysis. [8,9] Crossover from the control arm to vemurafenib was recommended by an independent board for all surviving patients. [8] Conversely, in patients with mutated upstream RAS and wild-type BRAF, the use of vemurafenib is inadvisable on the basis of preclinical models. In these patients, BRAF inhibition may lead to paradoxical activation of the MAPK pathway, driving tumourigenesis rather than promoting downregulation. [13] The complexity of BRAF signaling and reactivation of the MAPK pathway is highly relevant in the development of intrinsic and acquired drug resistance to vemurafenib. Although the presence of the V600E mutation generally predicts response, the acquisition of secondary mutations has resulted in short-lived durations of response. [28]

Ipilimumab and vemurafenib, when used individually, clearly demonstrate improvements in overall survival. Following the success of these two agents, a study examining combination therapy in patients testing positive to the BRAF V600E mutation is currently underway. [29]

With the availability of new treatments for melanoma, the associated health care economics of niche market therapies need to be acknowledged. The cost of these drugs is likely to be high, making them difficult to subsidise in countries such as Australia where public pharmaceutical subsidy schemes exist. Decisions about public subsidy of drugs are often made on cost-benefit analyses, which may be inadequate in expressing the real-life benefits of prolonging a patient’s lifespan in the face of a disease with a dismal prognosis. Non-subsidy may lead to these medicines being available only to those who can afford them, and it is concerning when treatment becomes a commodity stratified by individual wealth rather than need. This problem surrounding novel treatments is only expected to increase across many fields of medicine with the torrent of medical advances to come.

Conclusion

This case illustrates the shift in cancer therapy for melanoma towards a model of personalised medicine, where results of genomic investigations influence treatment choices by potentially targeting specific oncogenes driving the cancer.

Conflict of interest

None declared.

Consent declaration

Informed consent was obtained from the patient for publication of this case report.

Correspondence

J Read: jazlyn.read@griffithuni.edu.au

 


An unusual case of bowel perforation in a 9 month old infant

In Australia, between 2009 and 2010, almost 290,000 cases of suspected child abuse and neglect were reported to state and territory authorities. Child maltreatment may present insidiously, with signs that cannot be elicited until after a culmination of events. Ms. LW, a nine month old Indigenous female, presented to the Alice Springs Hospital emergency department (ED) with bloody diarrhoea. A provisional diagnosis of viral gastroenteritis was made and she was managed with fluids, to which her vital signs responded positively. She was discharged six hours after presentation but returned three days later in a worsened condition with a grossly distended abdomen. Exploratory laparotomy found a perforated jejunum, which was deemed a non-accidental injury. This case outlines pitfalls in collateral communication, in particular the lack of use of an interpreter or Aboriginal health worker. We also emphasise the onus on junior doctors to practise reflectively despite the burdens of the ED, so that they do not miss key diagnostic clues. Early detection of chronic maltreatment is important in preventing toxic stress to the child, which has been shown to contribute to a greater burden on society in the form of chronic manifestations later in life.

Introduction

Maltreatment, especially of children, can be insidious in nature, with signs that may not be evident until after a culmination of unfortunate events. In Australia during 2010-2011, there were 286,437 reports of suspected child abuse and neglect made to state and territory authorities, with a total of 40,466 substantiations (Figure 1). [1] These notifications include four maltreatment types: physical abuse, sexual abuse, emotional abuse and neglect (Figure 2). As of 30 June 2010, there were 11,468 Aboriginal and Torres Strait Islander children in out-of-home care as a result of this. The national rate of Indigenous children in out-of-home care was almost ten times higher than that for non-Indigenous children. [1]

The child protection statistics shown above tell us how many children have come into contact with child protection services; however, they do not take into account the silent statistics of those who suffer without seeking aid. In all jurisdictions in 2010-11, girls were much more likely than boys to be the subject of a substantiation of sexual abuse. In contrast, boys were more likely to be the subject of physical abuse than girls in all jurisdictions except Tasmania and the Northern Territory. [1]

Unfortunately, it is difficult to obtain accurate statistics regarding the number of children who die from child abuse or neglect in Australia, as comprehensive information is not currently collected in every jurisdiction. The latest recorded data, however, indicate that in 2006 assault was the third most common type of injury causing death among Australian children aged 0-14 years, [2] accounting for 27 child deaths in 2006-07. Medical practitioners must be aware of the signs of child maltreatment and their long-term consequences, as they possess the opportunity to intervene and change the consequences of this terrible burden on afflicted children.

Case Presentation

Ms. LW, a nine month old Indigenous female, presented with her mother to the Alice Springs ED at 2100 with complaints of bloody diarrhoea. Emergency department staff noted that on presentation the infant was visibly uncomfortable and tearful. She was afebrile, with mild tachypnoea (50 breaths per minute); all other vital signs were normal. Examination of the infant revealed discomfort in the epigastric region with no other significant findings, including no organomegaly or distension. No other abdominal signs, in particular guarding or rigidity, were noted on admission. Systems review did not show any significant findings. Past medical history included recurrent chest infections, with the last episode two months prior. No immunisation history was available. The staff had difficulty examining the child because she was highly irritable. It was also difficult to elicit a comprehensive history from the mother, as she spoke minimal English and was relatively dismissive of questions. No interpreter was used in this setting.

The patient was diagnosed with viral gastroenteritis and treated conservatively with intravenous fluids to maintain hydration. After six hours of observation and a slight improvement in Ms. LW’s vital signs, she was sent home in the early hours of the morning following intense pressure from the family. No other treatments or investigations were undertaken, and staff discharged her with the recommendation to return if her symptoms worsened over the next day.

The patient returned to the ED three days later with symptoms clearly of a different nature to the previously diagnosed gastroenteritis. On general observation the patient appeared unwell and irritable, and was crying weakly. On examination she was febrile (40°C) and toxic, with tachycardia (168 beats per minute), tachypnoea (60 breaths per minute), and gross distension of her abdomen (Figure 3).

The case was referred to the on-call surgeon, who made a provisional diagnosis of perforated bowel and decided to perform a laparotomy. She was immediately started on intravenous broad-spectrum antibiotics before surgery: ampicillin (200 mg 6-hourly), metronidazole (30 mg 12-hourly) and gentamicin (20 mg daily).

Emergency laparotomy was performed; on initial exploration the peritoneum was found to contain foul-smelling serous fluid with a mixture of blood and faecal matter. Further exploration found a perforation of the jejunum, with the mesentery torn from the fixed end of the jejunum (Figure 5). The surgeons resected the gangrenous portion of the jejunum and performed an end-to-end anastomosis of the small bowel.

The abdomen was lavaged with copious amounts of warm saline and the abdominal wall was closed in interrupted layers. Post-surgery the child remained intubated and ventilated and was admitted to the ICU. The infant was successfully extubated 24 hours after surgery, and oral feeding was commenced at 48 hours. The patient made an uneventful recovery and was later transferred to the paediatric ward.

The surgeons commented that the initial perforation at the fixed mesenteric end of the jejunum caused devascularisation of this segment, leading to further degradation and gangrene of the intestine and thus worsening the child’s condition.

As the surgeons had indicated that this injury was of a non-accidental nature, the parents of the infant were brought in to be interviewed by the consultant with the aid of an interpreter. The parents denied any falls or injuries sustained in the events leading to the presentation, a mechanism the surgical team had already excluded due to the absence of associated injuries and symptoms. The consultant noted that both parents were not forthcoming with information even with the aid of an interpreter. Further questioning by the allied health team finally led to an answer. The father admitted that on the morning of the initial presentation, while he was sitting on the ground, his daughter pulled his hair from behind and he responded by elbowing her in the mid-region of her abdomen. Upon obtaining this information a skeletal survey was undertaken, which revealed a hairline fracture of the shaft of the left humerus with minor bruising in this region.

Case resolution

The infant was taken into care on the basis of neglect and the case was mandatorily reported to Child Protective Services. The parents were then reported to the police for further questioning and probable court hearings. Once the patient was stable, she was discharged into the care of her grandmother, with a further review to be made by Child Protective Services at a later date.

Discussion

Child abuse remains a cause for concern in Australia, although there has been a decrease in substantiations since 2007. [3] While total substantiations have decreased nationally, Victoria, South Australia, Western Australia, Tasmania and the Northern Territory have each recorded an increase in the number of abuse substantiations. The most common abuse type reported in 2010-2011 was emotional abuse (36%), followed by neglect (29%), physical abuse (22%) and sexual abuse (13%).

Children who suffer maltreatment not only carry physical burdens; they often have many associated long-term problems. [4] The recently coined term for this is ‘toxic stress’, which results from sustained neglect or abuse. Children are unable to cope and hence activate the body’s stress response (elevated cortisol levels). When this occurs over a prolonged period it can lead to permanent changes in the development of the immune and central nervous systems (e.g. the hippocampus). [5] This combination produces cognitive deficits that manifest in adult life as poor academic performance, substance abuse, smoking, depression, eating disorders, risky sexual behaviours, adult criminality and suicide. [6] These health issues contribute a significant proportion of society’s health burden.

Medical practitioners, especially those working in the ED, are in an advantageous position to intervene in toxic stress in children. It is important to be aware of the signs or ‘red flags’ that may point to maltreatment, including: failure to thrive, burn marks (cigarette), unusual bruising and injuries, symptoms that do not match the history, recurrent presentations to health services, recurrent vague symptoms, a cold and withdrawn child, lethargic appearance, immunodeficiency without specific pathology and, less commonly, Munchausen syndrome by proxy. [7]

As alluded to earlier, child protection data reflect only those cases reported to child protective services. Economically disadvantaged families are more likely to come into contact with, and be under the scrutiny of, public authorities. This means that abuse and neglect are more likely to be identified in the economically disadvantaged; [4] however, child abuse occurs across all socioeconomic demographics.

This case illustrates common pitfalls in the clinical setting, one of these being the lack of a clear history obtained at initial presentation. Poor communication between the patient’s mother and the attending clinician prevented any meaningful information from being gathered, yet no interpreting service or Aboriginal health worker was used. Aboriginal health workers have usually lived in the community in which they work, and most have developed lasting relationships with the community and with the various government agencies. [8] This makes them experts at bridging the communication gap between patient and doctor.

Another clinical pitfall demonstrated by this case was the poor examination of the infant and the failure to recognise important signs such as guarding and rigidity, which are highly suggestive of insidious pathology. These findings would have led a clinician to perform further investigations, such as a CXR or CT scan, which would have identified the underlying pathology. Additionally, no systemic examination was conducted in the haste to discharge the patient from the ED. As a result, another important sign of abuse, the bruising on the infant’s left arm, was missed. Furthermore, no investigations were performed when the infant initially presented to the ED; hence the diagnosis of viral gastroenteritis was never confirmed, and bacterial gastroenteritis was not properly excluded despite being highly likely in the context of bloody diarrhoea.

Emergency department physicians face many stressors and constant interruptions during their shifts, a combination known to cause breaks in routine tasks. [9] In 2008, the Australian Medical Association conducted a survey of 914 junior doctors and found that the majority met well-established criteria for low job satisfaction (71%), burnout (69%) and compassion fatigue (54%). [10] These factors indirectly affect patient outcomes and, in particular, can lead to key diagnostic clues being overlooked. Since the recent introduction of the National Emergency Access Target (NEAT), also known as the ‘4 hour rule’, statistics have shown no change in mortality. [11,12] However, this is a recent implementation, and there is a possibility that, with junior doctors and nursing staff pushed for a high turnover of patients, child maltreatment may be missed.

Recommendations

1. Early recognition of child abuse requires a high index of suspicion.

2. Be familiar with mandatory reporting legislation as it varies between state/territories.

3. As junior doctors it is imperative that we use all hospital services, such as interpreting services and Aboriginal health workers, to optimise history taking.

4. It is important to practise in a reflective manner to prevent inexperience, external pressures and job dissatisfaction from affecting the quality of patient care.

5. Services should be encouraged to have Indigenous social/case workers available for consultation.

Conclusion

Paediatric presentations within a hospital can be very challenging, and as junior doctors have the most contact with these patients, they must be aware of the important signs of abuse and neglect. We have outlined the importance of communicating effectively with Indigenous patients and the pitfalls that arise when this is done poorly. Doctors are in a position to detect child abuse and to intervene before the long-term consequences manifest.

Conflict of interest

None declared.

Consent declaration

Informed consent was obtained from the next-of-kin for publication of this case report and all accompanying figures.

Correspondence

M Jacob: matt.o.jacob@gmail.com

 

Categories
Case Reports Articles

Dengue fever in a rural hospital: Issues concerning transmission

Introduction: Dengue is either endemic or epidemic in almost every country located in the tropics. Within northern Australia, dengue occurs in epidemics; however, the Aedes aegypti vector is widespread in the area and thus there is a threat that dengue may become endemic in future years. Case presentation: An 18 year old male was admitted to a rural north Queensland hospital with the provisional diagnosis of dengue fever. No specific consideration was given to the risk that this patient posed to other patients, including a 56 year old male with chronic myeloid leukaemia and prior exposure to dengue. Discussion: Much media and public attention has been given to dengue transmission in the scope of vector control in the community. Hospital-based dengue transmission from patient-to-patient requires consideration so as to minimise unnecessary morbidity and mortality. Vector control within the hospital setting appears to be an appropriate preventative measure in the context of the presented case. Transfusion and transplantation-related transmission of dengue between patients are important considerations. Vertical dengue infection is also noted to be possible. Conclusion: Numerous changes in the management of dengue-infected patients can be made that are economically feasible. Education of healthcare workers is essential to ensure the safety of all patients admitted to hospitals in dengue-affected areas. Bed management in particular is one area that may benefit from increased attention.

Introduction

Dengue is diagnosed annually in more than 50 million people worldwide and represents one of the most important arthropod-borne viral infections. [1-4] Estimates suggest that the potentially lethal complication of dengue haemorrhagic fever occurs in 500 000 people and an alarming 24 000 deaths result from infection annually. [1,2,4] Coupled with the increasing frequency and severity of outbreaks in recent years, dengue has been identified as a major and escalating public health concern. [2,4,5]

Whilst most of the burden of dengue occurs in developing countries, northern Australia is known to have epidemics. Suggestions have been made that dengue may become endemic in this region in future years based on increasing migration, international travel, population growth, climate change and the widespread presence of vectors. [6-12] The vast majority of studies have focused on vector control in the community setting. [2,4,5,9] The purpose of this report is to discuss the risks of transmission of dengue in a hospital setting and, in particular, patient-to-patient transmission. Transmission of dengue in a hospital is important to consider, as the immunological responses and health status of hospitalised patients can be poor. Inadequate management of dengue-infected patients may ultimately threaten the lives and complicate the treatment of other patients, creating unnecessary economic costs and demands on healthcare. [12-14]

This case report highlights the difficulties of handling a suspected dengue-infected patient from the perspective of an Australian rural hospital. Recommendations are made to improve management of such patients, in particular, embracing technological advancements including digital medical records that are likely to become available in future years.

Case report

An 18 year old male, patient 1, presented to a rural north Queensland hospital emergency department with a four day history of fever, generalised myalgia and headache. He resided in an area that was known to be in the midst of a dengue outbreak. He had no past medical or surgical history and had never travelled. On examination, the patient’s tympanic temperature was 38.9°C and he had dry mucous membranes. No rash was observed and no other abnormal findings were noted. Laboratory investigations, which included dengue PCR and dengue serology, were taken. He was admitted for observation and given intravenous fluids. A provisional diagnosis of dengue fever was made.

The patient was subsequently placed in a four-bed room. Two of the beds were unoccupied; the remaining bed was occupied by patient 2, a 56 year old male with chronic myeloid leukaemia (CML) who had been hospitalised the previous day with a lower respiratory tract infection. Patient 2’s medical history was notable for an episode of dengue fever five years previously, following an overseas holiday.

The patient with presumed dengue fever remained febrile for two days. He walked around the ward and went outside for cigarettes. He also opened the room window, which was unscreened. Tests subsequently confirmed that he had a dengue viral infection.

Whilst no dengue transmission occurred, the incident raised a number of issues for consideration, as no concerns regarding transmission were raised by staff or by either patient.

Discussion

The dengue viruses are single positive-stranded RNA viruses belonging to the Flaviviridae family, with four distinct serotypes described. [4,12] Infection ranges from asymptomatic, to a mild viral syndrome associated with fever, malaise, headache, myalgia and rash, to a severe presentation characterised by haemorrhage and shock. [3,9] Currently the immunopathogenesis of severe dengue infection, which occurs in less than 5 percent of infections and includes dengue haemorrhagic fever and shock syndromes, is poorly defined. [2,3]

Whilst primary infection in the young and well-nourished has been associated with the development of severe infection, the major aetiology of severe infection is thought to be secondary infection with a different serotype. [3,9] This has been hypothesised to result from an antibody-mediated enhancement reaction, although authors suggest that other factors are also likely to contribute. [3,4,9] Untreated dengue haemorrhagic fever is characterised by increased capillary permeability and haemostatic changes and has a mortality rate of 10-20 percent. [2,3,5] This complication can further deteriorate into dengue shock syndrome. [3] Whilst research shows that the serious complications of dengue infection occur mainly in children, adults with asthma, diabetes and other chronic diseases may be at increased risk, and secondary dengue infections could be life threatening in these groups. [4,5,15]

The most commonly reported route of infection is the bite of an infected Aedes mosquito, primarily Aedes aegypti. [2-14] This vector feeds during the day, prefers human blood and breeds in close proximity to humans. [5,12,13] The transmission of dengue has been widely reported in the urban setting and has a geographical distribution covering more than 100 countries. [3,13] However, only one study has reported dengue vector transmission from within a hospital. [16] Kularatne et al. (2007) described a dengue outbreak that started within a hospital in Sri Lanka and was unique in that a building site next to the hospital provided breeding sites for mosquitoes. [16] Dengue infection was noted to cause significant cardiac dysfunction, and of particular note was that medical students, nurses, doctors and other hospital employees were the main targets. [16] The authors report that in the initial outbreak one medical student died of shock and severe pulmonary oedema as a result of acute viral myocarditis. [16] This case highlights the risk of dengue transmission within a hospital setting.

In addition to vector-borne transmission, dengue can also be transmitted by other routes, including transfusion. [17,18] The incidence of blood transfusion-associated dengue infection has primarily been investigated in endemic countries. In one study conducted in Hong Kong by Chuang et al. (2008), the prevalence of this mode of transmission was 1 in 126. [17] Whilst rare in Australia, an investigation undertaken during the 2004 outbreak in Cairns, Queensland calculated the risk of transfusion-related dengue infection by mathematical modelling and reported the risk of collecting a viraemic donation as 1 in 1028 persons during the course of the epidemic. [18] Donations from the affected areas were not used for transfusion. [18]

Case reports have also demonstrated that transplantation can represent a route of dengue infection between hospitalised patients. [19,20] Rigau-Pérez and Laufer (2006) described a six year old child who developed fever four days after bone marrow transplantation and subsequently died. [19] Dengue virus was isolated from the blood and tissues of the child, and the donor was subsequently found to have become febrile, with tests for dengue returning positive. [19] Dengue infection resulting from solid organ transplantation has also been described in a 23 year old male with end-stage renal failure. [20] The donor of the transplanted kidney had dengue fever six months prior to the transplant, and the recipient developed dengue fever five days postoperatively. [20] The recipient had a complicated recovery and required an emergency laparotomy and blood products to ensure survival. [20] The authors of this case report note that the patient resided in a dengue-endemic region and therefore the usual mode of infection could not be excluded. [20]

Whilst not applicable to the presented case, vertical transmission of dengue is also an important consideration in hospitalised patients. Reports from endemic countries have suggested that transmission can occur if infection of the mother occurs within eight days of delivery. [9,21] One neonatal death has been reported as a result of dengue infection, and a number of studies have reported peripartum complications requiring medical treatment in other neonates. [21,22] These results should be interpreted with caution owing to the cited difficulties in the clinical diagnosis of dengue in neonates, and it is possible that vertical transmission is underreported. [22]

Taking into account the reported case and the evidence presented, it is clear that patient 1 posed a risk to patient 2. It is essential to acknowledge that dengue transmission can occur within a hospital setting. Whilst only one study has reported vector transmission of dengue within a hospital, it does demonstrate the real possibility of transmission associated with close contact and a competent vector. [16] It should also be emphasised that patient 1 walked outside the hospital on numerous occasions and that unscreened windows were open within the hospital ward room. Consequently, patient-to-patient dengue infection would have been possible not only for patient 2 but also for other admitted patients. Additionally, healthcare workers and community members living in the area surrounding the hospital were also at risk.

Acknowledging that vector transmission within a hospital is the most important hazard for patient-to-patient transmission of dengue, numerous control measures can be implemented to decrease this risk. Infrastructure planning within hospitals is important, as screened windows would decrease the ability of mosquitoes to enter hospitals. In hospitals where such changes may not be economically feasible, studies have reported that having patients spend as much time as possible under insecticide-treated mosquito nets, limiting outdoor time for infected patients, wearing protective clothing and applying insecticide several times throughout the day may decrease the possibility of dengue infection within hospitals. [23-25]

Educational programs for healthcare professionals and patients also warrant consideration. Numerous programs have been established, primarily in the developing world, and have proven beneficial. [26,27] It is important to create innovative education programs aimed at those healthcare workers who care for suspected dengue-infected patients as well as at members of the public. This is one area that needs to be explored in future years.

Additionally, this case demonstrates that current protocols in bed management do not consider a past medical history of dengue infection when assigning patients to beds. This report draws attention to the importance of identifying those patients at risk of secondary dengue infection. As electronic patient records are implemented in many countries throughout the world, a past history of confirmed dengue infection should be considered. This may mean that, when resources allow, such patients are not placed in the same room, thereby avoiding unnecessary risk. Whilst this would not completely exclude the possibility of dengue transmission in a hospital, it may set the trend for improved protocols in infection control, particularly as secondary infection is associated with poorer outcomes. [2-5,9]

Conclusion

Infection control efforts are often targeted at tertiary referral centres. This report highlights the importance of appreciating infection control within a rural setting. Dengue infection between patients is a possibility, with the available evidence suggesting that this is most likely to arise from exposure of an infected individual to a competent vector. Numerous changes have the potential to decrease the likelihood of dengue infection. Healthcare worker education is a critical component of these changes, so that suspected dengue-infected patients can also be educated regarding the risk they represent to members of the public. The use of screened windows, insecticide-treated mosquito nets, and patient measures such as wearing protective clothing and applying insect repellents are all preventative measures that need to be considered. Future research is likely to develop technological aids for appropriate bed assignment. This will help ensure that unnecessary morbidity and mortality associated with dengue infection are avoided.

Consent declaration

Informed consent was obtained from the patients for publication of this report.

Conflict of interest

None declared.

Correspondence

R Smith: ross.smith@my.jcu.edu.au

 

Categories
Review Articles Articles

Treatment of persistent diabetic macular oedema – intravitreal bevacizumab versus laser photocoagulation: A critical appraisal of BOLT Study for an evidence based medicine clinical practice guideline

Laser photocoagulation has remained the standard of treatment for diabetic macular oedema (DME) for the past three decades. However, it has been shown to be of limited benefit in chronic diffuse DME. Intravitreal bevacizumab (ivB) has been proposed as an alternative and effective treatment for DME. This review evaluates the evidence comparing bevacizumab to laser photocoagulation in treating persistent DME. A structured systematic search of the literature, with critical appraisal of retrieved trials, was performed. Four randomised controlled trials (RCTs) supported beneficial effects of ivB over laser photocoagulation. Only one RCT, the BOLT study, compared laser to ivB in persistent DME. The results of that study showed significant improvement in mean best corrected visual acuity (BCVA) and greater reduction in mean central macular thickness (CMT) in the ivB group, with no significant difference in safety outcome measures.

Introduction

Diabetic macular oedema is a frequent manifestation of diabetic retinopathy and one of the leading causes of blindness and visual acuity loss worldwide. [1] The prevalence of DME rises with the duration and stage of diabetic retinopathy: three percent in mild non-proliferative retinopathy, 38% in moderate-to-severe non-proliferative retinopathy and 71% in proliferative retinopathy. [2]

Diabetic macular oedema (DME) is a consequence of micro-vascular changes in the retina that lead to fluid/plasma constituent accumulation in the intra-retinal layers of the macula thereby increasing macular thickness. Clinically significant macular oedema (CSME) is present when there is thickening within or close to the central macula with hard exudates within 500μm of the centre of the macula and with retinal thickening of at least one disc area in size. [3,4] As measured in optical coherence tomography, central macular thickness (CMT) corresponds approximately to retinal thickness at the foveal region and can quantitatively reflect the amount of CSME a patient has. [5] Two different types of DME exist: focal DME (due to fluid accumulation from leaking micro-aneurysms) and diffuse DME (due to capillary incompetence and inner-retinal barrier breakdown).

The pathogenesis of diabetic macular oedema is multi-factorial, influenced by diabetes duration, insulin dependence, HbA1c levels and hypertension. [6] Macular laser photocoagulation has remained the standard treatment for both focal and diffuse DME since 1985, based on the recommendations of the Early Treatment Diabetic Retinopathy Study (ETDRS). This study showed that the risk of CSME decreases by approximately 50% (from 24% to 12%) at three years with the use of macular laser photocoagulation. However, the improvement in visual acuity is modest, observed in less than three percent of patients. [3]
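
As a worked illustration of the figures quoted above, taken at face value: a fall from 24% to 12% is an absolute risk reduction of 12%, giving a relative risk reduction of 12/24 = 50% and, over the three year period, a number needed to treat of roughly 1/0.12 ≈ 8.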

Recent research indicates that macular laser therapy is not always beneficial and has limited results, especially for chronic diffuse DME, [3,7] with visual acuity improving in only 14.5% of patients. [8] Following laser treatment, scars may develop and reduce the likelihood of vision improvement; [3] hence alternative treatments for DME, such as intravitreal triamcinolone (ivT), have been investigated. Intravitreal triamcinolone works via a number of mechanisms, including reducing vascular permeability and down-regulating VEGF (vascular endothelial growth factor). Anti-VEGF therapies have been the focus of recent research and have been shown to potently suppress angiogenesis and decrease vascular permeability in ocular diseases such as DME, leading to improvement in visual acuity. [9] The results of treating DME with anti-VEGF agents remain controversial and are in need of larger prospective RCTs. [10]

Currently used anti-VEGF agents include bevacizumab, ranibizumab and pegaptanib. Ranibizumab has been shown to be superior to laser therapy in treating DME, in both safety and efficacy, in several studies including the RESTORE, RESOLVE, RISE and RIDE studies. [11-13] It has recently been approved by the Food and Drug Administration (FDA) for treating DME in the United States of America. [14] Bevacizumab (Avastin®) is a full-length monoclonal antibody against VEGF, binding all subtypes of VEGF. [10] In addition to treating metastatic colon cancer, bevacizumab is used extensively off-label for many ocular conditions including age related macular degeneration (AMD), DME, retinopathy of prematurity and macular oedema secondary to retinal vein occlusion. [15] Documented adverse effects of ivB include transiently elevated intraocular pressure (IOP) and endophthalmitis. [16] Systemic effects associated with ivB injection include a rise in blood pressure, thrombo-embolic events, myocardial infarction (MI), transient ischaemic attack and stroke. [16,17] Other significant adverse events of bevacizumab when given systemically include delayed wound healing, impaired female fertility, gastrointestinal perforation, haemorrhage, proteinuria, congestive heart failure and hypersensitivity reactions. [17] Although not currently approved for this indication, a 1.25-2.5 mg intravitreal dose of bevacizumab is used for treating DME without significant ocular or systemic toxicity. [15]

The DRCR.net study (2007) showed that ivB can reduce DME. [18] In addition, several studies carried out in diabetic retinopathy patients with CSME, evaluating the efficacy of ivB ± ivT versus laser, demonstrated better visual outcomes as measured by BCVA. [6,19-21] Meta-analysis of those studies indicated ivB to be an effective short-term treatment for DME, with efficacy waning after six weeks. [6] This review evaluates the evidence on the effect of ivB, compared to laser, in treating DME that persists despite standard treatment.

Clinical question

Our clinical question for this focused evidence based medicine article has been constructed to address the four elements of the problem, the intervention, the comparison and the outcomes as recommended by Strauss et al. (2005) [22]. “In diabetic patients with persistent clinically significant macular oedema (CSME) is intravitreal Bevacizumab (Avastin®) injection better than focal/grid laser photocoagulation in preserving the best-corrected visual acuity (BCVA)?”

Methodology

Comprehensive electronic searches in the British Medical Journal, Medical Journal of Australia, Cochrane Central Register of Controlled Trials, MEDLINE and PUBMED were performed for relevant literature, using the search terms diabetic retinopathy, CSME, CMT, bevacizumab and laser photocoagulation. Additional information from the online search engine, Google, was also incorporated. Reference lists of studies were then hand-searched for relevant studies/trials.

Selection

Results were restricted to systematic reviews, meta-analyses and randomised controlled trials (RCTs). Overall, six RCTs were identified that evaluated the efficacy of ivB compared to laser in treating DME. [18-21,23,24] There was also one meta-analysis comparing ivB to non-drug control treatment (laser or sham) in DME. [7] One study published pilot results of the main trial, so the final version was selected for consideration to avoid duplication of results. [20,23] One study was excluded because it excluded patients with focal DME. [19] The DRCR study (2007) was excluded because it was not designed to evaluate whether treatment with ivB was beneficial in DME patients. [18] A meta-analysis by Goyal et al. was also excluded because it evaluated bevacizumab against sham treatment and not laser therapy. [7]

Thus, three relevant RCTs were narrowed down for analysis (Table 1) in this evidence based medicine review. [20,21,24] However, only the BOLT study (2012) evaluated the above treatment modalities in persistent CSME. The other two RCTs evaluated the treatment efficacies in patients with no prior laser therapies for CSME/diabetic retinopathy. Hence, only the BOLT study (2012) has been critically appraised in this report. The study characteristics of the other relevant RCTs evaluating ivB versus lasers are represented in Table 1, and where possible will be included in the discussion.

Outcomes

The primary outcomes of interest are changes in BCVA and CMT, when treated with ivB or lasers for DME, whilst the secondary outcomes are any associated adverse events. All three studies were prospective RCTs with NHMRC level-II evidence. Table 1 summarises the overall characteristics of the studies.

Critical appraisal

The BOLT Study (2010) is the twelve month report of a two year, single centre, two arm, randomised, controlled, masked clinical trial from the United Kingdom (UK). As such, it qualifies as NHMRC [25] level-II evidence. It is the only RCT to compare the efficacy of ivB with laser in patients with persistent CSME (both diffuse and focal DME) who had undergone at least one prior laser therapy for CSME. A comparison of the study characteristics of the three chosen RCTs is presented in Table 2.

Major strengths of the BOLT Study compared to Soheilian et al. and Faghihi et al. studies include the duration of study and increased frequency of review of patients in ivB groups. The BOLT Study was a two year study, whereas the other two studies’ duration was limited to less than a year (Table 1). Because of its lengthy duration, it was possible to evaluate the safety outcome profile of ivB in the BOLT Study, unlike in the other two studies.

Research has indicated that the effects of ivB last between two and six weeks, [6] while the effects of laser can last three to six months. [3] In BOLT, the ivB group was assessed every six weeks, with re-treatment with ivB provided as required, while the laser group was followed up every four months, ensuring the efficacy profile was preserved and reflected in the results. In contrast, in Soheilian et al. [20] follow-up visits were scheduled every twelve weeks after the first visit, and in Faghihi et al. [21] follow-up was at six and sixteen weeks. There may therefore have been a bias against the efficacy profile of ivB in those studies, given the insufficient follow-up and re-treatment. Apart from the follow-up and therapy modalities, the groups were treated equally in BOLT, protecting the analysis against treatment bias.

Weaknesses of the BOLT Study [24] include the limited number of patients: 80 eyes in total, with 42 patients allocated to ivB and 38 to laser therapy. In the ivB group, six patients discontinued the intervention; only 37 patients were included in the analysis at 24 months, with five excluded because data were not available. Similarly, of the 38 patients allocated to the laser group, 13 discontinued the intervention; 28 patients were analysed overall, with ten excluded from analysis. However, the BOLT Study performed intention to treat analysis, minimising dropout effects. Given these considerations, we feel the BOLT Study fulfils the criteria for a valid RCT with significant strengths.

Magnitude and precision of treatment effect from BOLT Study

Best corrected visual acuity outcomes

A significant difference existed between the mean ETDRS BCVA at 24 months in the ivB group (64.4±13.3) and the laser group (54.8±12.6), with p=0.005 (a p-value <0.05 indicates statistical significance between the groups under comparison). Furthermore, the study reports that the ivB group gained a median of 9 ETDRS letters whereas the laser group gained a median of 2.5 letters (p=0.005). Since there was a significant difference in the duration of CSME between the two groups, the authors performed an analysis adjusting for this variable. They also adjusted for baseline BCVA and for patients who had cataract surgery during the study. The mean BCVA remained significantly higher in the ivB group compared to laser.

A marked difference was also shown in the proportion of patients who gained or lost vision between the two treatment groups. Approximately 49% of patients in the ivB group gained ten or more ETDRS letters, compared to seven percent of patients in the laser group (p=0.01). Similarly, none of the patients in the ivB group lost 15 or more ETDRS letters, compared with 86% of the laser group who lost fewer than 15 letters (p=0.002). In addition, the study implied that BCVA and CMT can be maintained long term with a reduced injection frequency of six to twelve months. However, the authors also suggest that increasing the frequency of injections to every four weeks (rather than the six week interval used in the study) may provide better visual acuity gains, as reported in the RISE and RIDE studies. [13]

Central macular thickness outcomes

The mean change in CMT over the 24 month period was -146±171μm in the ivB group compared to -118±112μm in the laser group (p=0.62), showing no statistically significant difference between ivB and laser in reducing CMT. This differed from the twelve month report of the same study, which indicated greater improvement in CMT in the ivB group compared to the laser group.

Retinopathy

Results of the BOLT Study indicated a trend of reducing retinopathy severity level in the ivB group, while the laser group showed stabilised grading. However, the Mann-Whitney test indicated no significant difference between the groups (p=0.13). [24]

We summarised the results of the authors’ analysis of the step-wise changes in retinopathy grading levels into three categories for further analysis: deteriorating, stable and improving (Table 3). As shown in the table, we calculated the p-values between the two groups for each category using the chi-square test.

We attempted to further quantify the magnitude of the effect of ivB treatment compared to laser on retinopathy severity level by calculating the number needed to treat (NNT) using the data in Table 3. The results showed an absolute risk reduction of nine percent with an NNT of 10.9 (95% CI ranging from harm in 21.6 patients treated to benefit in 3.6 patients treated). Since the confidence interval spans both benefit and harm, this trial does not give sufficient information to inform clinical decision making regarding change in retinopathy severity levels with ivB treatment.
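
For readers unfamiliar with the calculation, the NNT is simply the reciprocal of the absolute risk reduction (ARR). Using the rounded figure quoted above as an illustration, NNT = 1/ARR = 1/0.09 ≈ 11; the more precise value of 10.9 presumably reflects the unrounded risk difference derived from Table 3.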

Safety outcome measures

As mentioned, one of the strengths of the BOLT Study is its evaluation of the safety profile of ivB over its two year duration. The study analysed the safety outcomes of macular perfusion and retinal nerve fibre layer (RNFL) thickness in detail. The results indicated no significant difference between the laser and ivB groups in the mean greatest linear diameter of the foveal avascular zone from baseline, or in the worsening of severity grades. Similarly, no significant changes in median RNFL thickness were reported between the ivB and laser groups.

At 24 months, the number of observed adverse events, both ocular and systemic, was low. We analysed the odds ratios (Table 4) as per the published results of the study. Statistically significantly higher odds of eye pain and irritation during or after intervention, of sustaining subconjunctival haemorrhage and of having a red eye (approximately eighteen times greater) were found in the ivB group compared to laser. As can be further inferred from the table, no significant differences were found between the groups in other non-ocular adverse events, ocular serious adverse events or non-ocular serious adverse events, including stroke, MI and other thrombo-embolic events.
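
For clarity, the odds ratios in Table 4 follow the standard formulation: OR = (patients with the event ÷ patients without the event in the ivB group) divided by (patients with the event ÷ patients without the event in the laser group), with an OR of 1 indicating no difference between groups. The figures of approximately eighteen quoted above therefore indicate substantially higher odds of these minor ocular events after ivB injection, rather than an eighteen-fold absolute risk.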

Clinical applicability of results

The BOLT Study participants were recruited from Moorfields Eye Hospital (UK), a setting with demographics and healthcare standards comparable to Australia. The study considered both patient-oriented outcomes (BCVA, retinopathy severity level changes, adverse events) and disease-oriented outcomes (CMT), making it both theoretically and practically relevant and informative for clinicians and researchers alike. Given this, the results appear reasonably applicable to the Australian population. The personnel involved in the study (such as outcome assessors) and the imaging technology are also available locally, making the treatment feasible in our setting.

In Australia, the overall prevalence of diabetic retinopathy is 24.5%, [6] and this figure rises every year with the progressing obesity and diabetes epidemic. Bevacizumab is currently approved under the Pharmaceutical Benefits Scheme for metastatic colon cancer.

It is being used successfully ‘off-label’ for the treatment of ocular conditions including age related macular degeneration and diabetic macular oedema. It costs about one-fortieth as much as ranibizumab, another anti-VEGF drug, which is currently approved for AMD treatment in Australia and FDA-approved for DME treatment in America. [26] Since recent studies indicate no superiority of ranibizumab over bevacizumab in safety or efficacy in preserving visual acuity, [27,28] and since recent NICE guidance also recommends against ranibizumab for diabetic macular oedema because of the high costs involved in administering that drug, [29] bevacizumab should be further considered and evaluated for cost effectiveness in routine clinical practice.

Given the benefits of ivB, namely improved BCVA, no significant adverse events and no risk of permanent laser scarring of the retina, and the discussion above, using ivB for persistent DME appears to be an evidence based and relatively safe practice.

Conclusion

The BOLT Study assessed the safety and efficacy of ivB in DME persisting despite previous laser therapy. The power of the study was 0.8, enabling it to detect BCVA differences between the two groups. In line with many previous studies evaluating the efficacy of ivB, the results indicate a significant improvement in mean ETDRS BCVA and no significant differences in severe systemic or ocular adverse events compared to the laser group. This study supports the use of ivB in patients with CSME with adequate precision. However, the magnitude of the effect on changes in diabetic retinopathy severity, in CMT and in other adverse events needs to be evaluated further through large prospective RCTs.

Conflict of interest

None declared.

Correspondence

pavani.kurra@gmail.com

Categories
Articles Review Articles

Seasonal influenza vaccination in antenatal women: Views of health care workers and barriers in the delivery of the vaccine

Background: Pregnant women are at an increased risk of developing influenza. The National Health and Medical Research Council recommends seasonal influenza vaccination for all pregnant women who will be in their second or third trimester during the influenza season. The aim of this review is to explore the views of health care workers regarding seasonal influenza vaccination in antenatal women and describe the barriers in the delivery of the vaccine. Methods: A literature search was conducted using MEDLINE for the terms: “influenza,” “pregnancy,” “antenatal,” “vaccinations,” “recommendations,” “attitudes,” “knowledge” and “opinions”. The review describes findings of publications concerning the inactivated influenza vaccination only, which has been proven safe and is widely recommended. Results: No studies have addressed the knowledge and attitudes of Australian primary health care providers towards influenza vaccination, despite their essential role in immunisation in Australia. Overseas studies indicate that the factors contributing to low vaccination rates are 1) a lack of general knowledge of influenza and its prevention amongst health care workers (HCWs), 2) variable opinions and attitudes regarding the vaccine, 3) lack of awareness of the national guidelines and 4) lack of discussion of the vaccine by the HCW. Lack of maternal knowledge regarding the safety of the vaccine and the cost-burden of the vaccine are significant barriers to the uptake of the vaccination. Conclusion: Insufficient attention has been given to the topic of influenza vaccination in pregnancy. Significant efforts are required in Australia to obtain data about the rates of influenza vaccination of pregnant women.

Introduction

Seasonal influenza results in annual epidemics of respiratory disease. Influenza epidemics and pandemics increase hospitalisation rates and mortality, particularly among the elderly and high risk patients with underlying conditions. [1-3] Pregnant women are at an increased risk of developing influenza due to progressive suppression of Th1 cell-mediated immunity and other physiological changes, with morbidity culminating towards the end of pregnancy. [4-7]

Annual influenza vaccination is the most effective method of preventing influenza virus infection and its complications. [8] Trivalent inactivated influenza vaccine (TIV) has been proven safe and is recommended for persons aged ≥6 months, including those with high-risk conditions such as pregnancy. [8-10] A randomised controlled study in Bangladesh demonstrated that TIV administered in the third trimester of pregnancy reduced maternal respiratory illness and infant influenza infection. [11,12] Another randomised controlled trial showed that influenza immunisation of pregnant women reduced influenza-like illness by more than 30% in both the mothers and the infants, and reduced laboratory-proven influenza infections in 0- to 6-month-old infants by 63%. [13]

The current Australian immunisation guidelines recommend routine administration of influenza vaccination for all pregnant women who will be in the second or third trimester during the influenza season, including those in the first trimester at the time of vaccination. [4,14,15] The seasonal influenza vaccination has been available free to all pregnant women in Australia since 2010. [4] However, the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) statement on ‘Pre-pregnancy Counselling and Routine Antenatal Assessment in the Absence of Pregnancy Complications’ does not explicitly mention routine delivery of influenza vaccination to healthy pregnant women. [16] RANZCOG recently published a college statement on swine flu vaccination during pregnancy, advising that pregnant women without complications or a recent travel history should weigh the risk-benefit ratio before deciding whether to have the H1N1 influenza immunisation. [17] It is therefore evident that there is conflicting advice in Australia about the routine delivery of influenza vaccination to healthy pregnant women. In contrast, a firm recommendation for routine influenza vaccination of pregnant women was established in 2007 by the National Advisory Committee on Immunization (NACI) in Canada, with minimal conflict from the Society of Obstetricians and Gynaecologists of Canada (SOGC). [6] Following the 1957 influenza pandemic, the rate of influenza immunisation increased significantly, with more than 100,000 women receiving the vaccination annually between 1959 and 1965 in the United States. [8] Since 2004 the American Advisory Committee on Immunization Practices (ACIP) has recommended influenza vaccination for all pregnant women, at any stage of gestation. [9] This is supported by the American College of Obstetricians and Gynecologists’ Committee on Obstetric Practice. [18]

A recent literature review by Skowronski et al. (2009) found that TIV is warranted to protect women against influenza-related hospitalisation during the second half of normal pregnancy, but that evidence is otherwise insufficient to recommend routine TIV as the standard of practice for all healthy women beginning in early pregnancy. [6] Similarly, another review examined the evidence on the risks of influenza and the risks and benefits of seasonal influenza vaccination in pregnancy and concluded that data on influenza vaccine safety in pregnancy are inadequate. [19] However, based on the available literature, there was no evidence of serious side effects in women or their infants, including no indication of harm from vaccination in the first trimester. [19]

We aim to review the literature on the delivery and uptake of influenza vaccination during pregnancy and to identify the reasons for low adherence to guidelines. This review will increase our understanding of how the influenza vaccination is perceived by health care providers and by pregnant women.

Evidence of health care provider’s attitude, knowledge and opinions

Several published studies have revealed deficits in health care providers’ knowledge of the significance of the vaccine and of the national guidelines, suggesting a low rate of vaccine recommendation and uptake by pregnant women. [20] A research project in 2006 performed a cross-sectional study of knowledge and attitudes towards influenza vaccination in pregnancy amongst all levels of health care workers (HCWs) at the Department for Health of Women and Children at the University of Milan, Italy. [20] A strength of this study was that it included 740 HCWs: 48.4% working in obstetrics/gynaecology, 17.6% in neonatology and 34% in paediatrics, of whom 282 (38.1%) were physicians, 319 (43.1%) nurses and 139 (18.8%) paramedics (health aides/healthcare assistants). The respondents were given a pilot-tested questionnaire about their perception of the seriousness of influenza, their general knowledge of influenza recommendations and preventive measures, and their personal use of influenza vaccination, to be self-completed in 20 minutes in an isolated room. Descriptive analysis of the 707 (95.6%) HCWs who completed the questionnaire revealed that the majority (83.6%) of HCWs in obstetrics/gynaecology never recommended the influenza vaccination to healthy pregnant women. Esposito et al. (2007) highlighted that only a small number of nurses and paramedics from each speciality regarded influenza as serious, in comparison to the physicians. [20] Another study investigating the practices of midwives found that only 37% believed the influenza vaccine to be effective and 22% believed the vaccine posed a greater risk than influenza. [21] The results from these studies clearly indicate deficiencies in the general knowledge of influenza and its prevention amongst health care staff.

In contrast, a study by Wu et al. (2006) suggested an unusually high vaccination uptake rate among fellows of the American College of Obstetricians and Gynecologists (ACOG) living and practising in Nashville, Tennessee. [22] The survey focussed on physician knowledge, practices and opinions regarding influenza vaccination of pregnant women. Results revealed that 89% of practitioners reported routinely recommending the vaccine to pregnant women and 73% actually administered the vaccination to pregnant and postpartum women. [21] Sixty-two percent responded that the earliest the vaccine should be administered is the second trimester, while 32% reported that it should be offered in the first trimester. Interestingly, 6% believed that it should not be delivered at all during pregnancy. Despite the national recommendation to administer the vaccination routinely to all pregnant women, [4] more than half of the obstetricians preferred to withhold it until the second trimester due to concerns regarding vaccine safety, association with spontaneous abortion and possible disruption of embryogenesis. [22] Despite the high uptake rate reported by respondents, there are a few major limitations to this study. First, the researchers excluded family physicians and midwives practising obstetrics from their survey, which prevents a true representation of the sample population. Second, the vaccination rates were self-reported by the practitioners and not validated, which increases the likelihood of personal bias.

It is evident that HCWs attending to pregnant women and children have limited and frequently incorrect beliefs concerning influenza and its prevention. [20,23] A study by Tong et al. (2008) demonstrated that only 40% of the health care providers at the three Toronto hospitals studied were aware of the high-risk status of pregnant women, and only 65% were aware of the NACI recommendations. [23] Furthermore, obstetricians were less likely than family physicians to indicate that it was their responsibility to discuss, recommend or provide influenza vaccination. [23] Tong et al. (2008) also demonstrated that high levels of provider knowledge about influenza and maternal vaccination, positive attitudes towards influenza vaccination, increased age, being a family physician and having been vaccinated against influenza were associated with recommending the influenza vaccine to pregnant women. [23] These findings are also supported by Wu et al. and Esposito et al.

Silverman et al. (2001) concluded that physicians were more likely to recommend the vaccine if they were aware of current Centers for Disease Control and Prevention guidelines, gave vaccinations in their offices and had been vaccinated against influenza themselves. [24] Similarly, Lee et al. (2005) showed that midwives who had received the immunisation themselves and firmly believed in its benefits were more likely to offer it to pregnant women. [21] Wallis et al. (2006) conducted a multisite interventional study involving educational sessions with physicians and the use of “Think Flu Vaccine” notes on active obstetric charts, demonstrating a fifteen-fold increase in the rate of influenza vaccination in pregnancy. [25] This study also demonstrated that the increase in uptake was greater in family practices than in obstetric practices, and greater in small practices than in large practices.

Overall, the literature here is derived mostly from American and Canadian studies, as there are no data available for Australia. Existing data suggest that there is a significant lack of understanding of influenza vaccine safety, benefits and recommendations amongst HCWs. [20-27] These factors may lead to incorrect assumptions and infrequent vaccine delivery.

Barriers in delivering the influenza vaccinations to pregnant women

Aside from the gaps in health care providers’ understanding of vaccine safety and national guidelines, several other barriers to delivering the influenza vaccine to pregnant women have been identified. A study published in 2009, based on CDC analysis of data from the Pregnancy Risk Assessment and Monitoring System in Georgia and Rhode Island over the period 2004-2007, showed that the most common reasons for not receiving the vaccination were “I don’t normally get the flu vaccination” (69.4%) and “my physician did not mention anything about a flu vaccine during my pregnancy” (44.5%). [28] Lack of maternal knowledge about the benefits of influenza vaccination has also been demonstrated by Yudin et al. (2009), who conducted a cross-sectional in-hospital survey of 100 postpartum women during the influenza season in downtown Toronto. [29] This study concluded that 90% of women incorrectly believed that pregnant women have the same risk of complications as non-pregnant women, and 80% incorrectly believed that the vaccine may cause birth defects. [29] Another study highlighted that 48% of physicians listed patient refusal as a barrier to administering the vaccine. [22] These results are supported by Wallis et al. (2006), who focused on using simple interventions such as chart reminders to surmount the gaps in women’s knowledge. [25] ‘Missed opportunities’ by obstetricians and family physicians to offer the vaccination have been suggested as a major obstacle to the delivery of influenza vaccination during pregnancy. [14,23,25,28]

During influenza season, hospitalised pregnant women with respiratory illness have significantly longer lengths of stay and higher odds of delivery complications than hospitalised pregnant women without respiratory illness. [5] In some countries the cost-burden of the vaccine to women is another major barrier contributing to lower vaccination rates among pregnant women. [22] This is not an issue in Australia, where the vaccination is free for all pregnant women, and provision of free vaccination is likely to be advantageous when considering the cost-burden of influenza on the health-care sector. However, the cost-burden on the patient can also manifest as a lack of access, as reported by Shavell et al. (2012); patients who lacked insurance and transportation were less likely to receive the vaccine. [30]

This is supported by several studies showing that the vaccine is comparatively cost-effective when considering the financial burden of influenza-related morbidity. [31] A 2006 study based on decision analysis modelling indicated that a vaccination rate of 100% in pregnant women would save approximately 50 dollars per woman, resulting in a net gain of approximately 45 quality-adjusted hours relative to providing supportive care alone in the pregnant population. [32] Beigi et al. (2009) demonstrated that maternal influenza vaccination using either the single- or two-dose strategy is a cost-effective approach when influenza prevalence is 7.5% and influenza-attributable mortality is 1.05%. [32] As the prevalence of influenza and/or the severity of an outbreak increases, the incremental value of vaccination also increases. [32] Moreover, a 2006 study demonstrated the cost-effectiveness to the health sector of a single dose of influenza vaccination for influenza-like illness. [31] Therefore, patient education about the relative cost-effectiveness of the vaccine, and adequate reimbursement by governments, is required to alleviate this barrier in other nations, although not in Australia where the vaccination is free for all pregnant women.

A lack of vaccine storage facilities in physicians’ offices is an important barrier preventing the recommendation and uptake of the vaccine by pregnant women. [23,33] A recent study monitoring immunisation practices amongst practising obstetricians found that less than 30% store influenza vaccine in their office. [18] One study reported an acceptance rate of 71% among 448 eligible pregnant women who were offered the influenza vaccine at a routine prenatal visit, made possible by the availability of storage facilities at the practice, suggesting that the uptake of vaccination can be increased simply by overcoming logistical and organisational barriers such as vaccine storage, inadequate reimbursement and patient education. [34]

Conclusion

From the limited data available, it is clear that there is a variable level of knowledge of influenza and its prevention amongst HCWs, as well as a general lack of awareness of the national guidelines in their countries. However, there is no Australian literature to compare with other nations. There is some debate regarding the trimester in which the vaccine should be administered, and a further lack of clarity about who is responsible for the discussion and delivery of the vaccine – the general practitioner or the obstetrician. These factors contribute to a lack of discussion of vaccine use and amplify the number of ‘missed opportunities.’

Lack of maternal knowledge about the safety of the vaccine and its benefits is also a barrier that must be overcome by HCWs through facilitating an effective discussion about the vaccine. Since the vaccine is provided free in Australia, cost should not prevent vaccination. However, regular supply and storage of vaccines, especially in remote Australian towns, is likely to remain a logistical challenge.

There is limited Australian literature exploring the uptake of the influenza vaccine in pregnancy and contributing factors such as the knowledge, attitudes and opinions of HCWs, maternal knowledge of the vaccine, and logistical barriers. A reasonable first step would be to determine the rate of influenza vaccination uptake in antenatal women in Australia.

Conflict of interest

None declared.

Correspondence

S Khosla: surabhi.khosla@my.jcu.edu.au

 

Categories
Review Articles Articles

Spontaneous regression of cancer: A therapeutic role for pyrogenic infections?

Spontaneous regression of cancer is a phenomenon that is not well understood. While the mechanisms are unclear, it has been hypothesised that infections, fever and cancer are linked. Studies have shown that infections and fever may be involved in tumour regression and are associated with improved clinical outcomes. This article will examine the history of, evidence for, and future prospects of pyrogenic infections as an explanation for spontaneous regression, and how they may be applied to future cancer treatments.

Introduction

Spontaneous regression of cancer is a phenomenon that has been observed since antiquity. [1] It can be defined as a reversal or reduction of tumour growth in instances where treatment has been lacking or ineffectual. [2] Little is known about its mechanism, but two observations in cancer patients are of particular interest: first, infections have been shown to halt tumour progression; second, the development of fever has been associated with improved prognosis.

Until recently, fever and infections have been regarded as detrimental states that should be minimized or prevented. However, in the era preceding the use of antibiotics and antipyretics, such observations were common and formed the basis of crude yet strikingly effective immunologically based treatments. The promise of translating that success to modern cancer treatment is a tempting one and should be examined further.

History: Spontaneous Regression & Coley’s Toxins

Spontaneous regression of cancers was noted as early as the 13th century. The Italian Peregrine Laziosi was afflicted with painful leg ulcers which later developed into a massive cancerous growth. [3] The growth broke through the skin and became badly infected. Miraculously, the infection induced a complete regression of the tumour and surgery was no longer required. He later became the patron saint of cancer sufferers.

Reports associating infections with tumour regression continued to grow. In the 18th century, Trnka and Le Dran reported cases of breast cancer regression occurring after infection of the tumour site. [4,5] These cases were often accompanied by signs of inflammation, and fever and gangrene were common. [3]

In the 19th century, such observations became the basis of early clinical trials by physicians such as Tanchou and Cruveilhier. Although highly risky, they attempted to replicate the same conditions artificially by applying septic dressings to the wound or inoculating patients with pathogens such as the malaria parasite. [1] The results were often spectacular, and suddenly this rudimentary form of ‘immunotherapy’ seemed to offer a genuine alternative to surgery.

Until then, the only option for cancer was surgery, and outcomes were at times very disappointing. Dr. William Coley (a 19th century New York surgeon) related his anguish after his patient died despite radical surgery to remove a sarcoma of the right hand. [3] Frustrated by the limitations of surgery, he sought an alternative form of treatment and came across the work of the medical pioneers Busch and Fehleisen. They had earlier experimented with erysipelas, injecting or physically applying the causative pathogen, Streptococcus pyogenes, onto the tumour site. [6] This was often followed by a high fever which correlated with a concomitant decrease in tumour size in a number of patients. [3] Coley realized that using live pathogens was very risky, and he eventually modified the approach using a mixture of killed S. pyogenes and Serratia marcescens. [7] The latter potentiated the effects of S. pyogenes such that a febrile response could be induced safely without an ‘infection’, and this mixture became known as Coley’s toxins. [1]

A retrospective study in 1999 showed that there was no significant difference in cancer death risk between patients treated with Coley’s toxins and those treated with conventional therapies (i.e. chemotherapy, radiotherapy and surgery). [8] Data for the second group were obtained from the Surveillance, Epidemiology, and End Results (SEER) registry in the 1980s. [3] This observation is remarkable given that Coley’s toxins were developed at a fraction of the cost and resources afforded to current conventional therapies.

Researchers also realized that Coley’s toxins have broad applicability and are effective across cancers of mesodermal embryonic origin such as sarcomas, lymphomas and carcinomas. [7] One study comparing the five-year survival rates of patients with either inoperable sarcomas or carcinomas found that those treated with Coley’s toxins had a survival rate as high as 70-80%. [9]

Induction of a high-grade fever proved crucial to the success of this method. Patients with inoperable sarcoma who were treated with Coley’s toxins and developed a fever of 38-40 °C had a five-year survival rate three times higher than that of afebrile patients. [10] As cancer pain can be excruciating, pain relief is usually required. Upon administration of Coley’s toxins, an immediate and profound analgesic effect was often observed, allowing the discontinuation of narcotics. [9]

Successes related to ‘infection’-based therapies are not isolated. In the early 20th century, Nobel laureate Dr. Julius Wagner-Jauregg used tertian malaria injections in the treatment of neurosyphilis-induced dementia paralytica. [3] This approach relied on the induction of prolonged and high-grade fevers. Considering the high mortality rate of untreated patients in the pre-penicillin era, his remission rate of approximately one in two patients was impressive. [11]

More recently, the Bacillus Calmette-Guérin (BCG) vaccine has been used in the treatment of superficial bladder cancers. [12] BCG consists of live attenuated Mycobacterium bovis and is commonly used in tuberculosis vaccination. [12,13] Its anti-tumour effects are thought to involve a localized immune response stimulating production of inflammatory cytokines such as tumour necrosis factor α (TNF-α) and interferon γ (IFN-γ). [13] Similar to Coley’s toxins, it uses a bacterial formulation and requires regular localized administration over a prolonged period. BCG has been shown to reduce bladder cancer recurrence rates in nearly 70% of cases, and recent clinical trials suggest a possible role in colorectal cancer treatment. [14] From these examples, we see that infections or immunizations can have broad and effective therapeutic profiles.

Opportunities Lost: The End of Coley’s Toxins

After the early success of Coley’s toxins, momentum was lost when Coley died in 1936. The emergence of chemotherapy and radiotherapy overshadowed their development, while aseptic techniques gradually gained acceptance. After World War II, large-scale production of antibiotics and antipyretics also allowed better suppression of infections and fevers. [1] Opportunities for further clinical studies using Coley’s toxins were lost when, despite decades of use, they were classified as a new drug by the US Food and Drug Administration (FDA). [15] The tightening of regulations on clinical trials of new drugs after the thalidomide incidents of the 1960s meant that Coley’s toxins were highly unlikely to pass the stringent safety requirements. [3]

With fewer infections, spontaneous regressions became less common. An estimated yearly average of over twenty cases in the 1960s-80s decreased to fewer than ten cases in the 1990s. [16] It was gradually believed that the body’s immune system had a negligible role in tumour regression, and focus was placed on chemotherapy and radiotherapy. Despite initial promise, these therapies have not fulfilled their full potential, and effective treatment for certain cancers remains out of reach.

In a curious turn of events, advances in molecular engineering have now provided us with the tools to transform immunotherapy into a viable alternative. Coley’s toxins have provided the foundations for early immunotherapeutic approaches and may potentially contribute significantly to the success of future immunotherapy.

Immunological Basis of Pyrogenic Infections

The most successful cases treated by Coley’s toxins are attributed to: successful infection of the tumour, induction of a febrile response and daily intra-tumoural injections over a prolonged period.

Successful infection of tumour

Infection of tumour cells results in infiltration of lymphocytes and antigen-presenting cells (APCs) such as macrophages and dendritic cells (DCs). Binding of pathogen-associated molecular patterns (PAMPs) (e.g. lipopolysaccharides) to toll-like receptors (TLRs) on APCs induces activation and antigen presentation. The induction process also leads to the expression of important co-stimulatory molecules such as B7 and interleukin-12 (IL-12) required for optimal activation of B and T cells. [17] In some cases, pathogens such as the zoonotic vesicular stomatitis virus (VSV) have oncolytic properties and selectively lyse tumour cells to release antigens. [18]

Tumour regression or progression depends on the state of the immune system. A model of duality, in which the immune system performs either a defensive or a reparative role, has been proposed. [1,3] In the defensive mode, tumour regression occurs and immune cells are produced, activated and mobilized against the tumour. In the reparative mode, tumour progression is favoured and invasiveness is promoted via immunosuppressive cytokines, growth factors, matrix metalloproteinases and angiogenesis factors. [1,3]

The defensive mode may be activated by external stimuli during infections; this principle can be illustrated by the example of M1/M2 macrophages. M1 macrophages are involved in resistance against infections and tumours and produce pro-inflammatory cytokines such as IL-6, IL-12 and IL-23. [19,20] M2 macrophages promote tumour progression and produce anti-inflammatory cytokines such as IL-10 and IL-13. [19,20] M1 and M2 macrophage polarization is dependent on transcription factors such as interferon response factor 5 (IRF5). [21] Inflammatory stimuli such as bacterial lipopolysaccharides induce high levels of IRF5, which commits macrophages to the M1 lineage while also inhibiting expression of M2 macrophage markers. [21] This two-fold effect may be instrumental in facilitating a defensive mode.

Induction of febrile response

In Matzinger’s ‘danger’ hypothesis, the immune system responds to signals produced during distress known as danger signals, including inflammatory factors released from dying cells. [22] T cells remain anergic unless both danger signals and tumour antigens are provided. [23] A febrile response is advantageous as fever is thought to facilitate inflammatory factor production. Cancer cells are also more vulnerable to heat changes and elevated body temperature during fever may promote cell death and the massive release of tumour antigens. [24]

Besides a physical increase in temperature, fever encompasses profound physiological effects. An example of this is the induction of heat-shock protein (HSP) expression on tumour cells. [16] Studies have shown that Hsp70 expression on carcinoma cells promotes lysis by natural killer T (NKT) cells in vitro, while tumour expression of Hsp90 may play a key role in DC maturation. [25,26] Interestingly, HSPs also associate with tumour peptides to form immunogenic complexes involved in NK cell activation. [25] This is important since NK cells help overcome subversive strategies used by cancer cells to avoid T cell recognition. [27] Down-regulation of major histocompatibility complex (MHC) expression on cancer cells results in increased susceptibility to NK cell attack. [28] These observations show that fever is equally adept at stimulating innate and adaptive responses.

Route and duration of administration

The systemic circulation poses a number of obstacles to the successful delivery of infectious agents to the tumour site. Neutralization by pre-immune immunoglobulin M (IgM) antibodies and complement activation impedes pathogens. [18] Infectious agents may bind non-specifically to red blood cells and undergo sequestration by the reticuloendothelial system. [29] In the liver, specialized macrophages called Kupffer cells can also be activated by pathogen-induced TLR binding and cause inflammatory liver damage. [29] An intratumoural route therefore has the advantage of circumventing most of these obstacles, increasing the probability of successful infection. [18]

It is currently unclear whether innate or adaptive immunity is predominantly responsible for tumour regression. Coley observed that shrinkage often occurred hours after administration, whereas if daily injections were stopped, even for brief periods, the tumour continued to progress. [30] Innate immunity may therefore be important, and this is consistent with insights from vaccine development, in which adjuvants enhance vaccine effectiveness by targeting innate immune cells via TLR activation. [1]

Although T cell numbers in tumour infiltrates are substantial, tolerance is pervasive and attempts to target specific antigens have been difficult due to antigenic drift and heterogeneity of the tumour microenvironment. [31] A possible explanation for the disproportionality between T cell numbers and the anti-tumour response is that the predominant adaptive immune responses are humoral rather than cell-mediated. [32] Clinical and animal studies have shown that spontaneous regressions in response to pathogens like malaria and Aspergillus are mainly antibody mediated. [3] Further research will be required to determine if this is the case for most infections.

Both innate and adaptive immunity are probably important at specific stages, with sequential induction holding the key to tumour regression. In acute inflammation, innate immunity is usually activated optimally, and this in turn induces efficient adaptive responses. [33] Conversely, chronic inflammation involves a detrimental positive feedback loop that acts reversibly and over-activates innate immune cells. [34] Instability of these immune responses can result in suboptimal anti-tumour responses.

Non-immune considerations and constructing the full picture

Non-immune mechanisms may be partly responsible for tumour regression. Oestrogen is required for tumour progression in certain breast cancers, and attempts to block its receptors with tamoxifen have proved successful. [35] It is likely that natural disturbances in hormone production may inhibit cancerous growth and promote regression in hormone-dependent malignancies. [36]

Genetic instability has also been mentioned as a possible mechanism. In neuroblastoma patients, telomere shortening and low levels of telomerase have been associated with tumour regression. [37] This may be due to the fact that telomerase activity is required for cell immortality. Other potential considerations may include stress, hypoxia and apoptosis but these are not within the scope of this review. [38]

As non-immune factors tend to relate to specific subsets of cancers, they are unlikely to explain tumour regression as a whole. They may instead serve as secondary mechanisms that support a primary immunological mechanism. During tumour progression, these non-immune factors may either malfunction or become the target of subversive strategies.

A simplified outline of the possible role of pyrogenic infections in tumour kinetics is illustrated below (Figure 1).

Discussion

The intimate link between infections, fever and spontaneous regression is slowly being recognized. While the incidence of spontaneous regression is steadily decreasing due to circumstances in the modern clinical se

Categories
Review Articles Articles

The therapeutic potentials of cannabis in the treatment of neuropathic pain and issues surrounding its dependence

Cannabis is a promising therapeutic agent, which may be particularly beneficial in providing adequate analgesia to patients with neuropathic pain intractable to typical pharmacotherapy. Cannabinoids are the lipid-soluble compounds that mediate the analgesic effects associated with cannabis by interacting with the endogenous cannabinoid receptors CB1 and CB2, which are distributed along neurons associated with pain transmission. Of the 60 different cannabinoids found in cannabis plants, delta-9-tetrahydrocannabinol (THC) and cannabidiol are the most important with regard to analgesic properties. Whilst cannabinoids are effective in diminishing pain responses, their therapeutic use is limited by psychotropic side effects mediated via CB1, which may lead to cannabis dependence. Cannabinoid ligands also interact with glycine receptors and selectively with CB2 receptors, and act synergistically with opioids and non-steroidal anti-inflammatory drugs (NSAIDs) to attenuate pain signals; these interactions may be of therapeutic potential because they do not produce psychotropic effects. Clinical trials of cannabinoids in neuropathic pain have shown efficacy in providing analgesia; however, the small number of participants involved in these trials has greatly limited their significance. Although the medicinal use of cannabis is legal in Canada and some parts of the United States, its use as a therapeutic agent in Australia is not permitted. This paper will review the role cannabinoids play in providing analgesia, the pharmacokinetics associated with various routes of administration, and the dependence issues that may arise from its use.

Introduction

Compounds in plants have long been found to be beneficial and now contribute to many of the world’s modern medicines. Delta-9-tetrahydrocannabinol (THC), the main psychoactive cannabinoid derived from cannabis plants, mediates its analgesic effects by acting at both central and peripheral cannabinoid receptors. [1] The analgesic properties of cannabis were first observed by Ernest Dixon in 1899, who discovered that dogs failed to react to pin pricks following the inhalation of cannabis smoke. [2] Since that time, there has been extensive research into the analgesic properties of cannabis, including whole-plant and synthetic cannabinoid studies. [3-5]

Although the use of medicinal cannabis is legal in Canada and parts of the United States, every Australian jurisdiction currently prohibits its use. [6] Despite this, Australians lead the world in the illegal use of cannabis for both medicinal and recreational reasons. [7]

Although the analgesic properties of cannabis could be beneficial in treating neuropathic pain, the use of cannabis in Australia is a controversial, widely debated subject. The issue of dependence on cannabis arising from medicinal use is of concern to both medical and legal authorities. This review aims to discuss the pharmacology of cannabinoids as it relates to analgesia, as well as the dependence issues that may arise from the use of cannabis.

Medicinal cannabis can be of particular benefit in the treatment of neuropathic pain that is intractable to the typical agents used, such as tricyclic antidepressants, anticonvulsants and opioids. [3,8] Neuropathic pain arises from disease affecting the somatosensory nervous system and causes pain that is unrelated to peripheral tissue injury; treatment options are limited. The prevalence of chronic pain in Australia has been estimated at 20% of the population, [9] with neuropathic pain estimated to affect up to 7% of the population. [10]

The role of cannabinoids in analgesia

Active compounds found in cannabis

Cannabis contains over 60 cannabinoids, with THC being the quintessential mediator of analgesia and the main psychoactive constituent found in cannabis plants. [11] Another cannabinoid, cannabidiol, also has analgesic properties; however, instead of interacting with cannabinoid receptors, its analgesic action is attributed to inhibition of anandamide degradation. [11] Anandamide is the most abundant endogenous cannabinoid in the CNS and acts as an agonist at cannabinoid receptors. By inhibiting the breakdown of anandamide, cannabidiol prolongs its time in the synapse and perpetuates its analgesic effects.

Cannabinoid and Vanilloid receptors

Distributed throughout the nociceptive pathway, cannabinoid receptors are a potential target for the administration of exogenous cannabinoids to suppress pain. Two known types of cannabinoid receptors, CB1 and CB2, are involved in pain transmission. [12] The CB1 cannabinoid receptor is highly expressed in the CNS as well as in peripheral tissues, and is responsible for the psychotropic effects produced by cannabis. There is debate regarding the location of the CB2 cannabinoid receptor, which was previously thought to be largely restricted to peripheral immune cells. [12-13] Recent studies, however, suggest that CB2 receptors may also be found on neurons. [12-13] The CB2 metabotropic G-protein-coupled receptors are negatively coupled to adenylate cyclase and positively coupled to mitogen-activated protein kinase. [14] The cannabinoid receptors are also coupled to pre-synaptic voltage-gated calcium channel inhibition and inward-rectifying potassium channel activation, thus depressing neuronal excitability, eliciting an inhibitory effect on neurotransmitter release and subsequently decreasing pain transmission. [14]

Certain cannabinoids have targets other than cannabinoid receptors through which they mediate their analgesic properties. Cannabidiol can act at vanilloid receptors, where capsaicin is active, to produce analgesia. [15] Recent studies have found that cannabinoids administered to mice act synergistically with the response to glycine, an inhibitory neurotransmitter, which may contribute to their analgesic effects. Analgesia was absent in mice that lacked glycine receptors, but not in those lacking cannabinoid receptors, indicating an important role for glycine in the analgesic effect of cannabis. [16] In this study, modifications were made to the compound to enhance binding to glycine receptors and diminish binding to cannabinoid receptors, an approach that may be of therapeutic potential for achieving analgesia without psychotropic side effects. [16]

Mechanism of action in producing analgesia and side effects

Cannabinoid receptors also play an important role in the descending inhibitory pathways via the midbrain periaqueductal grey (PAG) and the rostral ventromedial medulla (RVM). [17] Pain signals are conveyed by primary afferent nociceptive fibres to the brain via ascending pain pathways that synapse in the dorsal horn of the spinal cord. The descending inhibitory pathway modulates pain transmission in the spinal cord and medullary dorsal horn via the PAG and RVM before noxious stimuli reach a supraspinal level and are interpreted as pain. [17] Cannabinoids activate the descending inhibitory pathway via gamma-aminobutyric acid (GABA)-mediated disinhibition, decreasing GABAergic inhibition and enhancing the impulses responsible for the inhibition of pain; this is similar to opioid-mediated analgesia. [17]

Cannabinoid receptors, in particular CB1, are distributed throughout the cortex, hippocampus, amygdala, basal ganglia outflow tracts and cerebellum, which corresponds to the capacity of cannabis to produce motor and cognitive impairment. [18] These deleterious side effects limit the therapeutic use of cannabinoids as analgesics. Since ligands binding to CB1 receptors are responsible for mediating the psychotropic effects of cannabis, studies have examined the effectiveness of CB2 agonists, which were found to attenuate neuropathic pain without producing CB1-mediated CNS side effects. The discovery of a suitable CB2 agonist may therefore be of therapeutic potential. [19]

Synergism with commonly used analgesics

Cannabinoids are also important in acting synergistically with non-steroidal anti-inflammatory drugs (NSAIDs) and opioids to produce analgesia; cannabis could thus be of benefit as an adjuvant to typical analgesics. [20] A major central target of NSAIDs and opioids is the descending inhibitory pathway. [20] The analgesia produced by NSAIDs through their action on the descending inhibitory pathway requires simultaneous activation of the CB1 cannabinoid receptor. In the presence of an opioid antagonist, cannabinoids remain effective analgesics; whilst cannabinoids do not act via opioid receptors, cannabinoids and opioids show synergistic activity. [20] Similarly, Telleria-Diaz et al. reported that the analgesic effects of non-opioid analgesics, primarily indomethacin, in the spinal cord can be prevented by a CB1 receptor antagonist, again highlighting synergism between the two classes of agents. [21] Although no controlled studies in pain management have used cannabinoids with opioids, anecdotal evidence suggests synergistic benefits in analgesia, particularly in patients with neuropathic pain. [20] Whilst the interaction between opioids, NSAIDs and cannabinoids is poorly understood, numerous studies suggest that they act in a synergistic manner in the PAG and RVM via GABA-mediated disinhibition to enhance the descending flow of impulses that inhibit pain transmission. [20]

Route of Administration

Clinical trials of cannabis as an analgesic in neuropathic pain have shown that it reduces the intensity of pain. [5,22] The most common route of administration of medicinal cannabis is inhalation via smoking. Two randomised clinical trials assessing smoked cannabis showed that patients with HIV-associated neuropathic pain achieved significantly greater reductions in pain intensity (34% and 46%) compared to placebo (17% and 18% respectively). [5,22] One of the studies comprised participants whose pain was intractable to first-line analgesics used in neuropathic pain, such as tricyclic antidepressants and anticonvulsants. [22] The number needed to treat (NNT = 3.5) was comparable to agents already in use (gabapentin: NNT = 3.8; lamotrigine: NNT = 5.4). [22] All of the studies undertaken on smoked cannabis have been short-term studies and do not address the long-term risks of cannabis smoking. An important benefit associated with smoking cannabis is that its pharmacokinetic profile is superior to that of orally ingested cannabinoids. [23] After smoking one cannabis cigarette, peak plasma levels of THC are reached within 3-10 minutes and, owing to its lipid solubility, levels quickly decrease as THC is rapidly distributed throughout the tissues. [23] While the bioavailability of inhaled THC is much higher than that of oral preparations, which undergo first-pass metabolism, the obvious harmful effects associated with smoking warranted the study of other means of inhalation, such as vapourisation. In medicinal cannabis therapy, vapourisation may be less harmful than smoking as the cannabis is heated below the point of combustion at which carcinogens are formed. [24] A recent study found that the transition from smoking to vapourising in cannabis smokers improved lung function measurements; following the study, participants refused to participate in a reverse design in which they would return to smoking. [24]
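As a rough check of the NNT quoted above (an illustrative calculation only, using the response rates reported for the second trial), the NNT follows directly from the absolute risk reduction (ARR):

\[
\text{ARR} = 0.46 - 0.18 = 0.28, \qquad \text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.28} \approx 3.6,
\]

which is consistent with the published value of 3.5 once rounding of the underlying response rates is taken into account.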

Studies undertaken on the efficacy of an oro-mucosal cannabinoid preparation (Sativex) showed a 30% reduction in pain as opposed to placebo, with an NNT of 8.6. [4] Studies comparing an oral cannabinoid preparation (nabilone) to dihydrocodeine in neuropathic pain found that dihydrocodeine was the more effective analgesic. [25] The effects of THC from ingested cannabinoids last for 4-12 hours, with a peak plasma concentration at 2-3 hours. [26] The effects of oral cannabinoids are variable because of first-pass metabolism, in which significant amounts of cannabinoids are metabolized by cytochrome P450 mixed-function oxidases, mainly CYP 2C9. [26] First-pass metabolism is very high, and the bioavailability of THC is only 6% for ingested cannabis, as opposed to 20% for inhaled cannabis. [26] The elimination of cannabinoids occurs via the faeces (65%) and urine (25%), with a clinical study showing that 90% of the total dose was excreted after five days. [26]
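To put these bioavailability figures in perspective, the systemic exposure from equal administered doses can be compared directly (the 10 mg dose below is purely illustrative and is not taken from the cited studies):

\[
\text{systemic dose} = F \times \text{administered dose}: \quad 0.20 \times 10\ \text{mg} = 2\ \text{mg (inhaled)} \quad \text{versus} \quad 0.06 \times 10\ \text{mg} = 0.6\ \text{mg (oral)},
\]

so, on these figures, an oral preparation would need to contain roughly \(0.20/0.06 \approx 3.3\) times the inhaled dose to deliver a comparable amount of THC systemically, before accounting for the slower oral absorption described above.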

The issue of cannabis dependence

One of the barriers to the use of medicinal cannabis is the controversy regarding cannabis dependence and the adverse effects associated with chronic use. Cannabis dependence is a highly controversial but important topic, as dependence may increase the risk of adverse effects associated with chronic use. [27] Adverse effects resulting from long-term use of cannabis include short-term memory impairment, mental health problems and, if smoked, respiratory diseases. [28] Some authors report that cannabis dependence and the adverse effects that follow cessation are observed only in non-medical cannabis users, while others report that dependence is an issue for all cannabis users, whether use is medicinal or not. An Australian study assessing cannabis use and dependence found that one in 50 Australians had a DSM-IV cannabis use disorder, predominantly cannabis dependence. [27] It also found that cannabis dependence was the third most common lifetime substance dependence diagnosis, following tobacco and alcohol dependence. [27] Cannabis dependence can develop; however, the evidence on risk factors for dependence comes predominantly from studies of recreational users, as opposed to medicinal users under medical supervision. [29]

A diagnosis of cannabis dependence, according to the DSM-IV, is made when three of the following seven criteria are met within the preceding twelve months: tolerance; withdrawal symptoms; cannabis used in larger amounts or for a longer period than intended; persistent desire or unsuccessful efforts to reduce or cease use; a disproportionate amount of time spent obtaining, using and recovering from use; social, recreational or occupational activities reduced or given up because of cannabis use; and use continued despite knowledge of physical or psychological problems induced by cannabis. [29] Unfortunately, understanding of cannabis dependence arising from medicinal use is limited by the lack of studies in this context. Behavioural therapies may be of use; however, their efficacy is variable. [30] A recent clinical trial indicated that orally administered THC was effective in alleviating cannabis withdrawal, analogous to other well-established agonist therapies such as nicotine replacement and methadone. [30]

The pharmacokinetic profile also affects cannabis dependence. Studies suggest that the risk of dependence is marginally greater with the oral use of isolated THC than with the oral use of combined THC-cannabidiol. [31] This is important because whole cannabis plants contain many different cannabinoids, and cannabidiol may counteract some of the adverse effects of THC; however, more studies are required to support this claim. [31]

The risk of cannabis dependence in the context of long-term, supervised medical use is not known. [31] However, some authors believe that the pharmacokinetic profiles of preparations used for medicinal purposes differ from those used recreationally, and that the risks of dependence and chronic adverse effects therefore differ greatly between the two. [32]

Conclusion

Cannabis appears to be an effective analgesic and provides an alternative to the analgesic pharmacotherapies currently used in the treatment of neuropathic pain. Cannabis may be of particular use in neuropathic pain that is intractable to other pharmacotherapy. The issue of dependence and adverse effects arising from medicinal cannabis use (including short-term memory impairment, mental health problems and, if smoked, respiratory diseases) is highly debated, and more research needs to be undertaken. The ability of cannabinoids to modulate pain transmission by enhancing the activity of descending inhibitory pathways, and to act synergistically with opioids and NSAIDs, is important as it may decrease the therapeutic doses of opioids and NSAIDs required, thus decreasing the likelihood of side effects. The possibility of a cannabinoid-derived compound with analgesic properties free of psychotropic effects is appealing, and its discovery could lead to a less controversial and more suitable analgesic in the future.

Conflict of interest

None declared.

Correspondence

S Sargent: stephaniesargent@mail.com