Editorial

Freedom of information

Early last year, a David and Goliath battle raged between the most unlikely of foes. The gripes of a single blog post inspired a group of disaffected mathematicians and scientists to join forces and boycott the world’s largest publisher of scientific journals, Elsevier. Their movement, dubbed “Academic Spring”, was a response to the company’s political backing of the Research Works Act, a proposed bill in the United States (US) aimed at denying public access to scientific research funded by the US National Institutes of Health (NIH). Although the bill was drafted solely to benefit the interests of publishing companies, Elsevier reneged on its support following months of escalating protests and scathing publicity. Though the bill never saw the light of day, the struggle that unfolded was symptomatic of a deeper and more pervasive conflict between academics and publishers, one thrown into sharp relief by the rise of online publishing.

Since the publication of the first scholarly journal in 1665, journals have played an integral role in the scientific process. [1] As vanguards of modern-day science, they have been an enduring and authoritative source of the latest research and developments. Academics are a key ingredient in the output and success of journals. Not only are they responsible for generating content, but they also volunteer as peer-reviewers for submissions relevant to their field of expertise and as mediators of the editorial process, a peculiar arrangement that plays into the hands of publishers. Before the arrival of the internet, journals facilitated the quick and widespread exchange of information throughout the scientific world. Publishers performed services including proofing, formatting, copyediting, printing, and worldwide distribution. [1] The digital age, however, rendered many of these tasks redundant and allowed publishers to dramatically reduce their costs. [1] Publishers also used the opportunity to offload further responsibilities, such as formatting and most copyediting, onto the shoulders of academics, significantly increasing profits despite playing a limited role in the journal’s overall production.

The changing landscape of scientific publishing has seen commercial publishing firms acquire the lion’s share of the market from not-for-profit scientific societies over the last few decades. [2] The resulting monopolistic stranglehold has led to exorbitant subscription fees for access to their treasury of knowledge. Profit margins have hovered between 30 and 40 percent for over a decade, due in part to subscription prices outpacing inflation by seven percent per annum. [3] Moreover, publishers have exploited the practice of offering journal subscriptions in bundles, rather than on an individual-needs basis, a crucial ploy underlying their profits. [4] Long-standing price increases, accompanied by dwindling library budgets, have gravely hampered the ability of libraries, universities, and investigators to acquire the most up-to-date publications necessary for research and education. [4] The total expenditure on serials by Australian university libraries in 2010 was a staggering AU$180 million. [5] Even the most affluent libraries, such as Harvard’s, have declared the situation untenable and are resorting to subscription cuts. [3]
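
To put the compounding effect in perspective, consider a worked example (an illustration, not a figure from the cited sources): a subscription price growing seven percent faster than inflation doubles in real terms roughly every decade, since

\[ 1.07^{n} = 2 \quad\Rightarrow\quad n = \frac{\ln 2}{\ln 1.07} \approx 10.2 \text{ years.} \]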

Along with cost, the principle of access for clinicians, scientists, and the general public alike underscores the ensuing debate. There is little argument that the accessibility of scientific findings is critical to the advancement of scientific progress. Consequently, the great paywalls of publishing houses have fostered an environment that slows the translation of science to the bedside and stifles medical innovation. Peer-reviewed literature is often funded by taxpayer-supported government grants; in Australia and New Zealand, over 80% of research and development is funded by the public purse. [6] In effect, governments have been held to ransom by firms privatizing the profits accruing to publicly financed knowledge. The barriers of access and cost also extend to developing nations. Without access to reliable medical literature, efforts to develop sustainable health care systems in these regions are severely undermined.

Researchers are equally culpable for their current plight. Typically, works of intellectual property warrant financial remuneration. However, writing for impact rather than payment has become both intrinsic and unique to academic journals, a paradigm dating from centuries past, when journals could not pay authors for their work. [3] The impact factor, a proxy measure developed by commercial publishers, reflects an academic journal’s visibility in a given year. It is calculated as the number of citations received in that year by articles the journal published in the two preceding years, divided by the total number of articles published in those two years. [7] The higher the impact factor of a journal, the greater its clout and influence. The importance placed on impact factor has become ingrained in the collective psyche of academia. Academics are competitively assessed on their publication record in scientific journals to secure grants and advance their careers. Inevitably, researchers have become servile to an archaic system that serves only the interests of commercial publishers.
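
Expressed as a formula (the standard two-year definition; the notation is mine):

\[ \mathrm{IF}_{y} = \frac{C_{y}}{N_{y-1} + N_{y-2}} \]

where C_y is the number of citations received in year y by articles published in the two preceding years, and N_{y-1} and N_{y-2} are the numbers of articles published in those years. For example, a journal that published 200 articles across 2010 and 2011, and whose articles attracted 500 citations in 2012, would have a 2012 impact factor of 500/200 = 2.5.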

Open access (OA) represents a new business model in the academic journal industry, underpinned by the growth and reach of the internet. It provides unfettered online access to research material, as well as the right to copy and redistribute it without restriction. [1] OA uses two channels of distribution: the “gold” and the “green” paths. [1] The “gold” path publishes articles in freely available OA journals that maintain peer review to preserve their academic reputations. The Public Library of Science (PLoS) and BioMed Central (BMC) are leading examples of OA publishers. The “green” path requires authors to self-archive their work in an online repository, available free of charge to the public. [1] Table 1 highlights some of the differences between traditional and OA journals.

OA offers many advantages over traditional journal publishing. Evidence shows that OA has substantially increased the amount of scholarly work available to all, regardless of economic status or institutional affiliation, increasing the probability of research being read and, accordingly, cited. [8] OA can integrate new technological approaches such as text mining, collaborative filtering, and semantic indexing, and has the potential to encourage new research methodologies. [8] A significant bone of contention with traditional journals has been the need for authors to relinquish copyright of their material. OA allows authors to retain copyright, and provides readers and other authors with the rights to re-use, re-publish, and, in some cases, create derivatives of their work. [8] Furthermore, OA bridges both the digital and physical divide between the developing and developed worlds, mitigating some of the barriers scientists in low-income countries face in publishing their work. Institutional repositories and OA publication fee waivers have been instrumental in raising their research profile on the international stage by removing the burden of cost. [9]

Despite offering free access to readers, OA has attracted its share of criticism. Traditional publishing firms, among its fiercest opponents, contend that OA journals shift the cost of production from consumer to author, with fees ranging from $1,000 to $5,000 per article. [3] In levelling this critique, commercial firms overlook the fact that they, too, foist publication fees onto authors, which may even exceed the costs of OA journals. [1,3] Publication costs are now a common element in grant applications, and authors incur minimal to no charge. Inevitably, ethical concerns also arise from the OA model. The author-pays model may compromise the peer-review process as journals become financially dependent on researchers to publish articles. However, these concerns have been assuaged in recent years by the growing number of high-quality OA journals that employ peer review as robust as that of their subscription counterparts. [1] The “green” route also poses problems for authors who may not possess the technical capabilities or resources to self-archive articles.

OA is the fastest growing business model for academic journals, and is likely to remain sustainable in the long term. Many OA journals are now highly trusted, referenced, indexed, and well received. Support for OA has been bolstered by the evolving mandates of research funding agencies, including Australia’s National Health and Medical Research Council (NHMRC), the United Kingdom’s Wellcome Trust, and the NIH, which now require research funded by their grants to be made freely available within a year of initial publication. [7,10] Major data aggregators, including PubMed and OVID, are also facilitating this trend by releasing OA databases and platforms dedicated to OA material. [11] Estimates project that 60 percent of all journal content will be published in OA journals by 2019. [11] Moreover, OA journals are rapidly approaching the scientific impact and quality of subscription journals, particularly in the field of biomedicine, as suggested by one study. [7] Many have opined that OA could redefine measures of impact, using additional metrics such as numbers of downloads, bookmarks, tweets, and Facebook likes. Proponents of OA have turned their attention to how corporations such as drug and chemical companies, which benefit from free access while contributing only a small share of articles and fees, can support its efforts.

The advent of the internet has created a realm of possibilities for some and a minefield of challenges for others. Journals have navigated such obstacles for centuries, embracing new opportunities and adapting to change. Although the internet has effectively transformed publishers into “de facto” gatekeepers of their lucrative commodity, it has also been the impetus behind the OA revolution, proving to be a more cost-effective and equitable alternative to traditional publishing. But while OA continues to develop into the mainstay of journal publishing, perhaps its most immediate impact will be to diversify competition and precipitate a cultural change within the industry that sees science re-emerge at the forefront of its interests.

Conflict of interest
None declared.

Correspondence
S Chatterjee: s.chatterjee@amsj.org

References

[1] Albert KM. Open access: Implications for scholarly publishing and medical libraries. J Med Libr Assoc. 2006 Jul;94(3):253-62.

[2] Jha A. Academic spring: How an angry maths blog sparked a scientific revolution. The Guardian. 2012 Apr 9.

[3] Owens S. Is the academic publishing industry on the verge of disruption? U.S. News and World Report. 2012 Jul 23.

[4] Taylor MP. Opinion: Academic publishing is broken. The Scientist. 2012 Mar 19.

[5] Australian higher education statistics [Internet]. Council of Australian University Librarians; 2009 [updated 2012 Nov 29; cited 2013 Mar 5]. Available from: http://www.caul.edu.au/caul-programs/caul-statistics/auststats.

[6] Soos P. The great publishing swindle: The high price of academic knowledge. The Conversation. 2012 May 3.

[7] Björk BC, Solomon D. Open access versus subscription journals: A comparison of scientific impact. BMC Med. 2012;10:73.

[8] Wilbanks J. Another reason for opening access to research. BMJ. 2006;333:1306.

[9] Chan L, Arunachalam S, Kirsop B. Open access: A giant leap towards bridging health inequities. Bull World Health Organ. 2009;87:631-5.

[10] Dissemination of research findings [Internet]. National Health and Medical Research Council; 2012 Feb 12 [updated 2013 Jan 25; cited 2013 Mar 4]. Available from: http://www.nhmrc.gov.au/grants/policy/disseminationresearch-findings

[11] Rohrich RJ, Sullivan D. Trends in medical publishing: Where the publishing industry is going. Plast Reconstr Surg. 2012;131(1):179-81.

Case Report

Blood culture negative endocarditis – a suggested diagnostic approach

This case report describes a previously healthy male patient with a subacute presentation of severe constitutional symptoms, progressing to acute pulmonary oedema, and a subsequent diagnosis of blood culture negative endocarditis with severe aortic regurgitation. Blood culture negative endocarditis represents an epidemiologically varying subset of endocarditis patients, as well as a unique diagnostic dilemma. The cornerstones of diagnosis lie in careful clinical assessment and exposure history, as well as knowledge of common aetiologies and appropriate investigations. The issues of clinically informed judgement and a systematic approach to the diagnosis of these patients, especially within an Australian context, are discussed. Aetiological diagnosis modifies and directs treatment, which is fundamental in minimising the high morbidity and mortality associated with endocarditis.

Case

Mr NP was a previously healthy, 47-year-old Caucasian male who presented to a small metropolitan emergency department with two days of severe, progressive dyspnoea, subsequently diagnosed as acute pulmonary oedema (APO). This occurred on a three-month background of dry cough, malaise, lethargy and an unintentional weight loss of 10 kilograms.

History

Apart from the aforementioned, Mr NP’s history of the presenting complaint was unremarkable. In the preceding three months he had been treated in the community for pertussis and atypical pneumonia, with no significant improvement. Notably, this therapy included two courses of antibiotics (the patient could not recall the specifics), with the latest course completed the week prior to admission. He had no relevant past medical or family history, specifically denying a history of tuberculosis, malignancy, and heart and lung disease. He took no current medications and had no known allergies; he denied intravenous or other recreational drug use, reported minimal alcohol use, and had never smoked.

Mr NP lived in suburban Melbourne with his wife and children. He kept two healthy dogs at home. There had been no sick contacts and no obvious animal or occupational exposures, although he noted that he occasionally stopped cattle trucks on the highway as part of his occupation, without direct contact with the cattle. He had travelled to Auckland, New Zealand, for two weeks, two months prior to presentation. There were no stopovers, notable exposures or travel elsewhere in the country.

During the initial assessment of Mr NP’s acute pulmonary oedema, blood cultures were drawn, with a note made of the oral antibiotics taken during the preceding week. A transthoracic echocardiogram (TTE) found moderate aortic regurgitation with left ventricular dilatation. A subsequent transoesophageal echocardiogram (TOE) noted severe aortic regurgitation, a one-centimetre vegetation on the aortic valve with destruction of the coronary leaflet, and left ventricular dilatation with a preserved ejection fraction of greater than 50%. Blood cultures, held for 21 days, revealed no growth.

Empirical antibiotics were started and Mr NP was transferred to a large quaternary hospital for further assessment and aortic valve replacement surgery.

Table 1. A suggested schema for assessing exposures to infectious diseases during the clinical history, illustrated with the commonly used CHOCOLATES mnemonic.

Exposure Assessment Schema: CHOCOLATES mnemonic
Country of origin

Household environment

Occupation

Contacts

Other: Immunisations, intravenous drug use, immunosuppression, splenectomy, etc.

Leisure activities/hobbies

Animal exposures

Travel and prophylaxis prior

Eating and drinking

Sexual contact


Examination

Examination of Mr NP, after transfer and admission, showed an alert man, pale but with warm extremities, with no signs of shock or sepsis. Vital signs revealed a temperature of 36.2°C, heart rate of 88 beats per minute, blood pressure of 152/50 mmHg (wide pulse pressure of 102 mmHg) and respiratory rate of 18 breaths per minute, saturating at 99% on room air.

No peripheral stigmata of endocarditis were noted, and there was no lymphadenopathy. Examination of the heart and lungs noted a loud diastolic murmur through the entire precordium, which increased with full expiration, but was otherwise normal with no signs of pulmonary oedema. His abdomen was soft and non-tender with no organomegaly noted.

Workup and Progress

Table 2 shows relevant investigations and results from Mr NP.

Table 2. Relevant investigation results for Mr NP, performed for further assessment of blood culture negative endocarditis.

Blood cultures
Repeat blood cultures x 3 (on antibiotics): no growth to date; held for 21 days

Autoimmune
Rheumatoid factor: weak positive, 16 [N <11]
ANA: negative
ENA: negative

Serology
Q fever: Phase I negative; Phase II negative
Bartonella: negative
Atypical organisms (Legionella, Mycoplasma): negative

Valve tissue (post AVR)
Histopathology: non-specific chronic inflammation and fibrosis
Tissue microscopy and culture: Gram-positive cocci seen; no growth to date
16S rRNA: Streptococcus mitis
18S rRNA: negative

AVR – Aortic valve replacement; ANA – Antinuclear antibodies; ENA – Extractable nuclear antigens

Empirical antibiotics for culture negative endocarditis were initiated during the initial presentation and were continued after transfer and admission:

Benzylpenicillin for streptococci and enterococci

Doxycycline for atypical organisms and zoonoses

Ceftriaxone for HACEK organisms

Vancomycin for staphylococci and resistant Gram-positive bacteria.

During his admission, doxycycline was ceased after negative serology and microscopy identified Gram-positive cocci. Benzylpenicillin was changed to ampicillin after a possible allergic rash. Ceftriaxone, ampicillin and vancomycin were continued until the final 16S rRNA result from valvular tissue identified Streptococcus mitis, a viridans group streptococcus.

The patient underwent a successful aortic valve replacement (AVR) and was routinely admitted to the intensive care unit (ICU) after cardiac surgery. He developed acute renal failure, most likely due to acute tubular necrosis from a combination of bacteraemia, angiogram contrast, vancomycin, and the stresses of surgery and bypass. Renal function gradually returned after resolution of the contributing factors, without the need to cease vancomycin, and Mr NP was discharged to the ward on day six of his ICU stay.

Mr NP improved clinically, with a declining white cell count and a return to normal renal function. He was discharged with Hospital in the Home for continued outpatient IV vancomycin, for a combined total duration of four weeks, and for follow-up review in clinic.

Discussion

There is an old medical adage that “persistent bacteraemia is the sine qua non of endovascular infection.” The corollary is that persistently positive blood cultures are a sign of infection within the vascular system. In most clinical situations this is either primary bacteraemia or infective endocarditis, although other interesting but less common differentials exist (e.g. septic thrombophlebitis/Lemierre’s syndrome, septic aneurysms, aortitis). Consequently, blood culture negative endocarditis (BCNE) is both an oxymoron and a unique clinical scenario.

BCNE can be strictly defined as endocarditis (as per the Duke criteria) without a known aetiology after three separate blood cultures show no growth after at least seven days, [1] although less rigid definitions have been used throughout the literature. The incidence is approximately 2-7% of endocarditis cases, although it can be as high as 31%, depending on factors such as regional epidemiology, prior antibiotic administration and the definition of BCNE used. [1-3] Importantly, the morbidity and mortality associated with endocarditis remain high despite multiple advances, and early diagnosis and treatment remain fundamental. [1,4,5]

The most common reason for BCNE is prior antibiotic treatment before blood culture collection, [1-3] as was the case with Mr NP. Additional associated factors for BCNE include exposure to zoonotic agents, underlying valvular disease, right-sided endocarditis and presence of a pacemaker. [1,3]

Figure 1 shows the aetiology of BCNE; Table 3 lists the clinical associations and epidemiology of common organisms that may be identified during assessment. Notably, there is a high prevalence of zoonotic infections, and a large proportion of cases remain unidentified. [2] Additionally, the incidence of typical endocarditis organisms, which in most cases have been suppressed by prior antibiotic use, is comparatively high. [2]

Table 3. Common aetiologies in BCNE and associated clinical features and epidemiology. [1,2,5-9]

Aetiology – Clinical associations and epidemiology

Q fever (Coxiella burnetii) – Zoonosis: contact with farm animals (commonly cattle, sheep, and goats); farmers, abattoir workers, veterinarians, etc. Check for vaccination in these high-risk groups.
Bartonella spp. – Zoonosis: contact with cats (B henselae); transmitted by lice, associated with poor hygiene and homelessness (B quintana).
Mycoplasma spp. – Ubiquitous; droplet spread from person to person, increased with crowding. Usually causes asymptomatic or respiratory illness, rarely endocarditis.
Legionella spp. – Usually L pneumophila; L longbeachae common in Australia. Environmental exposure through drinking or inhalation; colonises warm water and soil sediments. Cooling towers, air conditioners, etc. help aerosolise the bacteria. Urinary antigen detects only L pneumophila serogroup 1. Usually respiratory illness, rarely endocarditis.
Tropheryma whipplei – Associated with soil, animal and sewage exposures. Wide spectrum of clinical manifestations; the causative organism of Whipple’s disease (a malabsorptive diarrhoeal illness).
Fungi – Usually Candida spp., normal GIT flora. Associated with candidaemia, HIV/immunosuppression, intravascular device infections, IVDU, prosthetic valves, ICU admission, parenteral feeding and broad-spectrum antibiotic use. Associated with larger valvular vegetations.
HACEK organisms* – Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella spp. Fastidious Gram-negative rods; normal flora of the mouth and upper GI tract. Associated with poor dentition and dental work, and with larger valvular vegetations.
Streptococcus viridans group* – Umbrella term for alpha-haemolytic streptococci commonly found as mouth flora. Associated with poor dentition and dental work.
Streptococcus bovis* – Associated with breaches of the colonic mucosa: colorectal carcinoma, inflammatory bowel disease and colonoscopy.
Staphylococcus aureus* – Normal skin flora. Associated with IVDU, intravascular device infections and post-operative valve infections.

IVDU – Intravenous drug user; GIT – Gastrointestinal tract.

* Traditional IE organisms. In most BCNE cases where these usual IE bacteria are isolated, antibiotics were given before cultures were taken. [1-3]

The HACEK organisms (Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella) are fastidious (i.e. difficult to grow) Gram-negative oral flora. Consequently, and as a general principle for other fastidious organisms, these slow-growing organisms tend to produce both more subacute presentations and larger vegetations at presentation. They have traditionally been associated with culture negative endocarditis, but advances in microbiological techniques mean that the majority can now be cultured within five days, and they account for a low proportion of true BCNE. [1]

Q fever is of particular importance as it is both the most commonly identified aetiology of BCNE and an important offender in Australia, given the large presence of primary industry and the consequent potential for exposure. [1-3,6] Q fever is caused by the Gram-negative obligate intracellular bacterium Coxiella burnetii (named after the Australian Nobel laureate Sir Frank Macfarlane Burnet), and is associated in particular with various farm animal exposures (see Table 3). The manifestations of this condition are variable and nonspecific, and the key to diagnosis often lies in an appropriate index of suspicion and an exposure history. [6] In addition, Q fever is a very uncommon cause of BCNE in northern Europe and the UK, and patient exposures in this region may be less significant. [1,2,6]

The clinical syndrome is separated into acute and chronic Q fever. This differentiation is important for two reasons: firstly, Q fever endocarditis is a manifestation of chronic, not acute, Q fever; and secondly, it has implications for serological testing. [6] Q fever serology is the most common diagnostic method used, and is separated into Phase II (acute Q fever) and Phase I (chronic Q fever) serologies. Accordingly, to investigate Q fever endocarditis, Phase I serology must be performed. [6]

Given the high incidence of zoonotic aetiologies, the modified Duke criteria class a positive blood culture or serology for Q fever as a major criterion for the diagnosis of endocarditis. [10] However, Lamas and Eykyn [3] found that even with these modifications the Duke criteria remain a poor predictor of BCNE, identifying only 32% of their pathologically proven endocarditis patients. Consequently, they suggest the addition of minor criteria to improve sensitivity, making particular note of rapid-onset splenomegaly or clubbing, which can occur especially in patients with zoonotic BCNE. [3]

Figure 2 outlines the suggested diagnostic approach, modified from the original detailed by Fournier et al. [2] The initial steps are aimed at high-incidence aetiologies and at ruling out non-infectious causes, with stepwise progression to less common causes. Additionally, testing of valvular tissue, where available, plays a valuable role in aiding diagnosis. [1,2,11,12]

16S ribosomal RNA (rRNA) gene sequence analysis and 18S rRNA gene sequence analysis are broad-range PCR tests that amplify genetic material that may be present in a sample. Specifically, they target sections of the rRNA gene that are highly conserved against mutation yet vary between species, allowing an organism to be identified. Once a genetic sequence has been obtained, it is compared against a library of known sequences to identify the organism, if listed. 16S analysis identifies prokaryotic bacteria; 18S is the eukaryotic, fungal equivalent. These tests can play a fundamental role in identifying the aetiology where cultures are unsuccessful, although they must be interpreted with caution and clinical judgement, as their high sensitivity makes them susceptible to contamination and false positives. [11-13] Importantly, antibiotic sensitivity testing cannot be performed on these results, as no living microorganism is isolated. This may necessitate broader-spectrum antibiotics to cover potential unknown resistance, as demonstrated by the choice of vancomycin in the case of Mr NP.
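
To make the library-matching step concrete, the following is a minimal sketch in Python (illustrative only: real diagnostic pipelines use validated alignment tools such as BLAST against curated 16S databases, and the reference sequences, query and identity threshold below are placeholders):

    # Illustrative sketch of the library-matching step in 16S rRNA identification.
    # The reference sequences here are short placeholders, not real 16S data.

    REFERENCE_LIBRARY = {
        "Streptococcus mitis":   "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGC",
        "Staphylococcus aureus": "AGAGTTTGATCATGGCTCAGATTGAACGCTGGCGGC",
    }

    def percent_identity(query: str, reference: str) -> float:
        """Fraction of matching positions over the shorter sequence."""
        n = min(len(query), len(reference))
        matches = sum(q == r for q, r in zip(query[:n], reference[:n]))
        return matches / n

    def identify(query: str, threshold: float = 0.97):
        """Return the best-matching species if identity exceeds the threshold.

        A ~97% identity cut-off is a commonly quoted heuristic for
        species-level 16S assignment; below it, no confident
        identification is reported.
        """
        best_species, best_score = None, 0.0
        for species, reference in REFERENCE_LIBRARY.items():
            score = percent_identity(query, reference)
            if score > best_score:
                best_species, best_score = species, score
        return (best_species, best_score) if best_score >= threshold else (None, best_score)

    print(identify("AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGC"))  # ('Streptococcus mitis', 1.0)

In practice the interpretive caveats above still apply: a high-identity match only identifies what was amplified, which is why contamination control and clinical correlation remain essential.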

The best use of 16S and 18S rRNA testing in the diagnosis of BCNE is on valvular tissue; testing of blood is not very effective and is not widely performed. [2,11,13] Notwithstanding, 18S rRNA testing on blood may be appropriate in certain situations where first-line BCNE investigations are negative and fungal aetiologies become much more likely. [2] This can be prudent given that most empirical treatment regimens do not include fungal cover.

Fournier et al. [2] suggested the use of the SeptiFast multiplex PCR (F Hoffmann-La Roche Ltd, Switzerland), a PCR kit designed to identify 25 common bacteria often implicated in sepsis, in patients who have had prior antibiotic administration. Although studies have shown its usefulness in this context, it has been excluded from Figure 2 because, to the best of the author’s knowledge, it is not a commonly used test in Australia. The original diagnostic approach from Fournier et al. [2] identified an aetiology in 64.6% of cases, with the remainder being of unknown aetiology.

Conclusion

BCNE represents a unique and interesting, although uncommon, clinical scenario. Knowledge of the common aetiologies and appropriate testing underpins the timely and effective diagnosis of this condition, which in turn modifies and directs treatment. This is especially important due to the high morbidity and mortality rate of endocarditis and the unique spectrum of aetiological organisms which may not be covered by empirical treatment.

Acknowledgements

The author would like to thank Dr Adam Jenney and Dr Iain Abbott for their advice regarding this case.

Consent declaration

Informed consent was obtained from the patient for publication of this case report and accompanying figures.

Conflict of interest

None declared.

Correspondence

S Khan: sadid.khan@gmail.com

References

[1] Raoult D, Sexton DJ. Culture negative endocarditis. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[2] Fournier PE, Thuny F, Richet H, Lepidi H, Casalta JP, Arzouni JP, et al. Comprehensive diagnostic strategy for blood culture negative endocarditis: a prospective study of 819 new cases. Clin Infect Dis. 2010;51(2):131-40.
[3] Lamas CC, Eykyn SJ. Blood culture negative endocarditis: analysis of 63 cases presenting over 25 years. Heart. 2003;89:258-62.
[4] Wallace SM, Walton BI, Kharbanda RK, Hardy R, Wilson AP, Swanton RH. Mortality from infective endocarditis: clinical predictors of outcome.
[5] Sexton DJ. Epidemiology, risk factors & microbiology of infective endocarditis. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[6] Fournier PE, Marrie TJ, Raoult D. Diagnosis of Q fever. J Clin Microbiol. 1998;36(7):1823.
[7] Apstein MD, Schneider T. Whipple’s Disease. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[8] Baum SG. Mycoplasma pneumonia infection in adults. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[9] Pedro-Botet ML, Stout JE, Yu VL. Epidemiology and pathogenesis of Legionella infection. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[10] Li JS, Sexton DJ, Mick N, Nettles R, Fowler VG Jr, Ryan T, et al. Proposed modifications to the Duke criteria for the diagnosis of infective endocarditis. Clin Infect Dis. 2000;30:633-8.
[11] Vondracek M, Sartipy U, Aufwerber E, Julander I, Lindblom D, Westling K. 16S rDNA sequencing of valve tissue improves microbiological diagnosis in surgically treated patients with infective endocarditis. J Infect. 2011;62(6):472-8.
[12] Houpikian P, Raoult D. Diagnostic methods: Current best practices and guidelines for identification of difficult-to-culture pathogens in infective endocarditis. Infect Dis Clin North Am. 2002;16:377-92.
[13] Muñoz P, Bouza E, Marín M, Alcalá L, Créixems MR, Valerio M, Pinto A. Heart Valves Should Not Be Routinely Cultured. J Clin Microbiol. 2008; 46(9):2897.

Feature Article

Putting awareness to bed: improving depth of anaesthesia monitoring

Intraoperative awareness and subsequent explicit recall can cause prolonged psychological damage in patients. Several methods are currently used to prevent this potentially traumatic phenomenon: monitoring haemodynamic changes in the patient, measuring volatile anaesthetic concentration, and applying electroencephalographic algorithms that correlate with a particular level of consciousness. Unfortunately, none of these methods is without limitations.

Introduction

Intraoperative awareness is defined by both consciousness and explicit memory of surgical events. [1] A number of risk factors, both surgical and patient-related, predispose patients to this phenomenon. Procedures in which the anaesthetic dose is low, such as caesarean sections, trauma and cardiac surgery, have been associated with a higher incidence. Likewise, low cardiac reserve and resistance to some agents are prominent patient-related factors. [2] A small number of cases are also due to a lack of anaesthetist vigilance, with administration of incorrect drugs or failure to recognize equipment malfunction. [2] Ultimately, it is largely an iatrogenic complication caused by administration of inadequate levels of anaesthetic drugs. Most cases of awareness are inconsequential, with patients not experiencing pain but rather having auditory recall of the experience, which is usually not distressing. [3] In some cases, however, patients experience and recall pain, which can have disastrous, long-term consequences. Awareness is strongly associated with post-operative psychosomatic dysfunction, including depression and post-traumatic stress disorder, [4] and is a major medico-legal liability. Though awareness is infrequent, estimated to occur in 1-2 cases per 1000 patients undergoing general anaesthesia in developed countries, [1] the sequelae of experiencing such an event necessitate the development and implementation of a highly sensitive monitoring system to prevent it.

Measuring depth of anaesthesia

1. Monitoring clinical signs

Adequate depth of anaesthesia occurs when the administration of anaesthetic agents is sufficient to allow conduct of the surgery whilst ensuring the patient is unconscious. There are both subjective and objective methods of monitoring this depth. [5] Subjective methods rely primarily on the patient’s autonomic response to a nociceptive stimulus. [5] Signs such as hypertension, tachycardia, sweating, lacrimation and mydriasis indicate a possible lightening of anaesthesia. [5] Such signs, however, are not specific, as they can result from other causes of haemodynamic change, such as haemorrhage. Additionally, patient body habitus, autonomic tone and medications (in particular beta-adrenergic blockers and calcium channel antagonists) can also affect the patient’s haemodynamics. [5] Consequently, the patient’s autonomic response is a poor indicator of depth of anaesthesia; [6] the presence of haemodynamic change in response to a surgical incision does not indicate awareness, nor does the absence of an autonomic response exclude it. [5]

Patient movement remains an important sign of inadequate depth of anaesthesia; however, it is often suppressed by the administration of neuromuscular blocking drugs. [1] This paralysis can be overcome with the ‘isolated forearm technique’, in which a tourniquet is placed on an arm prior to administration of a muscle relaxant and inflated above systolic pressure, excluding the effect of the relaxant and preserving neuromuscular function in that limb. The patient is then instructed to move the arm during the surgery if they begin to feel pain. [5] Though this technique is effective in monitoring depth of anaesthesia, it has not been adopted into routine clinical practice. [7] Furthermore, patient movement and autonomic signs may reflect the analgesic rather than the hypnotic component of anaesthesia, and thus are not an accurate measure of consciousness. [8]

2. Minimum Alveolar Concentration (MAC)

The unreliable nature of subjective methods for assessing depth of anaesthesia has seen the development and implementation of various objective methods that rely on the sensitivity of monitors. The measurement of end-tidal volatile anaesthetic agent concentration, expressed as a fraction of the MAC, has become a standard component of modern anaesthetic regimens. MAC is defined as the concentration of inhaled anaesthetic required to prevent 50% of subjects from responding to a noxious stimulus. [9] It is recommended that at least 0.5 MAC of volatile anaesthetic be administered to reliably prevent intra-operative awareness. [10]
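
In pharmacological terms, MAC is the ED50 of a quantal dose-response curve. One common way to model such a curve (an illustrative Hill-type equation, not taken from the cited sources) is

\[ P(\text{no movement} \mid C) = \frac{C^{\gamma}}{C^{\gamma} + \mathrm{MAC}^{\gamma}}, \]

where C is the end-tidal anaesthetic concentration and γ sets the steepness of the curve; by construction, P = 0.5 when C = MAC, and for the steep slopes typical of volatile agents the probability of movement becomes small by around 1.2-1.3 MAC.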

Unfortunately, the MAC is affected by a number of factors, making it difficult to determine a concentration that will reliably prevent awareness. Patient age is the major determinant of the amount of inhalational anaesthetic required, as are altered physiological states such as pregnancy, anaemia, alcoholism, hypoxaemia and abnormal body temperature. [11] Most importantly, the administration of opioids and ketamine, both commonly included in the anaesthetic regimen, severely curtails the ability of MAC-based monitoring to reflect anaesthetic requirements. [12] Further, the MAC reflects inhalational anaesthetic concentration, not effect. The suppression of the response to noxious stimuli under volatile anaesthesia is mediated largely through the spinal cord, and thus does not accurately reflect cortical function or the penetration of the anaesthetic into the brain. [13] Another major limitation of gas analysers is their limited reliability when intravenous anaesthesia is used. Simultaneous administration of intravenous anaesthetic agents is extremely common, and in many cases total intravenous anaesthesia is used; in such cases the MAC is not applicable.

3. Electroencephalogram (EEG) and derived indices

Bispectral Index (BIS)

Advances in technology have led to the concomitant development of processed electroencephalographic modalities and their use as parameters for assessing depth of anaesthesia, the most widely used being the BIS monitor. The BIS monitor uses algorithmic analysis of a patient’s EEG to produce a single number from 0 to 100, which correlates with a particular level of consciousness. [5,14] For general anaesthesia, a value of 40-60 is recommended. [14] The establishment of this monitor at first seemed promising, with the publication of several studies advocating its use in preventing awareness. The first of these, conducted by Ekman et al, [15] indeed found a substantial decrease in the incidence of awareness when the BIS monitor was used. In this study, however, patients were not randomly allocated to the control and BIS monitoring groups, so the results are subject to a high degree of bias and cannot be reliably interpreted. The second study, the B-Aware trial, [16] also found that BIS-guided anaesthesia reduced awareness in high-risk patients; however, despite its sound study design, subsequent studies failed to reproduce this result. One prominent study, the B-Unaware trial, [17] compared BIS monitoring with more traditional analysis of end-tidal concentrations of anaesthetic gases for assessing depth of anaesthesia during surgery on high-risk patients. This study failed to show a significant reduction in the incidence of awareness using BIS monitoring; however, a major criticism of this study is that the criteria used to classify trial patients as ‘high-risk’ were less stringent than those used in the B-Aware trial, which likely biased the results. Also, given the low incidence of awareness, a larger number of study subjects would be required to demonstrate any significant reduction.

The BIS monitor also has several practical issues that further question its efficacy in monitoring consciousness. It is subject to electrical interference from the theatre environment, particularly from electromyography, diathermy and direct vibration. [14] This is more likely where the surgical field is near the BIS electrode (such as surgery involving the facial muscles), which will falsely elevate BIS values, leading to possible excess administration of anaesthesia. [14] As with the MAC, standard BIS scores are not applicable to all patient populations, particularly patients with abnormal EEGs: those with dementia, head injuries, previous cardiac arrest, or hypo- or hyperthermia. [1] In such cases, the BIS value may underestimate the depth of anaesthesia, leading to the administration of excess anaesthetic and a deeper level of anaesthesia than required. Further, as the molecular actions of various anaesthetic agents differ, the consequent EEG changes are not uniform. Specifically, the BIS monitor cannot accurately assess changes in consciousness when the patient is administered ketamine [18] or nitrous oxide, [19] both commonly used agents.

Despite these practical shortcomings, the BIS monitor has substantial benefits that should be incorporated into future depth of anaesthesia monitors. The BIS monitor helps anaesthetists titrate the correct dosage of anaesthetic for the patient, [5] and to adjust it throughout the surgery to keep the patient within the recommended range for general anaesthesia without administering excess agent. This results in decreased haemodynamic disturbance, faster recovery times and reduced post-operative side effects. [20] A meta-analysis found that BIS monitoring significantly reduced anaesthetic consumption by 10%, reduced the incidence of nausea and vomiting by 23%, and reduced time in the recovery room by four minutes. [21] This may offer a cost benefit, as less anaesthetic is required during surgery.

Despite the aforesaid advantages of using the MAC and BIS monitor to assess consciousness during surgery, the major inadequacy of both methods is that they measure only the hypnotic element of anaesthesia. [8] Anaesthetic depth is in fact a complex construct of several components, including hypnosis, analgesia, amnesia and reflex suppression. [8] Different anaesthetic agents have varying effects across these areas; some can be administered alone, whilst others act in only one area and thus must be used in conjunction with other pharmacologic agents to achieve anaesthesia. [8] If only the hypnotic component of anaesthesia is monitored, optimal drug delivery is difficult and insufficient analgesia may go unnoticed. Thus the MAC and BIS monitors can be used to monitor hypnosis and sedation, but have little role in predicting the quality of analgesia or patient movement mediated by spinal reflexes.

Entropy

Entropy monitoring is based on the acquisition and processing of EEG and electromyogram (EMG) signals using the entropy algorithm. [22] It relies on the concept that the irregularity within an EEG signal decreases as the anaesthetic concentration in the brain rises. As with the BIS, the signal is captured via a sensor mounted on the patient’s forehead, and the monitor produces two numbers between 0 and 100: the response entropy (RE) and the state entropy (SE). The RE incorporates higher-frequency components (including EMG activity), allowing a faster response from the monitor in relation to clinical state. [22] Numbers close to 100 suggest consciousness, whereas numbers close to 0 indicate a very deep level of anaesthesia. The ideal values for general anaesthesia lie between 40 and 60. [22] Studies have shown that entropy monitoring measures the level of consciousness as reliably as the BIS, and is subject to less electrical interference during the intraoperative period. [23]
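
As a rough illustration of the underlying idea, the sketch below computes a normalised spectral entropy for an EEG segment (a simplified, single-band Python version; the commercial algorithm additionally uses multiple frequency bands, sliding time windows and EMG handling, and maps the result onto its 0-100 scale):

    # Simplified spectral entropy of an EEG segment, scaled to 0..1.
    # A regular (deeply anaesthetised) EEG concentrates power in few
    # frequencies -> low entropy; an irregular awake EEG spreads power
    # across many frequencies -> entropy near 1.
    import numpy as np

    def spectral_entropy(signal, fs, f_lo=0.8, f_hi=32.0):
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        power = np.abs(np.fft.rfft(signal)) ** 2
        p = power[(freqs >= f_lo) & (freqs <= f_hi)]
        p = p / p.sum()                      # normalise to a probability distribution
        h = -np.sum(p * np.log(p + 1e-12))   # Shannon entropy
        return h / np.log(len(p))            # divide by the maximum possible entropy

    fs = 128.0
    t = np.arange(0, 4, 1 / fs)
    print(spectral_entropy(np.random.randn(t.size), fs))     # white noise: close to 1
    print(spectral_entropy(np.sin(2 * np.pi * 10 * t), fs))  # pure 10 Hz sine: close to 0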

Evoked potentials

Alternative mechanisms such as evoked potentials, which monitor the electrical potential of nerves following a stimulus, have also demonstrated a clear dose-response relationship with increasing anaesthetic administration. [14,24] In particular, auditory evoked potentials (in which the response to auditory canal stimulation is recorded) have led to the development of the auditory evoked potential index. This index has been shown to have greater sensitivity than the BIS monitor in detecting unconsciousness. [24] Unfortunately, using evoked potentials to monitor depth of anaesthesia is a complex process, and as with the BIS, many artifacts can interfere with the EEG reading. [14,24]

Brain Anaesthesia Response (BAR) Monitor

New electroencephalographically derived algorithms have been developed that characterise the patient’s hypnotic and analgesic states individually. [25,26] This is essential in cases where combinations of anaesthetic agents with separate sedative and analgesic properties are used. Dr David Liley, Associate Professor at the Brain Sciences Institute, Swinburne University of Technology, began a research project a decade ago with the aim of producing such a means of assessing consciousness, and subsequently pioneered the Brain Anaesthesia Response (BAR) monitor. [25] Liley initially analysed EEG data from 45 patients in Belgium who were administered both propofol (a hypnotic agent) and remifentanil (an analgesic agent) as part of their anaesthetic regimen. Two measures were derived from the EEG to gauge the brain’s response to the anaesthetic agents: cortical state (which measures brain responsiveness to stimuli) and cortical input (which quantifies the strength of each stimulus that reaches the brain). He was able to detect the effects of the drugs separately: cortical state reflected changes due to hypnotic agents, while cortical input reflected changes in the level of analgesia. From this, the BAR algorithm was developed. [25] Its use will allow anaesthetists to determine which class of drug needs adjustment, and to titrate it accordingly. It is suggested that the BAR monitor will narrow the range of the exclusion criteria that limit previously mentioned indices such as the BIS and entropy. [25,26] This innovative monitor has an improved ability to detect the effects of a number of drugs that are not effectively measured using the BIS monitor, for example ketamine and nitrous oxide. [25] The capacity to titrate anaesthetics specifically and accurately would improve drug delivery, not only reducing the likelihood of intra-operative awareness but also avoiding over- or under-sedation. This in turn might reduce the side effects associated with excess anaesthetic administration and improve post-operative recovery. The BAR monitor is currently undergoing trials at the Royal Melbourne Hospital under Professor Kate Leslie, and at St Vincent’s Hospital in Melbourne under Dr Desmond McGlade. [25,26]

Though advancements have undoubtedly been made in regards to depth of anaesthesia monitors, it cannot be emphasized enough that the most important monitor of all is the anaesthetist themselves. A significant percentage of awareness cases are caused by drug error or equipment malfunction. [2,27] These cases can easily be prevented by adhering to strict practice guidelines, such as those published by the Australian and New Zealand College of Anaesthetists. [28]

Conclusion

Measuring depth of anaesthesia to prevent intra-operative awareness remains a highly contentious aspect of modern anaesthesia. Current parameters for monitoring consciousness include the observation of clinical signs, the MAC and BIS indices, and less commonly used methods such as evoked potentials and entropy. These instruments allow clinicians to titrate anaesthetic agents accurately, decreasing post-operative side effects and reducing awareness among patients at increased risk of this complication. Despite these benefits, all current monitors have limitations, and there is still no completely reliable method of preventing this potentially traumatising event. What is required now is a parameter that shows minimal inter-patient variability and the capacity to respond consistently to an array of anaesthetic drugs with different molecular formulations. It is important to remember, however, that no monitor can replace the role of the anaesthetist in preventing awareness.

Conflict of interest

None declared.

Correspondence

L Kostos: lkkos1@student.monash.edu

References

[1] Mashour GA, Orser BA, Avidan MS. Intraoperative awareness: From neurobiology to clinical practice. Anesthesiology 2011;114(5):1218-33.
[2] Ghoneim MM, Block RI, Haffarnan M, Mathews MJ. Awareness during anaesthesia: risk factors, causes and sequelae: a review of reported cases in the literature. Anesth Analg 2009; 108:527-35.
[3] Orser BA, Mazer CD, Baker AJ. Awareness during anaesthesia. CMAJ 2008; 178:185–8.
[4] Osterman JE, Hopper J, Heran WJ, Keane TM, Van der Kolk BA. Awareness under anaesthesia and the development of posttraumatic stress disorder. Gen Hosp Psychiatry 2001; 23:198-204.
[5] Kaul HL, Bharti N. Monitoring depth of anaesthesia. Indian J. Anesth 2002;46(4):323-32.
[6] Struys MM, Jensen EW, Smith W, Smith NT, Rampil I, Dumortier FJ et al. Performance of the ARX-derived auditory evoked potential index as an indicator of anesthetic depth: a comparison with bispectral index and hemodynamic measures using propofol administration. Anesthesiology 2002;96:803-16.
[7] Bruhn J, Myles P, Sneyd R, Struys M. Depth of anaesthesia monitoring: what’s available, what’s validated and what’s next? Br J Anaesth 2006; 97:85-94.
[8] Myles PS. Prevention of awareness during anaesthesia. Best Pract Res Clin Anesthesiol 2007; 21(3):345-55.
[9] Eger EI 2nd, Saidman IJ, Brandstater B. Minimum alveolar anaesthetic concentration: a standard of anaesthetic potency. Anesthesiology 1965; 26:756-63.
[10] Eger EI 2nd, Sonner JM. How likely is awareness during anaesthesia? Anesth Analg 2005;100:1544.
[11] Eger EI 2nd. Age, minimum alveolar anesthetic concentration and the minimum alveolar anesthetic concentration-awake. Anesth Analg 2001;93:947-53.
[12] Nost R, Thiel-Ritter A, Scholz S, Hempelmann G, Muller M. Balanced anesthesia with remifentanil and desflurane: clinical considerations for dose adjustment in adults. J Opioid Manag 2008;4:305-9.
[13] Rampil IJ, Mason P, Singh H. Anesthetic potency (MAC) is independent of forebrain structures in the rat. Anesthesiology 1993;78:707-12.
[14] Morimoto Y. Usefulness of electroencephalographic monitoring during general anaesthesia. J Anaesth 2008;22:498-501.
[15] Ekman A, Lindholm ML, Lennmarken C, Sandin R. Reduction in the incidence of awareness using BIS monitoring. Acta Anaesthesiol Scand 2004;48:20-6.
[16] Myles PS, Leslie K, McNeil J, Forbes A, Chan MT. Bispectral index monitoring to prevent awareness during anaesthesia : the B-Aware randomised controlled trial. Lancet 2004;363:1757-63.
[17] Avidan M, Shang L, Burnside BA, Finkel KJ, Searleman AC, Selvidge JA et al. Anesthesia awareness and the bispectral index. N Engl J Med 2008;358:1097-1108.
[18] Morioka N, Ozaki M, Matsukawa T, Sessler D, Atarashi K, Suzuki H. Ketamine causes a paradoxical increase in the Bispectral index. Anesthesiology 1997;87:502.
[19] Puri GD. Paradoxical changes in bispectral index during nitrous oxide administration. Br J Anesth 2001;86:141-2.
[20] Sebel PS, Rampil I, Cork R, White P, Smith NT, Brull S, et al. Bispectral analysis for monitoring anaesthesia – a multicentre study. Anesthesiology 1993;79:178.
[21] Liu SS. Effects of bispectral index monitoring on ambulatory anaesthesia: a meta-analysis of randomized controlled trials and a cost analysis. Anesthesiology 2004;101:591-602.
[22] Bein B. Entropy. Best Pract Res Clin Anaesthesiol 2006;20:101-9.
[23] Baulig W, Seifert B, Schmid E, Schwarz U. Comparison of spectral entropy and bispectral index electroencephalography in coronary artery bypass graft surgery. J Cardiothorac Vasc Anesth 2010;24:544-9.
[24] Gajraj RJ, Doi M, Mantzaridis H, Kenny GNC. Analysis of the EEG bispectrum, auditory evoked potentials and EEG power spectrum during repeated transitions from consciousness to unconsciousness. Br J Anaesth 1998;80:46-52.
[25] Thoo M. Brain monitor puts patients at ease. Swinburne Magazine 2011 Mar 17;6-7.
[26] Breeze D. Ethics approval obtained for BAR monitor trial in Melbourne. Cortical Dynamics Ltd; 2011 Nov. p. 1.
[27] Orser B, Mazer C, Baker A. Awareness during anaesthesia. CMAJ 2008;178(2):185–8.
[28] Australian and New Zealand College of Anaesthetists. Guidelines on checking anaesthesia delivery systems [document on the Internet]. Melbourne; 2012 [cited 2012 Sep 20]. Available from ANZCA: http://www.anzca.edu.au

Feature Article

Is there a role for end-of-life care pathways for patients in the home setting who are supported with community palliative care services?

The concept of a “good death” has developed immensely over the past few decades and we now recognise the important role of palliative care services in healthcare for the dying, our most vulnerable population. [1-3] In palliative care, end-of-life care pathways have been developed to transfer the gold standard hospice model of care for the dying to other settings, addressing the physical, psychosocial and practical issues surrounding death. [1,4] Currently, these frameworks are used in hospitals and residential aged-care facilities across Australia. [1] However, there is great potential for these pathways to be introduced into the home setting with support from community palliative care services. This could help facilitate a good death for these patients in the comfort of their own home, and also support their families through the grieving process.

Although there is no single definition of a “good death”, many studies have examined the factors considered important at the end of life by patients and their families. Current literature acknowledges that terminally ill patients highly value adequate pain and symptom management, avoidance of prolongation of dying, preparation for the end of life, relieving the burden imposed on their loved ones, spirituality, and strengthening relationships with health professionals through acknowledgement of imminent death. [2] Interestingly, the Steinhauser study noted a substantial disparity in views on spirituality between physicians and patients. [3] Physicians ranked good symptom control as most important, whilst patients considered spiritual issues to hold equal significance. These studies highlight the individual nature of end-of-life care, which reflects why the holistic approach of palliative care can improve the quality of care provided.

It is recognised that patients with life-limiting illnesses have complex needs that often require a multidisciplinary approach with multiple care providers. [1] However, an increased number of team members creates its own challenges, and despite the best intentions, care can become fragmented due to poor interdisciplinary communication. [5] This can lead to substandard end-of-life care, with patients suffering prolonged and painful deaths and receiving unwanted, expensive and invasive care, as demonstrated by the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). [6] Temel et al. also demonstrated that palliative care can improve the documentation of advance care directives. [7] For terminally ill patients, this is essential in clarifying patients’ wishes regarding the end of life and enabling them to be respected.

In 2010, Temel et al. conducted a randomised controlled trial in patients with newly diagnosed metastatic non-small-cell lung cancer, comparing early palliative care combined with standard oncologic therapy against standard oncologic therapy alone. [7] The results demonstrated that the palliative care intervention improved quality of life and reduced rates of depression, consistent with existing literature. [7] Furthermore, despite receiving less aggressive end-of-life care, patients who received early palliative care had a significant prolongation of life, averaging 2.7 months (p = 0.02). [7] This 30% survival benefit is equivalent to that achieved with a response to standard chemotherapy regimens, which has profound significance for patients with metastatic disease. [7] This study thereby validates the benefits of early palliative care intervention in oncology patients. In addition, early palliative intervention encourages advance care planning, allowing treating teams to elicit and acknowledge patient preferences regarding end-of-life care.

Many physicians find it difficult to discuss poor prognoses with patients, potentially leaving patients and their families unaware of their terminal condition despite death being anticipated by the treating team. [1,4] Many health care professionals are uncomfortable discussing death and dying, citing lack of training and fear of upsetting the patient. [8] Regardless, patients are entitled to be informed and supported through this difficult time. In addition, terminal patients and their caregivers are often neglected in decisions about their care, [9] despite their fundamental legal and ethical right to be involved, and studies indicate that they often want to be included in such discussions. [1,10,11] With the multitude of patient values and preferences for care, it can be difficult to standardise the care provided. End-of-life care pathways encourage discussion of prognosis, facilitating communication that allows patients’ needs to be identified and addressed systematically and collaboratively. [1]

End-of-life care pathways provide a systematic approach and a standardised level of care for patients in the terminal phase of their illness. [1] This framework includes documentation of discussion with the patient and carers of the multidisciplinary consensus that death is now imminent and life-prolonging treatment is futile, and also provides management strategies to address the individual needs of the dying. There is limited evidence to support the use of end-of-life care pathways; however, the substantial anecdotal benefits cannot be discounted. [1,12] The lack of high-quality studies indicates a need for further research. [1,12] When used in conjunction with clinical judgment, these pathways can lead to benefits such as improved symptom control, earlier acknowledgement of terminal prognosis by the patient and family, prescription of medications for end-of-life care, and aiding the grieving process for relatives. [1,12,13] As such, end-of-life care pathways are highly regarded in palliative care, transferring the benchmarked hospice model of care of the dying into other settings, [14] and have been widely implemented nationally and internationally. [1]

The most recognised and commonly used end-of-life care pathway is the Liverpool Care Pathway (LCP), which was developed in the United Kingdom to transfer the hospice model of care for the dying to other care settings. [13,15] It has been implemented into hospices, hospitals and aged care facilities, and addresses the physical, psychosocial and spiritual needs of these patients. [1,13,15] In 2008, Verbeek et al. examined the effect of the LCP pre- and post-implementation on patients from hospital, aged care and home settings. [13] Results demonstrated improved documentation and reduced symptom burden as assessed by nurses and relatives, in comparison with the baseline period. [13] Although increased documentation does not necessarily equate to better care, high-quality medical records are essential to facilitate communication between team members and ensure quality care is provided. In this study, staff also reported that they felt the LCP provided a structure to patient care, assisted the anticipation of problems, and promoted proactive management of patient comfort. [13] The LCP has significantly increased the awareness of good terminal care, and has provided a model for the end-of-life care pathways currently in use in hospitals and institutions throughout Australia. [1,4]

Community palliative care services support terminally ill patients at home in order to maintain a high quality of life. Recognising the holistic principles of palliative care, these multidisciplinary teams provide medical and nursing care, counselling, spiritual support and welfare support. In the Brumley trial, which evaluated an in-home palliative care intervention with a multidisciplinary team for homebound terminally ill patients, the intervention group had greater satisfaction with care, were less likely to visit the emergency department, and were more likely to die in the comfort of their own home. [16] These results suggest that the community palliative care team provided a high standard of care in which symptoms were well managed and did not require more aggressive intervention. This prevented unnecessary emergency presentations and potential distress for the patient and family, and allowed better use of resources. This study demonstrates that community palliative care services can significantly improve the quality of care for patients living at home with life-limiting illnesses; however, there is still scope for improvement in the current healthcare system.

End-of-life care pathways are regarded as best practice in guiding care for patients where death is imminent. [1] In Australia, a number of these frameworks have been implemented in hospitals and aged-care facilities, demonstrating an improvement in the quality of care in these settings. However, many terminally ill patients choose to reside in the comfort of their own home, supported by community palliative care services. End-of-life care pathways support a high standard of care, which should be available to all patients, irrespective of where they choose to die. As such, there may be a role for end-of-life care pathways in the home setting, supported by community palliative care services. Introducing already-implemented local end-of-life care pathways into the community has great potential to reap similar benefits. Initially, these frameworks would be implemented by the community palliative care team; however, caregivers could be educated and empowered to participate in the ongoing care. This could be a useful means of facilitating communication between treating team members and family, and could also empower the patient and family to become more involved in their care.

The potential benefits of implementing end-of-life care pathways into community palliative care services include those currently demonstrated in the hospital and aged-care settings, but there are potentially further positive effects. By introducing these frameworks into the homes of terminally ill patients, caregivers can be encouraged to take a more active role in the care of their loved ones. This indirect education of the patient and family can provide a sense of empowerment and assist them to make informed decisions. Additional potential benefits of these pathways include a reduction in the number of hospital admissions and emergency department presentations, which would ease the pressures on our already overburdened acute care services. Empowered family and carers could also assist with monitoring, providing regular updates to the community palliative care team, potentially leading to earlier detection of when more specialised care is required. The documentation within the pathways could also allow for a smoother transition to hospices if required, and prevent unnecessary prolongation of death. This may prevent significant emotional distress for the patient and family in an already difficult time, and promote more effective use of limited hospital resources. Integrating end-of-life care pathways into community palliative care services has many potential benefits for patients at home with terminal illnesses, and should be considered as an option to improve the delivery of care.

Palliative care can significantly improve the quality of care provided to patients in the terminal phase, and this care can be guided by end-of-life care pathways. Evidence indicates that these pathways encourage a multidisciplinary change in practice that facilitates a “good death” and supports the family through the bereavement period. In the community, this framework has the potential to empower patients and their caregivers and assist them to make informed decisions regarding their end-of-life care, thereby preventing unwanted aggressive intervention and unnecessary prolongation of death. However, further high-quality studies are needed to validate the anecdotal benefits of these pathways, with potential for a randomised controlled trial investigating the use of end-of-life care pathways in the home setting in Australia. In conclusion, the introduction of end-of-life care pathways into community palliative care services has great potential, particularly if supported and used in conjunction with specialist palliative care teams.

Acknowledgements

I would like to acknowledge Dr Leeroy William from McCulloch House, Monash Medical Centre for his support in developing this proposal, and Andrew Yap for his editorial assistance.

Conflicts of interest

None declared.

Correspondence

A Vo: amanda.vo@southernhealth.org.au

Categories
Review Articles Articles

Suxamethonium versus rocuronium in rapid sequence induction: Dispelling the common myths

Rapid sequence induction (RSI) is a technique used to facilitate endotracheal intubation in patients at high risk of aspiration and in those who require rapid securing of the airway. In Australia, RSI protocols in emergency departments usually dictate a predetermined dose of an induction agent and a neuromuscular blocker given in rapid succession. Suxamethonium, also known as succinylcholine, is a depolarising neuromuscular blocker (NMB) commonly used in RSI. Although it has a long history of use and is known for producing good intubating conditions in minimal time, suxamethonium possesses certain serious side effects and contraindications (a full discussion of which is beyond the scope of this article).

If there existed no alternative NMB, then the contraindications associated with suxamethonium would be irrelevant – yet there exists a suitable alternative. Rocuronium, a non-depolarising NMB introduced into Australia in 1996, has no known serious side effects or contraindications (excluding anaphylaxis). Unfortunately, many myths surrounding the properties of rocuronium have propagated through the anaesthesia and emergency medicine communities, and have resulted in some clinicians remaining hesitant to embrace this drug as a suitable alternative to suxamethonium for RSI. This essay aims to dispel a number of these myths through presenting the evidence currently available and thus allowing physicians to make informed clinical decisions that have the potential to significantly alter patient outcomes. It is not intended to provide a clear answer to the choice of NMB in RSI, but rather to encourage further debate and discussion on this controversial topic under the guidance of evidence-based medicine.

One of the more noteworthy differences between these two pharmacological agents is their duration of action. The paralysis induced by suxamethonium lasts for five to ten minutes, while rocuronium has a duration of action of 30-90 minutes, depending on the dose used. The significantly shorter duration of action of suxamethonium is often quoted by clinicians as being of great significance in their decision to utilise this drug. In fact, some clinicians are of the opinion that by using suxamethonium, they insert a certain ‘safety margin’ into the RSI protocol under the belief that the NMB will ‘wear off’ in time for the patient to begin spontaneously breathing again in the case of a failed intubation. Benumof et al. (1997) [1] explored this concept by methodically analysing the extent of haemoglobin desaturation (SpO2) following administration of suxamethonium 1.0mg/kg in patients with a non-patent airway. This study found that critical haemoglobin desaturation will occur prior to functional recovery (that is, return of spontaneous breathing).

In 2001, Heier et al. [2] conducted a study involving twelve healthy volunteers aged 18 to 45 years, all pre-oxygenated to an end-tidal oxygen concentration >90% (after breathing a FiO2 of 1.0 for three minutes). Following the administration of thiopental and suxamethonium 1.0mg/kg, no assisted ventilation was provided and oxygen saturation levels were closely monitored. One third of the patients desaturated to SpO2 <80% (at which point they received assisted ventilation during the trial). As the authors clearly stated, the study participants were all young, healthy and slim individuals who received optimal pre-oxygenation, yet a significant proportion still suffered critical haemoglobin desaturation before spontaneous ventilation resumed. In a real-life scenario, particularly in the patient population who require RSI, an even higher proportion of patients would be expected to display significant desaturation due to their failing health and the limited time available for pre-oxygenation. Although one may be inclined to argue that the results would be altered by reducing the dose of suxamethonium, Naguib et al. [3] affirmed that, while reducing the dose from 1.0mg/kg to 0.6mg/kg did slightly reduce the incidence of SpO2 <90% (from 85% to 65%), it did not shorten the time to spontaneous diaphragmatic movements. Therefore, the notion that the short duration of action of suxamethonium improves safety in RSI is not supported, and it should not be trusted as a reliable means of rescuing a “cannot intubate, cannot ventilate” situation.
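To make the Naguib et al. dose-reduction figures concrete, the sketch below works through the absolute and relative reduction in the incidence of SpO2 <90% when the suxamethonium dose falls from 1.0mg/kg to 0.6mg/kg, using only the percentages quoted above.

```python
# Risk-reduction arithmetic for the desaturation incidences quoted above:
# 85% at suxamethonium 1.0mg/kg versus 65% at 0.6mg/kg. Illustrative only.
risk_1_0_mg_kg = 0.85  # incidence of SpO2 < 90% at 1.0mg/kg
risk_0_6_mg_kg = 0.65  # incidence of SpO2 < 90% at 0.6mg/kg

absolute_reduction = risk_1_0_mg_kg - risk_0_6_mg_kg
relative_reduction = absolute_reduction / risk_1_0_mg_kg

print(f"Absolute reduction: {absolute_reduction:.0%} (percentage points)")  # -> 20
print(f"Relative reduction: {relative_reduction:.0%}")                      # -> ~24%
```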

Having demonstrated that differences in duration of action should not lull one into a false belief of improved safety in RSI, let us compare the effect of the two drugs on oxygen saturation levels if apnoea were to occur following their administration. As suxamethonium is a depolarising agent, it causes muscle fasciculations following administration, whereas rocuronium, a non-depolarising agent, does not. It has long been questioned whether the fasciculations associated with suxamethonium alter the time to onset of haemoglobin desaturation if the airway cannot be secured in a timely fashion and prolonged apnoea ensues.

This concept was explored by Taha et al., [4] who divided study participants into three groups: lidocaine/fentanyl/rocuronium, lidocaine/fentanyl/suxamethonium and propofol/suxamethonium. Upon measuring the time to onset of haemoglobin desaturation (deemed to be SpO2 <95%), it was discovered that both groups receiving suxamethonium desaturated significantly faster than the group receiving rocuronium. Comparing the two groups receiving suxamethonium reveals a considerable difference in results, with the lidocaine/fentanyl group having a longer onset to desaturation than the propofol group. Since lidocaine and fentanyl are recognised to decrease (but not completely attenuate) the intensity of suxamethonium-induced fasciculations, these results suggest that the fasciculations associated with suxamethonium do result in a quicker onset of desaturation compared to rocuronium.

Another recent study by Tang et al. [5] provides further clarification on this topic. Overweight patients with a BMI of 25-30 who were undergoing elective surgery requiring RSI were enrolled in the study. Patients were given either 1.5mg/kg suxamethonium or 0.9mg/kg rocuronium, and no assisted ventilation was provided following induction until SpO2 <92% (the interval designated as the ‘Safe Apnoea Time’). The time taken for this to occur was measured in conjunction with the time required to return the patient to SpO2 >97% following introduction of assisted ventilation with FiO2 of 1.0. The authors concluded that suxamethonium not only shortened the ‘Safe Apnoea Time’ but also prolonged the recovery time to SpO2 >97% compared to rocuronium. In summary, current evidence suggests that the use of suxamethonium results in a faster onset of haemoglobin desaturation than rocuronium, most likely due to the increased oxygen requirements associated with muscle fasciculations.

Since RSI is typically used in situations where the patient is at high risk of aspiration, the underlying goal is to secure the airway in the minimum time possible. Thus, the time required for the NMB to provide adequate intubating conditions is of great importance, with a shorter time translating into better patient outcomes, assuming all other factors are equal. Suxamethonium has long been regarded as the ‘gold standard’ in this regard, yet recent evidence suggests that the poor reputation of rocuronium with respect to onset time is primarily due to inadequate dosing. Recommended doses for suxamethonium are consistently stated as 1.0-1.5mg/kg, [6] whereas rocuronium doses have often been quoted as 0.6mg/kg which, as will be established below, is inadequate for use in RSI.

A prospective, randomised trial published by Sluga et al. [7] in 2005 concluded that, upon comparing intubating conditions following administration of either 1.0mg/kg suxamethonium or 0.6mg/kg rocuronium, there was a significant improvement in conditions with suxamethonium at 60 seconds post-administration. Another study [8] examined the frequency of good and excellent intubating conditions with rocuronium (0.6mg/kg and 1.0mg/kg) or suxamethonium (1.0mg/kg). Upon comparison of the groups receiving rocuronium, the 1.0mg/kg group had a consistently greater frequency of both good and excellent intubating conditions at 50 seconds. While the rocuronium 1.0mg/kg and suxamethonium 1.0mg/kg groups had a similar frequency of acceptable intubating conditions, there was a higher incidence of excellent conditions in the suxamethonium group. A subsequent study [9] confirmed this finding, with the intubating physician reporting a higher degree of overall satisfaction with the paralysis provided by suxamethonium 1.7mg/kg when compared to rocuronium 1.0mg/kg. In other words, it appears that the higher dose of 1.0mg/kg of rocuronium produces better intubating conditions than 0.6mg/kg, yet it does not do so to the same extent as suxamethonium.

If no evidence were available comparing an even higher dose of rocuronium, the argument for utilising suxamethonium in RSI would definitely be strengthened by the articles presented above. However, a retrospective evaluation of RSI and intubation from an emergency department in Arizona, United States provides further compelling evidence. [10] The median doses used were suxamethonium 1.65mg/kg (n=113) and rocuronium 1.19mg/kg (n=214), and the study authors state there was “no difference in success rate for first intubation attempt or number of attempts regardless of the type of paralytic used or the dose administered.” To add further weight to this issue, a 2008 Cochrane Review titled “Rocuronium versus succinylcholine for rapid sequence induction intubation” combined 37 studies for analysis and concluded that “no statistical difference in intubating conditions was found when [suxamethonium] was compared to 1.2mg/kg rocuronium.” [11] Hence, there exists sufficient evidence that, with adequate dosing, rocuronium (1.2mg/kg) is comparable to suxamethonium in time to onset of intubating conditions, and thus this argument cannot be used to aid in selecting an appropriate neuromuscular blocker for RSI.

In recent times, particularly here in Australia, questions have been posed regarding a supposedly increased risk of anaphylaxis to rocuronium. Rose et al. [12] from Royal North Shore Hospital in Sydney addressed this query in a paper in 2001. They found that the incidence of anaphylaxis to any NMB will be determined by its market share. Since the market share (that is, number of uses) of rocuronium is increasing, the cases of anaphylaxis are also increasing – but importantly, they are only increasing “in proportion to usage.” Of note, the authors state that rocuronium should still be considered a drug of “intermediate risk” of anaphylaxis, compared to suxamethonium, which is “high risk”. Although not addressed in this paper, there are additional factors that have the potential to alter the incidence of anaphylaxis, such as geographical variation that may be related to the availability of pholcodine in cough syrup. [13]

Before the focus of this paper shifts to a novel agent that has the potential to significantly alter the choice between suxamethonium and rocuronium in RSI, there remains a pertinent issue to be discussed. One of the oft-quoted key properties of suxamethonium is its brief duration of action of only five to ten minutes, with the Cochrane Review itself stating that “succinylcholine was clinically superior as it has a shorter duration of action,” despite finding no statistical difference otherwise. [11]

The question that needs to be posed is whether this is truly an advantage for a NMB used in RSI. Patients who require emergency intubation often have a dire need for a secure airway to be established – simply allowing the NMB to ‘wear off’ and the patient to begin spontaneously breathing again does nothing to alter their situation. One must consider that, even if the clinician were aware of the evidence against relying on suxamethonium’s short duration of action to rescue them from a failed intubation scenario, the decision to initiate further measures (that is, progress to a surgical airway) would be delayed in such a scenario. If rocuronium, with its longer duration of action, were used, would clinicians then feel more compelled to ‘act’ rather than ‘wait’ in this rare scenario, knowing that the patient would remain paralysed? If rescue techniques such as a surgical airway were instigated, would the awakening of the patient (due to suxamethonium terminating its effect) be a hindrance? Although the use of rocuronium presents the risk of a patient requiring prolonged measures to maintain oxygenation and ventilation in a “cannot intubate, can ventilate” scenario, paralysis would be reliably maintained if a surgical airway were required.

No discussion of the debate between suxamethonium and rocuronium would be complete without mentioning a new drug that appears to hold great potential in this arena – sugammadex. A γ-cyclodextrin specifically designed to encapsulate rocuronium and thus cause its dissociation from the acetylcholine receptor, it acts to reverse the effects of rocuronium-induced neuromuscular blockade. In addition to its action on rocuronium, sugammadex also appears to have some crossover effect on vecuronium, another steroidal non-depolarising NMB. While acetylcholinesterase inhibitors are often used to reverse NMBs, they act non-specifically on both muscarinic and nicotinic synapses and cause many unwanted side effects. If they are given before there is partial recovery (>10% twitch activity) from neuromuscular blockade, they do not shorten the time to 90% recovery and are thus ineffective against profound block.

Sugammadex was first administered to human volunteers in 2005 with minimal side effects. [14] It displayed great potential in achieving recovery from rocuronium-induced paralysis within a few minutes. Further trials were conducted, including one by de Boer et al. [15] in the Netherlands. Neuromuscular blockade was induced with rocuronium 1.2mg/kg, and doses of sugammadex ranging from 2.0 to 16.0mg/kg were given. With recovery of the train-of-four ratio to 0.9 designated as the primary outcome, the authors found that successive increases in the dose of sugammadex decreased the time required to reverse profound blockade at five minutes following administration of rocuronium, with sugammadex 16mg/kg giving a mean recovery time of only 1.9 minutes compared to the placebo recovery time of 122.1 minutes. In a review article, Mirakhur [16] further supported the use of high-dose sugammadex (16mg/kg) in situations requiring rapid reversal of neuromuscular blockade.
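As a rough sense of scale, the de Boer et al. figures quoted above imply a roughly 64-fold faster recovery with high-dose sugammadex than with spontaneous (placebo) recovery; a minimal check:

```python
# Recovery-time comparison using the de Boer et al. figures quoted above
# (recovery of train-of-four ratio to 0.9 after rocuronium 1.2mg/kg).
placebo_recovery_min = 122.1      # mean spontaneous recovery, minutes
sugammadex_16_recovery_min = 1.9  # mean recovery with sugammadex 16mg/kg

speedup = placebo_recovery_min / sugammadex_16_recovery_min
print(f"Sugammadex 16mg/kg recovery is ~{speedup:.0f}x faster than placebo")  # ~64x
```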

With an effective reversal agent for rocuronium presenting a possible alternative to suxamethonium in rapid sequence induction, Lee et al. [17] closely examined the differences in time to termination of effect. They studied 110 patients randomised to either rocuronium 1.2mg/kg or suxamethonium 1mg/kg. Three minutes after administration of rocuronium, 16mg/kg sugammadex was given. The results confirmed the potential of sugammadex and its possible future role in RSI, as the group given rocuronium and sugammadex (at three minutes) recovered significantly faster than those given suxamethonium (mean recovery time to 10% first-twitch height of 4.4 versus 7.1 minutes respectively). The evidence therefore suggests that administering sugammadex 16mg/kg three minutes after rocuronium 1.2mg/kg results in a shorter time to reversal of neuromuscular blockade than spontaneous recovery from suxamethonium. While sugammadex has certainly shown great potential, it remains an expensive drug, and there remain uncertainties regarding repeat dosing with rocuronium following reversal with sugammadex, [18] as well as the need to suitably educate and train staff in its appropriate use, as demonstrated by Bisschops et al. [19] It is also important to note that for sugammadex to be of use in situations where reversal of neuromuscular blockade is required, the full reversal dose (16mg/kg) must be readily available. Nonetheless, it appears that sugammadex may revolutionise the use of rocuronium not only in RSI, but also in other forms of anaesthesia in the near future.

As clinicians, we should strive to achieve the best patient outcomes possible. Without remaining abreast of the current literature, our exposure to new therapies will be limited and, ultimately, patients will not always be provided with the high level of medical care they desire and deserve. I urge all clinicians who are tasked with the difficult responsibility of establishing an emergency airway with RSI to consider rocuronium as a viable alternative to suxamethonium and to strive to understand the pros and cons associated with both agents, in order to ensure that an appropriate choice is made on the basis of solid evidence-based medicine.

Conflicts of interest

None declared.

Correspondence

S Davies: sjdav8@student.monash.edu

References

[1] Benumof JL, Dagg R, Benumof R. Critical haemoglobin desaturation will occur before return to an unparalysed state following 1 mg/kg intravenous succinylcholine. Anesthesiology. 1997; 87:979-82.

[2] Heier T, Feiner JR, Lin J, Brown R, Caldwell JE. Hemoglobin desaturation after succinylcholine-induced apnea. Anesthesiology. 2001; 94:754-9.

[3] Naguib M, Samarkandi AH, Abdullah K, Riad W, Alharby SW. Succinylcholine dosage and apnea-induced haemoglobin desaturation in patients. Anesthesiology. 2005; 102(1):35-40.

[4] Taha SK, El-Khatib MF, Baraka AS, Haidar YA, Abdallah FW, Zbeidy RA, Siddik-Sayyid SM. Effect of suxamethonium vs rocuronium on onset of oxygen saturation during apnoea following rapid sequence induction. Anaesthesia. 2010; 65:358-61.

[5] Tang L, Li S, Huang S, Ma H, Wang Z. Desaturation following rapid sequence induction using succinylcholine vs. rocuronium in overweight patients. Acta Anaesthesiol Scand. 2011; 55:203-8.

[6] El-Orbany M, Connolly LA. Rapid sequence induction and intubation: current controversy. Anesth Analg. 2010; 110(5):1318-24.

[7] Sluga M, Ummenhofer W, Studer W, Siegemund M, Marsch SC. Rocuronium versus succinylcholine for rapid sequence induction of anesthesia and endotracheal intubation: a prospective, randomized trial in emergent cases. Anesth Analg. 2005; 101:1356-61.

[8] McCourt KC, Salmela L, Mirakhur RK, Carroll M, Mäkinen MT, Kansanaho M, Kerr C, Roest GJ, Olkkola KT. Comparison of rocuronium and suxamethonium for use during rapid sequence induction of anaesthesia. Anaesthesia. 1998; 53:867-71.

[9] Laurin EG, Sakles JC, Panacek EA, Rantapaa AA, Redd J. A comparison of succinylcholine and rocuronium for rapid-sequence intubation of emergency department patients. Acad Emerg Med. 2000; 7:1362-9.

[10] Patanwala AE, Stahle SA, Sakles JC, Erstad BL. Comparison of succinylcholine and rocuronium for first-attempt intubation success in the emergency department. Acad Emerg Med. 2011; 18:11-14.

[11] Perry JJ, Lee JS, Sillberg VAH, Wells GA. Rocuronium versus succinylcholine for rapid sequence induction intubation. Cochrane Database Syst Rev. 2008:CD002788.

[12] Rose M, Fisher M. Rocuronium: high risk for anaphylaxis? Br J Anaesth. 2001; 86(5):678-82.

[13] Florvaag E, Johansson SGO. The pholcodine story. Immunol Allergy Clin North Am. 2009; 29:419-27.

[14] Gijsenbergh F, Ramael S, Houwing N, van Iersel T. First human exposure of Org 25969, a novel agent to reverse the action of rocuronium bromide. Anesthesiology. 2005; 103:695-703.

[15] De Boer HD, Driessen JJ, Marcus MA, Kerkkamp H, Heeringa M, Klimek M. Reversal of rocuronium-induced (1.2 mg/kg) profound neuromuscular block by sugammadex. Anesthesiology. 2007; 107:239-44.

[16] Mirakhur RK. Sugammadex in clinical practice. Anaesthesia. 2009; 64:45-54.

[17] Lee C, Jahr JS, Candiotti KA, Warriner B, Zornow MH, Naguib M. Reversal of profound neuromuscular block by sugammadex administered three minutes after rocuronium. Anesthesiology. 2009; 110:1020-5.

[18] Cammu G, de Kam PJ, De Graeve K, van den Heuvel M, Suy K, Morias K, Foubert L, Grobara P, Peeters P. Repeat dosing of rocuronium 1.2 mg/kg after reversal of neuromuscular block by sugammadex 4.0 mg/kg in anaesthetized healthy volunteers: a modelling-based pilot study. Br J Anaesth. 2010; 105(4):487-92.

[19] Bisschops MM, Holleman C, Huitink JM. Can sugammadex save a patient in a simulated ‘cannot intubate, cannot ventilate’ situation? Anaesthesia. 2010; 65:936-41.

Categories
Review Articles Articles

Control of seasonal influenza in healthcare settings: Mandatory annual influenza vaccination of healthcare workers

Introduction: The aim of this review is to emphasise the burden and transmission of nosocomial seasonal influenza, discuss the influenza vaccine and the need for annual influenza vaccination of all healthcare workers, discuss common attitudes and misconceptions regarding the influenza vaccine among healthcare workers and means to overcome these issues, and highlight the need for mandatory annual influenza vaccination of healthcare workers. Methods: A literature review was carried out; Medline, PubMed and The Cochrane Collaboration were searched for primary studies, reviews and opinion pieces pertaining to influenza transmission, the influenza vaccine, and common attitudes and misconceptions. Key words used included “influenza”, “vaccine”, “mandatory”, “healthcare worker”, “transmission” and “prevention”. Results: Seasonal influenza is a serious disease that is associated with considerable morbidity and mortality and imposes an enormous economic burden on society. Healthcare workers may act as vectors for nosocomial transmission of seasonal influenza. This risk to patients can be reduced by safe, effective annual influenza vaccination of healthcare workers, which has been specifically shown to significantly reduce morbidity and mortality. However, traditional strategies to improve uptake consistently fail, with only 35 to 40% of healthcare workers vaccinated annually. Mandatory influenza vaccination programs with medical and religious exemptions have successfully increased annual influenza vaccination rates of healthcare workers to >98%. Exemption requests often reflect misconceptions about the vaccine and influenza, underscoring the importance of continuous education programs and the need for a better understanding of the reasons for compliance with influenza vaccination. Conclusion: Mandatory annual influenza vaccination of healthcare workers is ethically justified and, if implemented appropriately, will be acceptable. Traditional strategies to improve uptake are minimally effective, expensive and inadequate to protect patient safety. Therefore, low voluntary influenza vaccination rates of healthcare workers leave only one option to protect the public: mandatory annual influenza vaccination of healthcare workers.

Introduction

Each year, between 1,500 and 3,500 Australians die from seasonal influenza and its complications. [1] The World Health Organization (WHO) estimates that seasonal influenza affects five to fifteen percent of the population worldwide annually, causing three to five million cases of serious illness and 250,000-500,000 deaths. [2] In Australia, seasonal influenza is estimated to cause 18,000 hospitalisations and over 300,000 general practitioner (GP) consultations every year. [3] Nosocomial seasonal influenza is associated with considerable morbidity and mortality among the elderly, neonates, the immuno-compromised and patients with chronic diseases. [4] The most effective way to reduce or prevent nosocomial transmission of seasonal influenza is annual influenza vaccination of all healthcare workers. [5,6] The Centers for Disease Control and Prevention (CDC) has recommended annual influenza vaccination of all healthcare workers since 1981, and the provision and administration of the vaccine to healthcare workers at the work site, free of charge, since 1993. [7] Despite this, only 35% to 40% of healthcare workers are vaccinated annually. [8]

 

Transmission of seasonal influenza

The influenza virus attaches to and invades the epithelial cells of the upper respiratory tract. [8] Viral replication in these cells leads to the release of pro-inflammatory cytokines and necrosis of the epithelium. [8] Influenza is primarily transmitted from person to person by droplets generated when an infected person breathes, coughs, sneezes or speaks. [8] These droplets settle on the mucosal surfaces of the upper respiratory tract of susceptible persons; thus, transmission of influenza primarily occurs in those who are near the infected person. [8]

The influenza vaccine

The influenza vaccines currently available in Australia are inactivated, split virion or subunit vaccines, produced using viral strains propagated in fertilised hens’ eggs. [9] The inactivated virus is incapable of replicating inside the human body, and thus incapable of causing infection. [10] Influenza vaccines are trivalent, i.e. they protect against three different strains of influenza. [9] As influenza viruses are continually subject to antigenic change, annual adaptation of the influenza vaccine is needed to ensure it provides protection against the virus strains likely to be circulating during the influenza season. [9] The composition of the influenza vaccine in 2011 covered the pandemic H1N1 2009 (swine flu), H3N2 and B strains of influenza. [11] Influenza vaccines are included in the Australian National Immunisation Program only after evaluation of their quality, safety, effectiveness and cost-effectiveness for their intended use in the Australian population. [9] The only common adverse effect of the influenza vaccine is minor injection-site soreness for one to two days. [10] Influenza vaccine effectiveness depends on the age and immune status of the individual being vaccinated, and on the match between the strains included in the vaccine and those circulating in the community. [12] The influenza vaccine is 70 to 90% effective in preventing influenza infection in healthy individuals under 65 years of age; the majority of healthcare workers fall into this category. [12] Influenza vaccination has been shown to be 88% effective in preventing laboratory-confirmed influenza in healthcare workers. [13]

The need for annual influenza vaccination

Transmission of influenza has been reported in a variety of healthcare settings, and healthcare workers are often implicated in outbreaks. [13] Healthcare workers are at increased risk of acquiring seasonal influenza because of exposure to the virus in both healthcare and community settings. [13] However, simply staying home from work during symptomatic illness is not an effective strategy to prevent nosocomial transmission of seasonal influenza. [10] The incubation period ranges from one to four days; the contagious period begins before symptoms appear, and the virus may be shed for at least one day prior to symptomatic illness. [4,10] Less than 50% of infected people show classic signs of influenza; asymptomatic healthcare workers may fail to recognise that they are infected, yet can shed the virus for five to ten days. [13,14] Symptomatic healthcare workers also often continue to work despite the presence of symptoms of influenza. [10,15] In one study, 23% of serum samples from healthcare workers contained specific antibody suggesting seasonal influenza infection during a single season; however, 59% of those infected could not recall an influenza-like illness and 28% were asymptomatic. [13] The direct implication is that healthcare workers themselves may act as vectors for nosocomial transmission of seasonal influenza to patients who are at increased risk of morbidity and mortality from the disease. [10] Many of these patients do not mount an appropriate immune response to influenza vaccination, making vaccination of healthcare workers especially important. [16] Only 50% of residents in long-term care settings develop protective vaccination-induced antibody titres. [17] Influenza vaccination of healthcare workers may reduce the risk of seasonal influenza outbreaks in all types of healthcare settings and has been specifically shown to significantly reduce morbidity and mortality. [12] A randomised controlled trial evaluating the effect of annual influenza vaccination of healthcare workers found that it was associated with a 43% reduction in influenza-like illness and a 44% reduction in mortality among geriatric patients in long-term care settings. [12] Furthermore, an algorithm evaluating the effect of annual influenza vaccination of healthcare workers on patient outcomes predicted that if all healthcare workers were vaccinated annually, approximately 60% of patient influenza infections could be prevented. [18]
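The cited ~60% prediction can be made intuitive with a deliberately simplified back-of-envelope model. This is not the published algorithm [18]; the attributable-fraction input below is a hypothetical illustration, while the effectiveness value sits within the 70-90% range quoted earlier.

```python
# Deliberately simplified back-of-envelope model (NOT the published
# algorithm [18]): prevented share of patient infections is approximated as
# (fraction of patient infections seeded by HCWs) x coverage x effectiveness.
hcw_attributable_fraction = 0.75  # hypothetical assumption, not source data
coverage = 1.0                    # scenario: all healthcare workers vaccinated
effectiveness = 0.80              # within the 70-90% range quoted above

prevented_share = hcw_attributable_fraction * coverage * effectiveness
print(f"Prevented share of patient infections: ~{prevented_share:.0%}")  # -> ~60%
```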

Although a number of factors contribute to the overall burden of seasonal influenza, the economic burden to society results primarily from the loss of working time and productivity associated with influenza-related work absence, and from the increased use of medical resources required to treat patients with influenza and its complications. [19] Typically, the indirect costs associated with lost working time and productivity account for the greater proportion (>80%) of the economic burden of seasonal influenza. [19] One study reported that healthcare workers who received the influenza vaccine had 25% fewer episodes of respiratory illness, 43% fewer days of sickness absenteeism due to respiratory illness, and 44% fewer visits to physicians’ offices for upper respiratory illness than those who received placebo. In a review of studies that confirmed seasonal influenza infection using laboratory evidence, the mean reported sickness absenteeism per episode of seasonal influenza ranged from 2.8 to 4.9 days for adults. [19] Furthermore, a retrospective cohort study investigating the association between influenza vaccination of emergency department healthcare workers and sickness absenteeism found that a significantly larger proportion of the non-vaccinated group took sick leave because of influenza-like illness (55% vs. 30.3%). [20]

Attitudes and misconceptions

Self-protection, rather than protection of patients, is often the dominant motivation for influenza vaccination. Many healthcare workers report they would be more willing to be vaccinated against pandemic influenza, which is perceived to be more dangerous than seasonal influenza. [15] One study found that the most common reason (cited by 100% of those surveyed) for receiving the influenza vaccine among healthcare workers was self-protection against influenza. [21] Seventy percent of healthcare workers were also motivated by concern for their colleagues, patients and community in preventing cross-infection. [21] Common reasons for not receiving the influenza vaccine included “trust in, or the wish to challenge natural immunity”, “physician’s advice against the vaccine for medical reasons”, “severe localised effects from the vaccine” and “not believing the vaccine to have any benefit”. [21] A multivariate analysis in a separate study revealed that “older age”, “believing that most colleagues had been vaccinated” and “having cared for patients suffering from severe influenza” were significantly associated with compliance with influenza vaccination, with the main motivation being “individual protection”. [22] Lack of information about the vaccine’s effectiveness, recommended use, adverse effects and composition again reflects the importance of continuous education programs and the need for a better understanding of the reasons for compliance with influenza vaccination. [22]

Major issues

Analysis of interviews with healthcare workers indicated that successfully adding mandatory annual influenza vaccination to the current policy directive would require four major issues to be addressed: providing and communicating a solid evidence base supporting the policy directive; addressing the concerns of staff about the influenza vaccine; ensuring staff understand the need to protect patients; and addressing the logistical challenges of enforcing an annual vaccination campaign. [23] A systematic review of influenza vaccination campaigns for healthcare workers revealed that a combination of education or promotion and improved access to the influenza vaccine yielded greater increases in coverage among healthcare workers. [24] Campaigns involving legislative or regulatory components such as mandatory declination forms achieved higher rates than other interventions. [24]

Influenza vaccination is currently viewed as a public health initiative focused on the personal choice of employees. [12] However, a shift in the focus of vaccination strategy is appropriate – seasonal influenza vaccination of healthcare workers is a patient health and safety initiative. [12] In 2007, the CDC’s Advisory Committee on Immunization Practices added a recommendation that healthcare settings implement policies to encourage influenza vaccination of healthcare workers, with informed declination. [25] A switch from influenza vaccination of healthcare workers on a voluntary basis to a mandatory policy should be considered by all public health bodies. [4]

Mandatory annual influenza vaccination

Fifteen states in the USA now have laws requiring annual influenza vaccination of healthcare workers, although these permit informed declination, and at least five states require it of all healthcare workers. Many individual medical centres have instituted policies requiring influenza vaccination, with excellent results. [26]

A year-long study of approximately 26,000 employees at BJC HealthCare found that a mandatory influenza vaccination program successfully increased vaccination rates to >98%. [27] Influenza vaccination was made a condition of employment for all healthcare workers, with those who remained unvaccinated and non-exempt having their employment terminated after one year. [27] Medical or religious exemptions could be sought; accepted medical grounds included hypersensitivity to eggs, prior hypersensitivity reaction to the influenza vaccine, and a history of Guillain-Barré syndrome. [27] Exemption requests often reflected misconceptions about the vaccine and influenza. [27] Several requests cited chemotherapy or immuno-compromise as a reason not to get the influenza vaccine, even though these groups are at high risk of complications from influenza and are specifically recommended to be vaccinated. [27] Several requests cited pregnancy, although the influenza vaccine is recommended during pregnancy. [27]

Similarly, a five-year study of mandatory influenza vaccination of approximately 5,000 healthcare workers at Virginia Mason Medical Centre sustained influenza vaccination rates of more than 98% during 2005-2010. [28] Less than 0.7% of healthcare workers were granted exemptions for medical or religious reasons and were required to wear a mask at work during influenza season, and less than 0.2% of healthcare workers refused vaccination and left the centre. [28]

Conclusion

Mandatory annual influenza vaccination of healthcare workers raises complex professional and ethical issues. However, the arguments in favour are clear.

1. Seasonal influenza is a serious and potentially fatal disease, associated with considerable morbidity and mortality among the elderly, neonates, the immuno-compromised and patients with chronic diseases. [4]
2. The influenza vaccine has been evaluated for safety, quality, effectiveness and cost-effectiveness for its intended use in the Australian population. [9]
3. Healthcare workers themselves may act as vectors for nosocomial transmission of seasonal influenza, and this risk to patients can be reduced by safe, effective annual influenza vaccination of healthcare workers. [10]
4. The contagious period of seasonal influenza begins before symptoms appear, and the virus may be shed for at least one day prior to symptomatic illness. [14]
5. Influenza vaccination of healthcare workers may reduce the risk of seasonal influenza outbreaks in all types of healthcare settings and has been specifically shown to significantly reduce morbidity and mortality. [12]
6. Seasonal influenza imposes an enormous economic burden on society through the loss of working time and productivity associated with influenza-related work absence and the increased use of medical resources required to treat patients with influenza and its complications. [19]
7. Traditional strategies to improve uptake by healthcare workers consistently fail, with only 35% to 40% of healthcare workers vaccinated annually. [8]
8. Mandatory influenza vaccination programs with medical and religious exemptions have successfully increased annual influenza vaccination rates of healthcare workers to >98%. [27,28]
9. Exemption requests often reflect misconceptions about the vaccine and influenza, underscoring the importance of continuous education programs and the need for a better understanding of the reasons for compliance with influenza vaccination. [22,27]

These facts suggest that mandatory annual influenza vaccination of healthcare workers is ethically justified and, if implemented appropriately, will be acceptable. [15] For this to occur, a mandatory program needs leadership by senior clinicians and administrators; consultation with healthcare workers and professional organisations; appropriate education; free, easily accessible influenza vaccine; and adequate resources to deliver the program efficiently. It further requires provision for exemptions on medical and religious grounds, and appropriate sanctions for those who refuse annual influenza vaccination, for example, a requirement to wear a mask during influenza season, or termination of employment. [15]

Healthcare workers accept a range of moral and other professional responsibilities, including a duty to protect patients in their care from unnecessary harm, to do good, to respect patient autonomy, and to treat all patients fairly. They also accept reasonable, but not unnecessary, occupational risks such as exposure to infectious diseases. [15] Vaccination is often seen as something that people have a right to accept or refuse. However, freedom to choose also depends on the extent to which that choice affects others. [15] In the healthcare setting, the autonomy of healthcare workers must be balanced against patients’ rights to protection from avoidable harm, and the moral obligation of healthcare workers not to put others at risk. [15] Mandatory annual influenza vaccination of healthcare workers is consistent with the public’s right to expect that healthcare workers will take all necessary and reasonable precautions to keep them safe and minimise harm. [15] Traditional strategies to improve uptake by healthcare workers are minimally effective, expensive, and inadequate to protect patient safety. Therefore, low voluntary influenza vaccination rates of healthcare workers leave only one option to protect the public: mandatory annual influenza vaccination of healthcare workers.

Conflicts of interest

None declared.

Correspondence

K Franks: kathryn.franks@my.jcu.edu.au

References

[1] Australian Bureau of Statistics. 3303.0 – Causes of death, Australia. 2007.

[2] World Health Organization. Fact sheet no. 211. Revised April 2009.

[3] Williams U, Finch G. Influenza specialist group – influenza fact sheet. Revised March 2011.

[4] Maltezou H. Nosocomial influenza: new concepts and practice. Curr Opin Infect Dis. 2008;21:337-43.

[5] Weber D, Rutala W, Schaffner W. Lessons learned: protection of healthcare workers from infectious disease risks. Crit Care Med. 2010;38(8):306-14.

[6] Ling D, Menzies D. Occupation-related respiratory infections revisited. Infect Dis Clin North Am. 2010;24:655-80.

[7] Centers for Disease Control and Prevention. Influenza vaccination of healthcare personnel: recommendations of the Healthcare Infection Control Practices Advisory Committee and the Advisory Committee on Immunization Practices. MMWR Morb Mortal Wkly Rep. 2006;55:1-41.

[8] Beigel J. Influenza. Crit Care Med. 2008;36(9):2660-6.

[9] Horvath J. Review of the management of adverse effects associated with Panvax and Fluvax: final report. Canberra: Department of Health and Ageing; 2011. p.1-58.

[10] McLennan S, Gillert G, Celi L. Healer, heal thyself: health care workers and the influenza vaccination. AJIC. 2008;36(1):1-4.

[11] Bishop J. Seasonal influenza vaccination 2011. Canberra: Department of Health and Ageing; 2011.

[12] Schaffner W, Cox N, Lundstrom T, Nichol K, Novick L, Siegel J. Improving influenza vaccination rates in health care workers: strategies to increase protection for workers and patients. In: NFID, editors. 2004. p.1-19.

[13] Goins W, Talbot H, Talbot T. Health care-acquired viral respiratory diseases. Infect Dis Clin North Am. 2011;25(1):227-44.

[14] Maroyka E, Andrawis M. Health care workers and influenza vaccination. AJHP. 2010;67(1):25.

[15] Gilbert GL, Kerridge I, Cheung P. Mandatory influenza immunisation of health-care workers. Lancet Infect Dis. 2010;10:3-4.

[16] Carlson A, Budd A, Perl T. Control of influenza in healthcare settings: early lessons from the 2009 pandemic. Curr Opin Infect Dis. 2010;23:293-9.

[17] Lee P. Prevention and control of influenza. Southern Medical Journal. 2003;96(8):751-7.

[18] Ottenburg A, Wu J, Poland G, Jacobson R, Koenig B, Tilburt J. Vaccinating health care workers against influenza: the ethical and legal rationale for a mandate. AJPH. 2011;101(2).

[19] Keech M, Beardsworth P. The impact of influenza on working days lost: a review of the literature. TPJ. 2008;26(1):911-24.

[20] Chan SS-W. Does vaccinating ED health care workers against influenza reduce sickness absenteeism? AJEM. 2007;25:808-11.

[21] Osman A. Reasons for and barriers to influenza vaccination among healthcare workers in an Australian emergency department. AJAN. 2010;27(3):38-43.

[22] Takayanagi I, Cardoso M, Costa S, Araya M, Machado C. Attitudes of health care workers to influenza vaccination: why are they not vaccinated? AJIC. 2007;35(1):56-61.

[23] Leask J, Helms C, Chow M, Robbins SC, McIntyre P. Making influenza vaccination mandatory for health care workers: the views of NSW Health administrators and clinical leaders. New South Wales Public Health Bulletin. 2010;21(10):243-7.

[24] Lam P-P, Chambers L, MacDougall DP, McCarthy A. Seasonal influenza vaccination campaigns for health care personnel: systematic review. CMAJ. 2010;182(12):542-8.

[25] Centers for Disease Control and Prevention. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep. 2007;56(RR-6):1-54.

[26] Tucker S, Poland G, Jacobson R. Requiring influenza vaccination for health care workers: the case for mandatory vaccination with informed declination. AJN. 2008;108(2):32-4.

[27] Babcock H, Gemeinhart N, Jones M, Dunagan WC, Woeltje K. Mandatory influenza vaccination of health care workers: translating policy to practice. CID. 2010;50:259-64.

[28] Rakita R, Hagar B, Crome P, Lammert J. Mandatory influenza vaccination of healthcare workers: a 5-year study. ICHE. 2010;31(9):881-8.

Categories
Original Research Articles Articles

Immunisation and informed decision-making amongst Islamic primary school parents and staff

Background: The Islamic community represents a recognisable and growing minority group in the broader Australian context. Some sectors of the international Muslim community have voiced concerns about the ritual cleanliness of vaccines, with subsequent lower levels of compliance. Anecdotal evidence suggests Australian Muslims may hold similar concerns. Aim: This study aims to evaluate the information and knowledge with which Islamic parents and staff are equipped to make decisions about immunisation. Methods: Parents and staff at an Islamic primary school were recruited through survey forms sent home for voluntary completion. These surveys were designed to assess the sources of information and level of confidence regarding immunisations, as well as highlighting participants’ personal perspectives and misapprehensions. All participants identified as Muslim parents. Results: 40.7% (n=64) of respondents were not confident that they knew enough about vaccines to make good decisions, while 73.3% (n=115) of respondents stated a personal desire for further education about vaccinations and vaccination schedules, suggesting a significant degree of uncertainty associated with the amount of information currently accessible to this cohort of the community. Qualitative responses reflected concerns about side effects and the halal status of vaccines. As these responses included a perceived information gap about material risks, they raise the possibility of invalid consent. Parents obtain information from a variety of sources, the most popular being their general practitioner. However, our data suggested that the public health nurses of the shire council facilitated better knowledge outcomes than general practitioners. Conclusion: By taking the time to communicate material risks to Muslim parents, health professionals can ensure confident, informed decision-making and consent.

Introduction

The Islamic community represents a recognisable and growing minority group in the broader Australian context. In light of the nature of their religious fidelity, Islamic patients will bring different attitudes and knowledge to the clinical setting, requiring sensitive and appropriate medical attention. [1] A working knowledge of the core tenets of Islam allows clinicians to provide culturally relevant information to facilitate informed consent and decision-making. For example, the prohibition in Islam against receiving pork and other unclean (“haram”) meat products, and the inclusion of derivatives of these in some surgical and pharmacological interventions, can be an important consideration to convey, and a potentially damaging omission to make, in a consultation. [2]

While there is a corpus of published information pertaining to Muslim cultural considerations in medical and especially nursing practice in Australia, we identified a gap in the literature in relation to attitudes and behaviours towards immunisation. Some isolated figures in the large religious grouping of Islam have voiced major concerns about haram or unclean content in vaccines: Dr Abdul Majid Katme [3] of the ‘Islamic Medical Association of Britain’ is reported as “urging British Muslims not to vaccinate their children against diseases such as measles, mumps and rubella because they contain substances making them unlawful for Muslims to take.”

Concerns have been raised in the broader medical fraternity about how statements such as this have influenced Islamic patients’ compliance with immunisation, with data demonstrating a decrease in immunisation rates in majority-Muslim countries such as Nigeria [4] and Pakistan [5], where leaders and clerics have made claims against the safety of vaccines. The level of non-compliance that resulted from these attitudes has set back efforts to eradicate polio worldwide. [6] In response, Warraich [7] called for further study into Muslim populations’ attitudes towards vaccination. In the Australian context, Zwar [8] mentions that there is “anecdotal evidence that Australian Muslims may share the concerns and fears about vaccination safety” held by their brethren overseas.

Having identified the need for more data from the Muslim Australian perspective on vaccines, we endeavoured to assess the information sources and knowledge of the members of one diverse Islamic community, a primary school. Focussing on the degree to which parents are capable and confident to make informed consensual decisions about their child’s immunisations, we endeavoured to determine the extent to which the data reflects trends of unease, and to provide some insight into what gives rise to such concerns.

Methods

This project received ethics approval from the Community Based Placement Program conveners, mandated by the Monash University Human Research Ethics Committee (MUHREC) to approve low impact research.

A mixed-methods design was employed. A survey, designed by the authors, was used to collect qualitative and quantitative data anonymously from participants, who were parents of students attending the Australian International Academy King Khalid Campus primary school, and members of staff who were themselves parents (irrespective of where their children attended school). The study was conducted as part of a community-based health promotion project; the school agreed to host the researchers and provide supervision on the condition that sensitive questions pertaining to demographics or religious sensitivities were not explicitly asked.

Participants were recruited through one of two methods: hand delivery of surveys to staff who had children of their own (and were therefore parents themselves), and distribution through the bi-monthly newsletter received by every family in the school community. Surveys were accompanied by consent documents and explanatory statements, and consent forms were received before data were included. A total of 300 surveys were distributed to potential participants.

Key measures of interest were the information sources and knowledge with which these parents were equipped to make decisions about vaccinations for their children and themselves. Thirteen survey questions were organised into three domains: "Obtaining Information", which asked where knowledge about vaccines was sourced; "Concerns", which assessed for misapprehensions and misinformation about vaccines; and "Vaccines", which invited respondents to indicate their confidence in the process, their level of understanding, and their desire for more education on vaccines. A single space at the end of the questions allowed respondents to write any comments or questions prompted by the survey. Respondents were also asked to include the year level of their eldest child, allowing comparison of data across a spectrum of child age as an indicator of the length of parental exposure to the immunisation process. No other demographic data were collected, at the request of the school.

Descriptive statistics were used to analyse responses, with bivariate analysis to assess correlations between sources of information, age of eldest child, and degree of confidence and knowledge about vaccinations. Narrative feedback was also analysed for themes.

Data were analysed using Microsoft® Excel 2003. Qualitative responses were examined for recurring themes and considered in conjunction with the statistical evidence when determining study results.
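
For illustration, the bivariate comparisons described above could be reproduced outside a spreadsheet along the following lines. This is a minimal sketch, not the authors' actual workbook: the file name and column names (eldest_child_year, main_source, confident) are hypothetical stand-ins for the survey fields.

```python
# A minimal sketch (not the authors' actual analysis) of the bivariate
# comparisons described above, assuming a hypothetical CSV with columns:
# eldest_child_year (int), main_source (str), confident (0/1).
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Proportion of confident respondents for each information source,
# mirroring the GP versus council-nurse comparison reported in the Results.
confidence_by_source = df.groupby("main_source")["confident"].mean()
print(confidence_by_source.sort_values(ascending=False))

# Correlation between the eldest child's year level (a proxy for parental
# exposure to the immunisation process) and confidence in decision-making.
print(df["eldest_child_year"].corr(df["confident"]))
```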

Results

The researchers received 157 validly completed survey forms of the 300 distributed to the parents and staff of the school, a response rate of 52.3%. No differentiation was made between parent and staff-member status within the school, as all participants were Muslim parents. In accordance with state legislation, all children of respondents were fully immunised at the time of enrolment. Fifteen respondents used the space provided in the survey to give qualitative feedback; their comments are interspersed below under the relevant domains.

Obtaining information

Participants indicated that knowledge and guidance regarding immunisation were gained through a multitude of sources; all participants reported having sought information about childhood vaccination.

Surveys illustrated that 80.9% (127/157) of participants had used more than one source to assist in the decision-making process, while only 19.1% (30/157) had relied upon a single source. Of the 30 who had based their perspectives on one source of information, 90.0% (27 of 30) had consulted a healthcare professional, either a general practitioner (GP) or a nurse, while the remaining 10.0% (3 of 30) had received input from the local council. Flyers (3.2%), friends (20.4%), the internet (22.9%) and the media (26.1%) were all used, in conjunction with other resources, to enhance vaccination knowledge. GPs were the most commonly accessed source, with 86.0% of all participants having sought education from them. "I go with what my local doctor tells me to do, which I assume was the best thing to follow," was the feedback received from one participant.

Concerns

One respondent commented: "I don't believe enough information is provided to families about each vaccination, what it does and the side effects." When asked about possible health concerns associated with vaccinations, half of all respondents (78 of 157) were not aware of the possible side effects of vaccinations. Moreover, 75.8% (119 of 157) of participants stated that they were concerned that vaccinations would have adverse outcomes on their children's health.

Vaccines

"I wish there was more information about it as we took it as it is a must and the government encourages it," remarked one respondent. Only 59.2% (93 of 157) of parents were sufficiently comfortable with their level of knowledge to make an informed decision about their child's immunisations, meaning that 40.8% (64 of 157) were not confident in their ability to make an informed decision for themselves or their family. Furthermore, 73.3% (115 of 157) stated a personal desire for more information about vaccinations and vaccination schedules.

When comparing knowledge confidence between those who received information from a GP and those who received it from shire council nurses, the latter group reported slightly higher confidence (73.5% versus 71.1%). The value of increased engagement with council nurses was highlighted in our report recommendations.

The constituents of vaccines were also highlighted as a concern in the qualitative responses: “I wanted to know what the vaccines were made of.” In particular, the halal status of vaccines was brought up in this comment and others: “Hopefully you could work on making a vaccine that will be significant with our religion background which is Halal Vaccine (without pork products).”

Discussion

This research represents a sizeable study of the Australian Muslim community's approach to immunisation: the largest previously published study involved only 22 informants belonging to one ethnic group. [9] We valued the opportunity to undertake our fieldwork in a school environment, because it provided a snapshot of the broad cross-section of individuals who make up this community, and of the attitudes and knowledge of those who make decisions on vaccinations. In future, it would be useful to explore how patient-centred factors, such as education and language, impact on decision-making.

Education and side effects

The emergent theme was that the greatest concerns could be traced back to access to relevant information about vaccination, with 73.3% of respondents stating a personal desire for further education about vaccinations and vaccination schedules. This suggests some dissatisfaction with their own levels of knowledge about vaccines and with the education provided as part of their decision-making. Consistent with this disparity, a startlingly high proportion of respondents (75.8%) expressed concern that vaccinations would have adverse outcomes on their children's health, and half of all respondents admitted ignorance of potential vaccine side effects.

Side effects are not uncommon with vaccines, and they are an understandable cause for concern amongst parents. The degree to which the study findings illuminated participants' limited existing knowledge of the side effects involved with vaccination, both qualitatively and quantitatively, indicates a dereliction of duty on the part of the general practitioners administering vaccines. This is an example of a process-centred barrier to informed consent. [10]

One consequence of this lack of knowledge is evident in the 40.8% of respondents who did not feel well equipped to make good decisions for their families. This was reflected in comments such as: "I go with what my local doctor tells me to do, which I assume was the best thing to follow," sentiments echoed across the world. [11] We propose that this lack of confidence, combined with a lingering culture of paternalism in the doctor-patient interaction around vaccine decision-making, hampers the quality of consent given in this community.

Analysis of participants' confidence in their knowledge showed that those who had received information from council nurses were more confident in their decision-making about vaccines than those whose main source was their general practitioner. This highlights a need for patients and general practitioners to partner with these valuable community nurses to enhance patient education and confident decision-making.

Material risks with respect to immunisation in Islam

As Young states: "In a health-care setting, when a patient exercises her autonomy, she decides which of the options for dealing with her health-care problem (including having no treatment at all) will be best for her, given her particular values, concerns and goals." [12] Practising Muslim patients place great value on consuming only those things deemed "halal" (ritually clean) and avoiding those that may be unclean ("haram"). Pork is considered ritually unclean in Islam; if a particular intervention contained pork-derived materials, this could reasonably constitute a material risk to a Muslim patient. [13] In one British study, for example, 42% of Muslim patients indicated that they would not accept any medical intervention unless they were sure it was halal. [14]

Various vaccines, including the MMR and Hib vaccines, compulsory for Muslim pilgrims undertaking the Hajj, [15] contain or involve porcine products in their manufacture, and are thus technically unclean. However, Islamic judicial and medical bodies, embracing the value of beneficence, have created an exemption for such products in the interests of public health. [16] The British statistics, as well as the findings of our study, demonstrate that practising Muslim patients harbour concerns about the halal status of vaccines. Doctors therefore need to be aware of concerns surrounding the prohibition and be able to communicate effectively the facts and exemptions of vaccine composition and manufacture, including referring patients on to more comprehensive sources should the need arise.

Conclusion

This investigation was undertaken to explore decision-making around immunisation. Our study of this Islamic school community demonstrated a perceived gap in the information presented about vaccinations, and a consequent lack of confidence in the decision-making process. Qualitative and quantitative feedback obtained in this study provided evidence that the information currently provided on vaccination is not catering to the needs of this Islamic community.

One limitation of our investigation was the lack of access to a non-Islamic control group as a point of reference for the broader Australian community's attitudes and knowledge. A broader information base would have clarified which components of vaccine education are generic to all communities and allowed education programs to be tailored to the needs and concerns of individual communities. With respect to the Muslim community, there is scope for further inquiry into the attitudes and awareness of general practitioners and nurses regarding the halal status of immunisations and other medical interventions, to triangulate the data and provide a basis for enhanced vaccine-provider education.

The present study, however, provides evidence to encourage an increased role for council nurses in parental vaccine education, and identifies the desire of some Muslim parents for education on, and confirmation of, the ritual cleanliness of vaccines. By taking the time to inquire about and educate parents on all material risks, health professionals can ensure confident, informed decision-making on the part of parents and a safe, healthy future for our children.

Acknowledgements

We are grateful for the assistance of our Academic Advisor, Monica Mercieca, the support of our Field Educators, Ms Rabia Jones and Ms Angela Florio, and all the staff and students of the Australian International Academy – King Khalid Campus.

Conflicts of interest

None declared.

Correspondence

M Bray: mrbra2@student.monash.edu

References

[1] Mohammadi N, Evans D, Jones T. Muslims in Australian Hospitals: clash of cultures. Int J Nurs Pract 2007;13(5):310–5.

[2] Easterbrook C, Maddern G. Porcine and bovine surgical products: Jewish, Muslim and Hindu perspectives. Arch Surg 2008;143(1):366-70.

[3] Elkins R. Muslims urged to refuse 'un-Islamic' vaccinations [Internet]. 2007 Jan 28 [cited 2010 Aug 15]. Available from: http://www.independent.co.uk/life-style/health-and-families/health-news/muslims-urged-to-refuse-unislamic-vaccinations-434027.html.

[4] Kapp C. Surge in polio spreads alarm in northern Nigeria. Lancet 2003;362(1):1631.

[5] Ahmad K. Pakistan struggles to eradicate polio. Lancet Infect Dis 2007;7(4):247.

[6] Tackling negative perceptions towards vaccination. Lancet Infect Dis 2007;7(4):235.

[7] Warraich H. Religious opposition to polio vaccination. Emerg Infect Dis 2009;15(6):978.

[8] Zwar N. Polio makes a comeback. Australian Doctor [Internet]. 2006 Jan 9 [cited 2010 Aug 15]. Available from: http://www.australiandoctor.com.au/clinical/therapy-update/polio-makes-a-comeback.

[9] Brooke D, Omeri A. Beliefs about childhood immunisation among Lebanese Muslim immigrants in Australia. J Multicult Nurs Health 1999;10(3):229-36.

[10] Taylor H. Barriers to informed consent. Semin Oncol Nurs 1999;15(2):89-95.

[11] Marfé E. Immunisation: are parents making informed decisions? J Spec Pediatr Nurs 2007;19(5):20-2.

[12] Young R. Informed consent and patient autonomy. In: Kuhse H, Singer P, editors. A Companion to Bioethics. Oxford: Wiley-Blackwell; 2010. p. 379-89.

[13] Eldred BE, Dean AJ, McGuire TM, Nash AL. Vaccine components and constituents: responding to consumer concerns. Med J Aust 2006;184(4):170–5.

[14] Bashir A, Asif M, Lacey FM, Langley CL, Marriot K, Wilson A. Concordance in Muslim patients in primary care. Int J Pharm Pract 2001;9(1):78.

[15] Saudi Ministry of Hajj. Riyadh: Hajj 2010 Health Requirements [Internet]. 2010 [cited 2010 Oct 16]. Available from: http://www.hajinformation.com/main/p3001.ht.

[16] Islamic Organization for Medical Sciences. The use of unlawful or juridically unclean substances in food and medicine [Internet]. 2009 [cited 2010 Aug 15]. Available from: http://www.islamset.com/qa/index.html.


Ranking the league tables

University league tables are becoming something of an obsession. Their appeal is testament to the 'at a glance' approach used to convey a university's standing, either nationally or internationally. League tables attract public attention and shape the behaviour of universities and policy makers. Demand for them is a product of the increasing globalisation of higher education, tighter allocation of funding, and ultimately the recruitment of foreign students. Medical schools are not immune to this phenomenon, and are banished to a rung on a ladder year after year according to a formula that aggregates subjectively chosen indicators. While governments and other stakeholders place growing importance on league tables, it is necessary to scrutinise the flaws in their methodology and their reliability in measuring the quality of medical schools.

Academic league tables, the brainchild of Bob Morse, were developed for the US News and World Report 30 years ago. [1] They were pioneered to meet a perceived market need for more transparent, comparative data about educational institutions. [1-3] Despite vilification by critics, several similar ranking systems emerged in other countries in response to the introduction of, or rise in, tertiary education tuition fees. [1-3] League tables have since garnered mass appeal and now feature as a staple of the education media cycle, often taking the form of 'consumer guides' produced by commercial publishing firms that seek a return for their product. [1]

Although in existence for less than a decade, the Times Higher Education (THE) World University Rankings, along with the Quacquarelli Symonds (QS) World University Rankings and the Shanghai Jiao Tong University Academic Ranking of World Universities, are considered the behemoths of international university rankings. They provide a snapshot of the top universities overall and by discipline. From 2004 to 2009, THE, a British publication, published the annual THE-QS World University Rankings in association with QS; however, the two companies then parted ways over differences in methodology. The following year, QS assumed sole publication of rankings produced with the original methodology, while THE developed a novel rankings approach in partnership with Thomson Reuters. Many countries also generate national rankings by pitting their universities against each other, Australia's answer being the Good Universities Guide.

League tables employ various methodologies to rank universities. Most involve a three-stage process: first, data are collected on indicators; second, the data for each indicator are scored; and third, the scores from each indicator are weighted and aggregated. [3] The THE rankings use thirteen performance indicators grouped into five areas: teaching, research, citations, industry income and international outlook. [4] Teaching carries a 30% weighting, comprising a reputational survey (15%), PhD awards per academic (6%), undergraduates admitted per academic (4.5%), income per academic (2.25%) and the ratio of PhD to Bachelor awards (2.25%). [4,5] QS uses a similar construct to render its final rankings. In contrast, the Shanghai rankings are based solely on research credentials, such as the number of Nobel- and Fields-winning alumni/faculty, highly cited researchers, and non-review articles published in Nature and Science. [6]
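
To make the arithmetic of that third stage concrete, here is a minimal sketch of a weighted aggregation using the THE teaching sub-weights quoted above. The indicator scores are invented for illustration, and all variable names are ours rather than anything published by THE or Thomson Reuters.

```python
# A minimal sketch of the three-stage league-table process described above:
# collect indicator data, score it, then weight and aggregate.
# The weights follow the THE 'teaching' sub-weights quoted in the text;
# the indicator scores themselves are invented for illustration.

# Stage 2 output: each indicator already scored on a 0-100 scale.
scores = {
    "reputational_survey": 82.0,
    "phd_awards_per_academic": 64.0,
    "undergrads_per_academic": 71.0,
    "income_per_academic": 58.0,
    "phd_to_bachelor_ratio": 66.0,
}

# Stage 3: weights for the THE teaching area (they sum to 0.30, i.e. 30%).
weights = {
    "reputational_survey": 0.15,
    "phd_awards_per_academic": 0.06,
    "undergrads_per_academic": 0.045,
    "income_per_academic": 0.0225,
    "phd_to_bachelor_ratio": 0.0225,
}

teaching_score = sum(scores[k] * weights[k] for k in scores)
print(f"Weighted teaching contribution: {teaching_score:.2f} / 30")
```

Swapping in a different set of weights, as each publisher effectively does, changes the aggregate, and therefore the final ranking, without any change in the underlying data.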

The influence of ranking tables has grown to such an extent that various vested interests engage with rankings for different reasons. [1-3,7-9] A 2006 international survey revealed that 63% of higher education leaders made strategic, organisational, managerial or academic decisions based on rankings. [7] This is not always to the benefit of students or staff, and sometimes simply reflects the desire of a senior team to appear to have had an easily identifiable impact. Rankings are also claimed to have influenced national governments, particularly in the allocation of funding, quality assessment and efforts to create 'world class' universities. [8] Furthermore, there is limited evidence that employers use ranking lists when selecting graduate recruits. [8]

Academic league tables are no strangers to criticism, which reflects methodological, pragmatic, moral and philosophical concerns. Critics argue that ranking lists have borrowed the metaphor of the league table from the world of sport: a simplistic instrument incapable of evaluating the complex systems of higher education. [3] Rankings are guided by 'what sells in the market' rather than the rigorous quality-assurance practices of academic bodies.

The world's main ranking systems bear little resemblance to each other because they use different indicators and weightings to arrive at a measure of quality. [1-3,8,9,11] According to a study by Ioannidis et al., [10] the concordance between the 2006 rankings by Shanghai and the Times is modest at best, with only 133 universities holding positions in both top-200 lists. The publishers of these tables impose a specific definition of quality onto the institutions being ranked by arbitrarily establishing a set of indicators and assigning each a weight with little theoretical basis. [1-3,8] Readers are left oblivious to the fact that many other legitimate indicators could have been adopted; to the reader, the author's judgement is, in effect, final. Many academics hold the view that rankings fail to capture important qualities of an educational institution that cannot be measured by weightings and numbers. [8]
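
The scale of that disagreement is easy to picture with a toy calculation. The snippet below builds two invented top-200 lists that, like the 2006 Shanghai and Times lists, share exactly 133 entries, and computes the overlap as a set intersection; no real ranking data is used.

```python
# A toy illustration of the 'concordance' point above: the overlap between
# two top-200 lists, computed as the size of their intersection.
# The lists are placeholders constructed to share 133 entries, matching
# the figure Ioannidis et al. report for the 2006 rankings.
shanghai_top200 = {f"university_{i}" for i in range(200)}
times_top200 = {f"university_{i}" for i in range(67, 267)}

overlap = shanghai_top200 & times_top200
print(len(overlap))  # prints 133: barely two-thirds of either list
```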

Statistical discrepancies also compound the tenuous nature of league tables. Institutions are often ranked even when the differences in the underlying data are not statistically significant. [1-3,8] There have been many instances where the data needed to compile ranking scores were missing or unavailable, especially in international comparisons. [1-3,8] Moreover, data availability is itself a source of bias, with publishers opting for convenient, readily available data at the expense of accuracy and relevance. [1-3,8]

Another cause for concern is that rankings place a significant emphasis on research while minimising the role of education in universities. [5] Most educators would recognise that the indicators for quality teaching and learning are limited. [1-3,8] Various proxies for teaching 'quality' are used, including average student-staff ratios. [1-3,8,11] The lack of robust data on teaching quality is attributed to the difficulty, expense and time involved in collecting it. [2] Given that teaching quality is a key dimension of medical education, its neglect severely compromises the meaning of any data produced by these tables.

The main mechanism for quality assurance and evaluation amongst medical schools at present is regular accreditation by national or regional accreditation bodies. [5] The Australian Medical Council (AMC) is responsible for setting out the principles and standards of Australian medical education, including assessment. The 'one-size-fits-all' approach of ranking tables is a futile means of measuring the quality of medical schools, since medical education is characterised by a range of unique indicators, for example, clinical teaching hours and global/rural health exposure. Largely as a consequence of this accreditation, most medical schools deliver a consistent level of education and produce competent interns to practise in the Australian healthcare system. By contrast, league tables are over-simplified assessment tools for evaluating the quality of medical education, and even have the potential to harm the standards of education. [10]

Although league tables are not exalted and revered in Australia to the same degree as in the US or Europe, the nation is inadvertently heeding this imperious trend. League tables are nothing more than 'popularity polls', and should not become an instrument for measuring the quality of universities and medical education.

References

[1] Usher A, Savino M. A world of difference: a global survey of university league tables. Toronto (ON): Education Policy Institute; 2006 Jan. 63 p.

[2] Stella A, Woodhouse D. Ranking of higher education institutions. Melbourne: Australian Universities Quality Agency; 2006 Aug. 30 p.

[3] Marginson S. Global university rankings: where to from here. Asia Pacific Association for International Education. 2007 Mar 7-9; Singapore. Melbourne: Centre for the Study of Higher Education; 2007 Mar.

[4] Baty P. Rankings methodology. Times Higher Education; 2011 Oct 6. [updated 2012; cited 2012 Apr 7]. Available from: http://www.timeshighereducation.co.uk/world-university-rankings/2011-2012/analysis-rankings-methodology.html

[5] Harden RM, Wilkinson D. Excellence in teaching and learning in medical education. Med Teach. 2011;33:95-6.

[6] Liu NC, Cheng Y. The academic rankings of world universities. Higher Education in Europe. 2005 Jul;30(2):127-36.

[7] Hazelkorn E. Handle with care [Internet]. Times Higher Education; 2010 Jul 8 [updated 2010 Jul 8; cited 2012 Apr 7]. Available from: http://www.timeshighereducation.co.uk/story.asp?storycode=412342.

[8] Lee H. Rankings of higher education institutions: a critical review. Qual High Educ. 2008 Nov;14(3):187-207.

[9] Saisana M, D’Hombres B. Higher education rankings: robustness issues and critical assessment. Luxembourg: Office for Official Publications of the European Communities; 2008. 106 p.

[10] Ioannidis JPA, Patsopoulos NA, Kavvoura FK, Tatsioni A, Evangelou E, Kouri I, Contopoulos-Ioannidis DG, Liberopoulos G. International ranking systems for universities and institutions: a critical appraisal. BMC Med. 2007 Oct 25;5:30.

[11] McGaghie WC, Thompson JA. America's best medical schools: a critique of the U.S. News and World Report rankings. Acad Med. 2001 Oct;76(10):985-92.


Intra-vitreal bevacizumab in patients with Juvenile Vitelliform Dystrophy (Best Disease)

Figure 1. Right fundus of Case One, eighteen months prior to the time of presentation with decreased left visual acuity. A vitelliform macular lesion typical of Best disease is present.

Juvenile vitelliform dystrophy (Best disease) is an inherited degenerative macular condition. In recent years, monoclonal antibodies have been employed to help prevent the decline in vision associated with macular fluid. This report documents the use of intra-vitreal bevacizumab in two siblings (aged thirteen and fifteen) with Best disease, and examines the changes observed in visual acuity and macular oedema over 39- and 19-week periods respectively.


Early impact of rotavirus vaccination

Background: Rotavirus is the most common cause of severe gastroenteritis in children, and two vaccines to prevent rotavirus infection have been licensed since 2006. The World Health Organisation recommends the inclusion of rotavirus vaccination of infants in all national immunisation programs. Aim: To review current literature evaluating the global impact of rotavirus immunisation programs over the first two years of their implementation. Methods: A MEDLINE search was undertaken to identify relevant observational studies. Results: Eighteen relevant studies were identified, carried out in eight countries. Introduction of the vaccine was associated with a reduction in all-cause gastroenteritis hospitalisation rates of 12-78% in the target group and up to 43% in older groups ineligible for the vaccine. Reductions in hospitalisation rates for confirmed rotavirus cases ranged between 46% and 87% in the target group. Mortality from all-cause gastroenteritis was reduced by 41% and 45% in two countries studied. Conclusions: Early research evaluating rotavirus immunisation programs suggests significant decreases in diarrhoeal disease rates extending beyond the immunised group. Further monitoring will allow vaccine performance to be optimised and the long-term effects of vaccination programs to be assessed.