
Graded exposure to neurophobia: Stopping it from affecting another generation of students

Neurophobia

Neurophobia has probably afflicted you at some stage during your medical school training, whether figuring out how to correlate signs elicited on examination with a likely diagnosis, or deciphering which tract has decussated at a particular level in the neuraxis. The disease definition of neurophobia as the ‘fear of neural sciences and clinical neurology’ is a testament to its influence; it affects up to 50% of students and junior doctors at least once in their lifetime. [1] The severity of the condition ranges from simple dislike or avoidance of neurology to sub-par clinical assessment of patients with a neurological complaint. Neurophobia is often compounded by a crippling lack of confidence in approaching and understanding basic neurological concepts.

According to the World Health Organisation, neurological conditions contribute about 6.3% of the global health burden and account for as much as 12% of global mortality. [2] Given these figures, neurophobia persisting into postgraduate medical years may adversely influence the treatment received by the significant proportion of patients who present with neurological complaints. This article will explore why neurophobia exists and outline some strategies for remedying it from both a student and a teaching perspective.

Perceptions of neurology

One factor contributing to the existence of neurophobia is the perception of neurology within the medical community. The classic stereotype was vividly depicted by the editor of the British Medical Journal: ‘the neurologist is one of the great archetypes: a brilliant, forgetful man with a bulging cranium….who….talks with ease about bits of the brain you’d forgotten existed, adores diagnosis and rare syndromes, and – most importantly – never bothers about treatment.’ [3] Talley and O’Connor’s description of the neurologist, identifiable by the hatpins kept in an expensive suit and the keys of an expensive motor car used to elicit plantar reflexes, further solidifies this mythology for another generation of Australian medical students. [4] Some have even proposed that neurologists thrive in a specialty known for its intellectual pursuits and exclusivity – a specialty where ‘only young Einsteins need apply.’ [5] Unfortunately, these stereotypes may only serve to perpetuate the reputation of neurology as a difficult specialty, one that is complex and full of rare diagnoses (Figure 1).

Stereotypes, of course, are rarely accurate, so what do students really think about neurology? Several questionnaires have put this question to medical students across various countries, and the results are strikingly similar in theme. Students consider neurology the most difficult of the internal medicine specialties. Not surprisingly, it was also the specialty students perceived they had the least knowledge about and, understandably, were least confident in. [5-10] Such sentiments are also shared amongst residents, junior doctors and general practitioners in the United Kingdom (UK) and United States (US). [8-10] The persistence of this phenomenon after medical school is supported by the number of intriguing and difficult case reports published in the prestigious Lancet: neurological cases (26%) appeared at more than double the frequency of the next highest specialty, gastroenterology (12%), as a proportion of total case reports from 2003 to 2008. [11] However, this finding may also be explained by the fact that in one survey neurology was ranked the most interesting of specialties by medical students, especially after they had completed a rotation within the specialty. [10] So whilst neurophobia exists, it is not outlandish to claim that many medical students do at least find neurology very interesting and would therefore actively seek to improve their understanding and knowledge.

The perception of neurological disease amongst students and the wider community can also be biased. Films such as The Diving Bell and the Butterfly (2007), about locked-in syndrome, are not only compelling accounts of peculiar neurological diseases that capture the imagination of the public; they also highlight the absence of effective treatment following established cerebral infarction. Definitive cures for progressive illnesses, including multiple sclerosis and motor neuron disease, are also yet to be discovered, but the reality is that there are numerous effective treatments for a variety of neurological complaints and diseases. Some students will thus incorrectly perceive that the joy gained from neurology comes only from the challenge of arriving at a diagnosis rather than from providing useful treatment to patients.

 

Other causes of neurophobia

Apart from the perception of neurology, a number of other reasons for students’ neurophobia and the perceived difficulty of neurology have been identified. [5-10] Contributory factors to neurophobia can be divided into pre-clinical and clinical exposure factors. Pre-clinical factors include inadequate teaching in the pre-clinical years, insufficient knowledge of basic neuroanatomy and neuroscience, as well as difficulty in correlating the biomedical sciences with neurological cases (especially localising lesions). Clinical exposure factors include the length of the neurology examination, a perception of complex diagnoses stemming from inpatients being a biased sample of neurology patients, limited exposure to neurology and a paucity of bedside teaching.

Preventing neurophobia – student and teacher perspective

It is clearly much better to prevent neurophobia from occurring than to attempt to remedy it once it has become ingrained. Addressing pre-clinical exposure factors can prevent its development early during medical school. Media reports have quoted doctors and students bemoaning the lack of anatomy teaching contact hours in Australian medical courses. [12, 13] Common sense dictates that the earlier and more frequent the exposure students have to basic neurology in their medical programs (for example, introductory sessions on the brain, spinal cord and cranial nerves that are reinforced later down the track), the greater the chance of preventing neurophobia in their clinical years. A fundamental understanding of neuroanatomy is essential to localising lesions in neurology. Clinically-relevant neurosciences should likewise receive emphasis in pre-clinical teaching.

Many neurology educators concur with students’ wishes for the teaching of basic science and clinical neurology to be effectively integrated. [14, 15] This needs to be a priority. The problem- or case-based learning model adopted by many undergraduate programs should easily accommodate this integration, using carefully selected cases reinforced with continual assessment via written or observed clinical exams. [15] Neuroanatomy can be a complex science to comprehend, so more clinically-appropriate and student-focused rules or tricks should be taught to simplify the concepts. The ‘rule of fours’ for brainstem vascular syndromes is one delightful example of such a rule. [16] A teaching ‘rule’ of this kind is more useful for students than memorising anatomical mnemonics, which favour rote learning over developing a deeper understanding of anatomical concepts. Given the reliance on increasingly sophisticated neuroimaging in clinical neurology, correlating clinical neuroimaging with the relevant anatomical concepts must also be included in the pre-clinical years.

During the clinical years, medical students ideally want more frequent and improved bedside teaching in smaller tutorial groups. The feasibility of smaller groups is beyond the scope of my article, but I will emphasise one style of bedside teaching that is most conducive to learning neurology. Bedside teaching allows the student to carry out important components of a clinical task under supervision, test their understanding during a clinical discussion and then reflect on possible areas of improvement during a debrief afterwards. This century-old style of bedside teaching, more recently characterised in educational theory as the application of an experiential learning cycle (ELC) framework, works for students now just as it did for their teachers when they themselves were students of neurology. [17, 18] The essential questions for a clinical teacher to ask during bedside tutorials are ‘why?’ and ‘so what?’ These inquiries gauge students’ deeper understanding of the interactions between an illness and its neuroanatomical correlations, rather than simply testing recall of isolated medical facts. [19]

There is also the issue of the inpatient population within the neurology ward. The overwhelming majority of patients are people who have experienced a stroke and, in large tertiary teaching hospitals, there will also be several patients with rare diagnoses and syndromes. This selection of patients is unrepresentative of the broad nature of neurological presentations and especially excludes patients whose conditions are non-acute and who are referred to outpatient clinics. Students are sometimes deterred by patients with rare syndromes that would not even be worth including in a differential diagnosis list in an objective structured clinical examination. More exposure to outpatient clinics would therefore help students develop skills in recognising common neurological presentations. The learning and teaching of neurology at outpatient clinics should, like bedside tutorials, follow the ELC model. [18] Outpatient clinics should be made mandatory within neurology rotations, and rather than making students passive observers, as is commonplace, students should be required to see the patient beforehand (especially if the patient is known to the neurologist and has signs or important learning points that can be garnered from their history). A separate clinic room for the student is necessary for this approach, with the neurologist coming in after a period of time, allowing the student to present their findings followed by an interactive discussion of the relevant concepts. Next, the consultant can conduct the consultation with the student observing. Following feedback, the student can think about what can be improved and plan the next consultation, as described in the ELC model (Figure 2). Time constraints make teaching difficult in outpatient settings. However, with this approach, while the student is seeing the known patient by themselves, the consultant can see other (less interesting) patients in the clinic, so in essence no time is lost apart from the teaching time itself. This inevitably means the student may miss seeing every second patient that comes to the clinic, but sacrificing quantity for quality of learning may be more beneficial in combating neurophobia long term.

Neurology associations in the US and UK have developed curricula as “must-know” guidelines for students and residents. [20, 21] The major benefits of these endeavours are to set a minimum standard across medical schools and to provide clear objectives to which students can aspire. This helps develop recognition of common neurological presentations and the acquisition of essential clinical skills. For this reason, the development of a uniform neurology curriculum adopted across all Australian medical school programs may also alleviate neurophobia.

The responsibility to engage with and seek learning opportunities in neurology, and so combat neurophobia, nevertheless lies with students. Students’ own motivation is vital in seeking improvement. It is often hardest to motivate students who find neurology boring and thus fail to engage with the subject; nevertheless, interest often picks up once students feel more competent in the area. To help improve knowledge and skills in neurology, students can now use a variety of resources beyond textbooks and journals to complement their clinical neurology exposure. One growing trend in the US is the use of online learning and resources for neurology. A variety of online resources supplementing bedside learning and didactic teaching (e.g. lectures) benefits students because of the active learning process they promote: integrating the acquisition of information, placing it in context and then using it practically in patient encounters. [9] Medical schools should therefore experiment with novel resources and teaching techniques that students will find useful – ‘virtual neurological patients’, video tutorials and neuroanatomy teaching computer programmes are all potential modern teaching tools. This format of electronic teaching is one way to engage students who are otherwise uninterested in neurology.

In conclusion, recognising the early signs of neurophobia is important for medical students and teachers alike. Once it is diagnosed, it is the responsibility of both student and teacher to minimise the burden of disease.

Acknowledgements

The author would like to thank May Wong for editing and providing helpful suggestions for an earlier draft of the article.

Conflicts of interest

None declared.

Correspondence

B Nham: benjaminsb.nham@gmail.com


The ethics of euthanasia

Introduction

The topic of euthanasia is one that is shrouded in much ethical debate and ambiguity. Various types of euthanasia are recognised, with active voluntary euthanasia, assisted suicide and physician-assisted suicide eliciting the most controversy. [1] Broadly speaking, these terms describe the termination of a person’s life to end their suffering, usually through the administration of drugs. Euthanasia is currently illegal in all Australian states, reflecting the status quo of most countries, although there are a handful of countries and states where acts of euthanasia are legally permitted under certain conditions.

Advocates of euthanasia argue that people have a right to make their own decisions regarding death, and that euthanasia is intended to alleviate pain and suffering, hence the term “mercy killing.” They hold the view that active euthanasia is not morally worse than the withdrawal or withholding of medical treatment, and, erroneously, describe this practice as “passive euthanasia.” Such views are contested by opponents of euthanasia, who argue for the sanctity of human life, that euthanasia amounts to murder, and that it abuses autonomy and human rights. Furthermore, it is said that good palliative care can relieve patients’ suffering, and that palliative care, rather than euthanasia, should be the answer in modern medicine. This article will define several terms relating to euthanasia in order to frame the key arguments used by proponents and opponents of euthanasia. It will also outline the legal situation of euthanasia in Australia and countries abroad.

Defining euthanasia

The term “euthanasia” is derived from Greek, literally meaning “good death”. [1] Taken in its common usage, however, euthanasia refers to the termination of a person’s life to end their suffering, usually from an incurable or terminal condition. [1] It is for this reason that euthanasia has also been termed “mercy killing”.

Various types of euthanasia are recognised. Active euthanasia refers to the deliberate act, usually through the intentional administration of lethal drugs, of ending an incurably or terminally ill patient’s life. [2] On the other hand, supporters of euthanasia use another term, “passive euthanasia”, to describe the deliberate withholding or withdrawal of life-prolonging medical treatment resulting in the patient’s death. [2] Unsurprisingly, the term “passive euthanasia” has been described as a misnomer. In Australia and most countries around the world, this practice is not considered euthanasia at all. Indeed, according to Bartels and Otlowski, [2] withholding or withdrawing life-prolonging treatment, either at the request of the patient or when it is considered to be in the best interests of the patient, “has become an established part of medical practice and is relatively uncontroversial.”

Acts of euthanasia are further categorised as “voluntary”, “involuntary” and “non-voluntary.” Voluntary euthanasia refers to euthanasia performed at the request of the patient. [1] Involuntary euthanasia describes euthanasia performed without the patient’s request, with the intent of relieving their suffering – which, in effect, amounts to murder. [3] Non-voluntary euthanasia relates to a situation where euthanasia is performed when the patient is incapable of consenting. [1] The term most relevant to the euthanasia debate is “active voluntary euthanasia”, which refers to the deliberate act of ending an incurably or terminally ill patient’s life, usually through the administration of lethal drugs, at his or her request. The main difference between active voluntary euthanasia and assisted suicide is that in assisted suicide and physician-assisted suicide, the patient performs the killing act. [2] Assisted suicide is when a person intentionally assists a patient, at their request, to terminate his or her life. [2] Physician-assisted suicide refers to a situation where a physician intentionally assists a patient, at their request, to end his or her life, for example by the provision of information and drugs. [3]

Another concept that is linked to end-of-life decisions and should be differentiated from euthanasia is the doctrine of double effect. The doctrine of double effect excuses the death of the patient that may result, as a secondary effect, from an action taken with the primary intention of alleviating pain. [4] Supporters of euthanasia may describe this as indirect euthanasia, but again, this term should be discarded when considering the euthanasia debate. [3]

Legal situation of active voluntary euthanasia and assisted suicide

In Australia, active voluntary euthanasia, assisted suicide and physician-assisted suicide are illegal (see Table 1). [1] In general, across all Australian states and territories, any deliberate act resulting in the death of another person is defined as murder. [2] The prohibition of euthanasia and assisted suicide is established in the criminal legislation of each Australian state, as well as the common law in the common law states of New South Wales, South Australia and Victoria. [2]

The prohibition of euthanasia and assisted suicide in Australia has been the status quo for many years. However, there was a period when the Northern Territory permitted euthanasia and physician-assisted suicide under the Rights of the Terminally Ill Act (1995). The Act came into effect in 1996 and made the Northern Territory the first place in the world to legally permit active voluntary euthanasia and physician-assisted suicide. Under this Act, competent terminally ill adults aged 18 or over were able to request a physician to help them die. This Act was short-lived, however, as the Federal Government overturned it in 1997 with the Euthanasia Laws Act 1997. [1,2] The Euthanasia Laws Act 1997 denied the territories the power to legislate to permit euthanasia or assisted suicide. [1] There have been a number of attempts in various Australian states, over the past decade and more recently, to legislate for euthanasia and assisted suicide, but all have failed to date, owing to a majority consensus against euthanasia. [1]

A number of countries and states around the world have permitted euthanasia and/or assisted suicide in some form; however, this is often under specific conditions (see Table 2).

Arguments for and against euthanasia

There are many arguments that have been put forward for and against euthanasia. A few of the main arguments for and against euthanasia are outlined below.

For

Rights-based argument

Advocates of euthanasia argue that a patient has the right to make the decision about when and how they should die, based on the principles of autonomy and self-determination. [1, 5] Autonomy is the concept that a patient has the right to make decisions relating to their life so long as it causes no harm to others. [4] Advocates relate the notion of autonomy to the right of an individual to control their own body, and argue that individuals should therefore have the right to make their own decisions concerning how and when they will die. Furthermore, it is argued that, as part of our human rights, there is a right to make our own decisions and a right to a dignified death. [1]

Beneficence

It is said that relieving a patient of their pain and suffering by performing euthanasia does more good than harm. [4] Advocates of euthanasia express the view that the fundamental moral values of society, compassion and mercy, require that no patient be allowed to suffer unbearably, and that mercy killing should therefore be permissible. [4]

The difference between active euthanasia and passive euthanasia

Supporters of euthanasia claim that active euthanasia is not morally worse than passive euthanasia – the withdrawal or withholding of medical treatments that result in a patient’s death. In line with this view, it is argued that active euthanasia should be permitted just as passive euthanasia is allowed.

James Rachels [12] is a well-known proponent of euthanasia who advocates this view. He argues, on utilitarian grounds, that there is no moral difference between killing and letting die, as the intention is usually similar. He illustrates this argument with two hypothetical scenarios. In the first, Smith anticipates an inheritance should anything happen to his six-year-old cousin, and drowns the child while he takes his bath. In the second, Jones likewise stands to inherit a fortune should anything happen to his six-year-old cousin, and intends to drown him, but instead witnesses the child drown by accident and lets him die. Callahan [9] highlights the fact that Rachels uses a hypothetical case in which both parties are morally culpable, which fails to support Rachels’ argument.

Another of his arguments is that active euthanasia is more humane than passive euthanasia, as it involves “a quick and painless” lethal injection, whereas the latter can result in “a relatively slow and painful death.” [12]

Opponents of euthanasia argue that there is a clear moral distinction between actively terminating a patient’s life and withdrawing or withholding treatment which ends a patient’s life. Letting a patient die from an incurable disease may be seen as allowing the disease to be the natural cause of death without moral culpability. [5] Life-support treatment merely postpones death and when interventions are withdrawn, the patient’s death is caused by the underlying disease. [5]

Indeed, it is this view that is strongly endorsed by the Australian Medical Association, which opposes voluntary active euthanasia and physician-assisted suicide but does not consider the withdrawal or withholding of treatment that results in a patient’s death to be euthanasia or physician-assisted suicide. [1]

Against

The sanctity of life

Central to the argument against euthanasia is society’s view of the sanctity of life, and this can have both a secular and a religious basis. [2] The underlying ethos is that human life must be respected and preserved. [1]

The Christian view sees life as a gift from God, who ought not to be offended by the taking of that life. [1] Similarly, the Islamic faith holds that “it is the sole prerogative of God to bestow life and to cause death.” [7] The withholding or withdrawal of treatment is permitted when it is futile, as this is seen as allowing the natural course of death. [7]

Euthanasia as murder

Society views an action whose primary intention is to kill another person as inherently wrong, regardless of the patient’s consent. [8] Callahan [9] describes the practice of active voluntary euthanasia as “consenting adult killing.”

Abuse of autonomy and human rights

While autonomy is invoked by advocates of euthanasia, it also features in the argument against euthanasia. Kant and Mill [3] argued that the principle of autonomy forbids the voluntary ending of the conditions necessary for autonomy, which is precisely what ending one’s life would do.

It has also been argued that patients’ requests for euthanasia are rarely autonomous, as most terminally ill patients may not be of a sound or rational mind. [10]

Callahan [9] argues that the notion of self-determination requires that the right to lead our own lives is conditioned by the good of the community, and therefore we must consider risk of harm to the common good.

In relation to human rights, some critics of euthanasia argue that the act of euthanasia contravenes the “right to life”. The Universal Declaration of Human Rights states that “Everyone has the right to life.” [3] Right-to-life advocates dismiss claims that there is a right to die, which would make suicide virtually justifiable in any case. [8]

The role of palliative care

It is often argued that pain and suffering experienced by patients can be relieved by administering appropriate palliative care, making euthanasia a futile measure. [3] According to Norval and Gwynther [4] “requests for euthanasia are rarely sustained after good palliative care is established.”

The rights of vulnerable patients

If euthanasia were to become an accepted practice, it may give rise to situations that undermine the rights of vulnerable patients. [11] These include coercion of patients receiving costly treatments to accept euthanasia or physician-assisted suicide.

The doctor-patient relationship and the physician’s role

Active voluntary euthanasia and physician-assisted suicide undermine the doctor-patient relationship, destroying the trust and confidence built in such a relationship. A doctor’s role is to help and save lives, not end them. Casting doctors in the role of administering euthanasia “would undermine and compromise the objectives of the medical profession.” [1]

Conclusion

It can be seen that euthanasia is indeed a contentious issue, with the heart of the debate lying in active voluntary euthanasia and physician-assisted suicide. Its legal status in Australia is that of a criminal offence, conferring murder or manslaughter charges according to the criminal legislation and/or common law of the Australian states. Australia’s prohibition and criminalisation of euthanasia and assisted suicide reflects the legal status quo in most other countries around the world; only a few countries and states have legalised acts of euthanasia and/or assisted suicide. The many arguments that have been put forward for and against euthanasia, and the handful outlined here, provide only a glimpse into the ethical debate and controversy surrounding the topic of euthanasia.

Conflicts of interest

None declared.

Correspondence

N Ebrahimi: nargus.e@hotmail.com

References

[1] Bartels L, Otlowski M. A right to die? Euthanasia and the law in Australia. J Law Med. 2010 Feb;17(4):532-55.

[2] Walsh D, Caraceni AT, Fainsinger R, Foley K, Glare P, Goh C, et al. Palliative medicine. 1st ed. Canada: Saunders; 2009. Chapter 22, Euthanasia and physician-assisted suicide; p.110-5.

[3] Goldman L, Schafer AI, editors. Goldman’s Cecil Medicine. 23rd ed. USA: Saunders; 2008. Chapter 2, Bioethics in the practice of medicine; p.4-9.

[4] Norval D, Gwyther E. Ethical decisions in end-of-life care. CME. 2003 May;21(5):267-72.

[5] Kerridge I, Lowe M, Stewart C. Ethics and law for the health professions. 3rd ed. New South Wales: Federation Press; 2009.

[6] Legemaate J. The Dutch euthanasia act and related issues. J Law Med. 2004 Feb;11(3):312-23.

[7] Bulow HH, Sprung CL, Reinhart K, Prayag S, Du B, Armaganidis A, et al. The world’s major religions’ points of view on end-of-life decisions in the intensive care unit. Intensive Care Med. 2008 Mar;34(3):423-30.

[8] Somerville MA. “Death talk”: debating euthanasia and physician-assisted suicide in Australia. Med J Aust. 2003 Feb 17;178(4):171-4.

[9] Callahan D. When self-determination runs amok. Hastings Cent Rep. 1992 Mar-Apr;22(2):52-55.

[10] Patterson R, George K. Euthanasia and assisted suicide: A liberal approach versus the traditional moral view. J Law Med. 2005 May;12(4):494-510.

[11] George RJ, Finlay IG, Jeffrey D. Legalised euthanasia will violate the rights of vulnerable patients. BMJ. 2005 Sep 24;331(7518):684-5.

[12] Rachels J. Active and passive euthanasia. N Engl J Med. 1975 Jan 9;292(2):78-80.


Global inequities and the international health scene – Gustav Nossal


All young people should be deeply concerned at the global inequities that remain, and nowhere is this more clearly seen than in international health. We in the lucky country particularly need to be mindful of this, as we enjoy some of the best health standards in the world (with the notable exception of Aboriginal and Torres Strait Islander Australians). After decades of neglect, rays of hope have emerged over the last 10-15 years. This brief essay seeks to outline the dilemma and to give some pointers to future solutions.

Mortality statistics

A stark example of the health gap is shown in Table 1: life expectancy at birth has risen markedly in the richer countries in the last 50 years, but has actually gone backwards in some countries, the situation being worst in Sub-Saharan Africa, where life expectancy is now less than half of that in industrialised countries.

The death rate in children under five is widely used as a rough and ready measure of the health of a community, and also of the effectiveness of health services. Table 2 shows some quite exceptional reductions in the richer countries over half a century, but a bleak picture in many developing countries. India is doing reasonably well, presumably as a result of rapid economic growth, although the good effects are slow to trickle down to the rural poor. The Table shows the toll of communicable diseases, and it is clear that at least two-thirds of these premature deaths are preventable.

We can total up these deaths: the total comes to 20 million in 1960 and less than 8 million in 2010. Much of this improvement is due to international aid. One can do some optimistic modelling: if we project the downward trend to 2025, deaths will be around 4.5 million. This would mean a total of 27 million extra child deaths prevented, chiefly through better treatment of pneumonia, diarrhoea and malaria; better newborn care practices; and the introduction of several new vaccines.
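
A figure of roughly that size can be reproduced with a back-of-the-envelope calculation. The sketch below is illustrative only: the essay does not specify its model, so this assumes a simple linear decline from 8 million annual deaths in 2010 to 4.5 million in 2025 and counts deaths averted against a counterfactual in which the 2010 toll stays constant.

```python
# Illustrative projection of under-five deaths averted, 2011-2025.
# Assumptions (not from the essay): linear decline between the two anchor
# years, and a flat counterfactual fixed at the 2010 level of 8 million.

def projected_deaths(year, start=(2010, 8.0e6), end=(2025, 4.5e6)):
    """Linearly interpolate annual under-five deaths between two anchor years."""
    (y0, d0), (y1, d1) = start, end
    return d0 + (d1 - d0) * (year - y0) / (y1 - y0)

counterfactual = 8.0e6  # annual deaths if the 2010 toll persisted unchanged
averted = sum(counterfactual - projected_deaths(y) for y in range(2011, 2026))

print(f"Extra child deaths prevented 2011-2025: {averted / 1e6:.1f} million")
# Prints 28.0 million with these assumptions, the same order as the
# essay's figure of 27 million.
```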

A final chilling set of statistics is presented in Table 3, concerning the risk of a mother dying in childbirth. As can be seen, this is now exceedingly rare in industrialised countries; with few exceptions, those rare deaths occur in mothers who have some underlying serious disease not connected with their pregnancy. In contrast, deaths in childbirth are still common in poor countries. Once again, the chief causes, obstructed labour, haemorrhage and sepsis, are largely preventable. It is unconscionable that a woman in the worst-affected countries is 400 times more likely to die in childbirth than a woman in the safest country. In some villages with high birth rates and high death rates, a woman’s lifetime chance of dying from a pregnancy complication is one in seven!
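
The “one in seven” lifetime figure reflects how per-pregnancy risks compound with high fertility. The sketch below reproduces a number of that order under assumed values, a 2% risk of death per pregnancy and eight pregnancies; neither value comes from the essay and both are illustrative only.

```python
# Illustrative only: the essay quotes a lifetime risk of about 1 in 7 but not
# the underlying per-pregnancy risk or parity, so both values are assumptions.

per_pregnancy_risk = 0.02   # assumed ~2% chance of dying from each pregnancy
pregnancies = 8             # assumed number of pregnancies in a high-fertility village

lifetime_risk = 1 - (1 - per_pregnancy_risk) ** pregnancies
print(f"Lifetime risk: {lifetime_risk:.3f} (about 1 in {1 / lifetime_risk:.0f})")
# Prints roughly 0.149, i.e. about 1 in 7, showing how modest per-birth
# risks compound towards the lifetime figure quoted above.
```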

International aid is increasing but must go higher

Properly deployed and in full partnership with the developing country, international aid can really help. At the prompting of the former Prime Minister of Canada, Lester Pearson, the United Nations mandated that rich countries should devote 0.7% of their gross national income (GNI) to development assistance. Only five countries have reached or exceeded that goal, namely Denmark, Norway, Sweden, The Netherlands and Luxembourg. The global total is only 0.32% of GNI, or US$128.5 billion in 2010. Australia, presently at $4.8 billion, has pledged to reach 0.5% of GNI by 2015. The health component of aid varies from 7-15%.

Major new programmes speed progress in health

In the last 10-15 years, and for the first time, major health programmes have come forward whose budgets are measured in billions rather than millions. One with which I am particularly familiar is the GAVI Alliance, a global alliance for vaccines and immunisation. I had the honour of being involved in the “pre-history” of GAVI when I acted as the Chairman of the Strategic Advisory Council of the Bill and Melinda Gates Children’s Vaccine Program from 1997-2003. Alerted to the fact that Bill and Melinda Gates wished to make a major donation in the field of vaccines, a working party with representatives from the World Health Organization (WHO), UNICEF, The World Bank, and the Gates and Rockefeller Foundations engaged in a series of intense discussions with all stakeholders throughout 1998 and 1999, prominently including the Health Ministers of developing countries. GAVI was launched at the World Economic Forum in Davos in January 2000 with an initial grant of $750 million from the Gates Foundation. Its purpose is to bring vaccines, including newer vaccines, to the 72 poorest countries in the world, and to sponsor research and development of still further vaccines. As regards the six traditional childhood vaccines, namely those against diphtheria, tetanus, whooping cough, poliomyelitis and measles, together with BCG (for tuberculosis), 326 million additional children have been immunised and coverage has increased from 66% to 82%. Some 5.5 million deaths have been averted. Sturdy progress has been made in deploying vaccines against hepatitis B, one form of meningitis and yellow fever. More ambitiously, programmes are now being rolled out against pneumonia, the worst form of viral diarrhoea, cervical cancer and German measles. The budget of the GAVI Alliance is now over $1 billion per year, but it will have to rise as further vaccines are included. There are still 19 million children unimmunised each year. One GAVI strategy is to demand some co-payment from the affected country, requiring it to give a higher priority to health and encouraging sustainability.

Two separate large programmes are addressing the problem of HIV/AIDS, arguably the worst pandemic the world has ever faced. They are the Global Fund for AIDS, TB and Malaria and PEPFAR, the US President’s Emergency Plan for AIDS Relief. Together these programmes spend an astonishing US$12 billion per year. As a result, highly active antiretroviral therapy (HAART) is reaching 6.5 million people in low and middle income countries, not only prolonging their lives indefinitely but also lowering the viral load in their blood, thus diminishing their capacity to transmit the virus. There is good evidence that the epidemic has peaked, with the number of new cases going down each year. In addition, special effort is going into the prevention of mother-to-child transmission of HIV.

The search for an AIDS vaccine continues. An encouraging but vexing result emerged from a clinical trial of Sanofi-Pasteur’s vaccine in Thailand, involving 16,000 volunteers. The vaccine gave 31.2% protection from HIV infection, clearly not sufficient to proceed to mass immunisation, but enough to warrant further investigation in what has previously been a rather discouraging field.

Progress in malaria has been substantial. Insecticide-impregnated bednets have turned out to be a powerful weapon, producing a 5% lowering of mortality where they are used. The Global Fund has distributed 240 million of these, and it is planned to reach a total of 700 million, an astonishing effort. Chemotherapy has been expanded, including intermittent preventive therapy (IPT), where a whole population of children receives antimalarials every six months; IPT is also useful in pregnant women. A malaria vaccine is in the late phases of clinical trial. Produced by GlaxoSmithKline, it is known as RTS,S and has proven about 50% effective. It is targeted at the surface of the life-form of the parasite known as the sporozoite, which leaves the mosquito’s salivary gland and is injected under the skin when the mosquito feeds. Most experts believe that the final, definitive malaria vaccine will also need to target the liver cell stage, where the parasite goes underground; the blood cell stage, where it multiplies extensively in red blood cells; and perhaps the sexual stages. Good progress is being made in research in all these areas.

Tuberculosis remains a formidable foe, particularly as resistance to anti-tuberculous drugs is developing. That being said, the Global Fund is treating 8.7 million tuberculosis patients with DOTS (directly observed treatment, short-course, to assure compliance). Sadly, the short course still means six months, which is quite a burden. Extensive research is seeking newer drugs able to act in a shorter time frame. As regards vaccines, it is unfortunately clear that the birth dose of BCG, which does a good job of preventing the infant manifestations of TB, namely tuberculous meningitis and widespread miliary tuberculosis, is ineffective in preventing the much more common pulmonary tuberculosis of adolescents and young adults. An impressive body of research is attempting to develop new TB vaccines: three are in Phase II clinical trials and at least eight in Phase I. The chronic nature of tuberculosis makes this a slow and expensive exercise.

The challenge of global eradication of poliomyelitis

Following the triumph of the global eradication of smallpox, WHO set itself the challenge of eradicating poliomyelitis. When I was young, this was a most feared disease, with its capacity to kill and maim. The Salk vaccine, and later the oral Sabin live attenuated vaccine, brought the disease under control in the industrialised countries with remarkable speed. A dedicated effort in Latin America did the same. But in Africa and the Indian subcontinent it was a different story. For this reason, a major partnership was launched in 1988 between the voluntary organisation Rotary International, WHO and UNICEF, with help from many others, to eradicate polio globally. Five strategies underpinned the venture. The Sabin oral polio vaccine was used to cut costs and ease administration, as oral drops rather than an injection were needed. High routine infant immunisation rates were encouraged. To reach the hard-to-reach children, national immunisation days were instituted, at which all children under five were lined up and given the drops, regardless of previous immunisation history. Strong emphasis was placed on surveillance of all cases of paralysis, with laboratory confirmation of suspected cases.

Finally, as control approached, a big effort was made to quell every little outbreak, with two extra doses of vaccine given two weeks apart around the index case. As a result of this work, polio cases have been reduced by over 99%. In 2011, there were only 650 confirmed cases in the whole world. India deserves special praise: despite its large population and widespread poverty, the last case in India occurred on 13 January 2011. There are now only three countries in which polio transmission has never been interrupted, namely Pakistan, Afghanistan and Nigeria. Unfortunately, three countries have re-established polio transmission after previously eliminating it: Chad, DR Congo and Angola. Furthermore, sporadic cases are occurring in other countries following importation, though most of these mini-outbreaks are quickly controlled. We are at a pivotal point in this campaign. It is costing about $1 billion per year to maintain the whole global apparatus while the public health burden is currently quite small. Cessation of transmission was targeted for the end of 2012; this deadline is unlikely to be met. But failure to reach the end goal would constitute the most expensive public health failure in history. If we can get there, the economic benefits of eradication have been estimated at US$40-50 billion.

Some further vaccine challenges are listed in Table 4. Within a twenty-year framework, success in most of these is not unrealistic. The dividends would be enormous; finding the requisite funds will be a daunting task.

Conclusions

This essay has focused on infections and vaccines, my own area of expertise, but plentiful pathways for progress exist in other areas: new drugs for all the above diseases; clever biological methods of vector control; improved staple crops with higher micronutrient and protein content through genetic technologies; stratagems for improved antenatal care and obstetrics; a wider array of contraceptive measures tailored to particular cultures; in time, thrusting approaches to noncommunicable diseases including cardiovascular disease, diabetes, obesity, hypertension and their consequences; and greater recognition of the importance of mental health, with depression looming as a very grave problem. I commend all of these areas to you as young people contemplating a career in medicine. In particular, consider spending some months or a few years joining this battle to provide better health to all the world’s citizens. There are plenty of opportunities, and the relevant travel will certainly prove enriching. A new breeze is blowing through global health. The thought that we can build a better world has taken firm hold. It is your generation, dear readers, who can turn dreams into realities and make the twenty-first century one truly to remember.

References

[1] World Life Expectancy: Country Health Profiles [Electronic Database]; 2012 [cited 2012 Mar 15]. Available from: http://www.worldlifeexpectancy.com/world-healthrankings.

[2] Mortality rate, under-5 (per 1,000) [Electronic Database]: World Bank, United Nations; 2012 [cited 2012 Mar 15]. Available from: http://data.worldbank.org/indicator/SH.DYN.MORT

[3] Hogan MC, Foreman KJ, Naghavi M, Ahn SY, Wang M, Makela SM, et al. Maternal mortality for 181 countries, 1980-2008: a systematic analysis of progress towards Millennium Development Goal 5. Lancet 2010;375(9726):1609-23.



The role of medical students in innovation – Fiona Wood

When thinking about the role of medical students in innovation, my mind drifts back to my early days at St Thomas’ Hospital Medical School, London, in 1975. It was exciting because I could see for the first time that I had a role in the world that was useful. Let’s face it: until then it is pretty well a one-way street, with education being handed to us and us taking it; but here, I could see a chance to grow and to contribute in a way I had not been able to before. Then came the big question: How?

My “research career” started early: I knew that I needed a CV that was interesting to get a sniff of a surgical job, so it was not all out of pure curiosity. My Bachelor of Medical Science equivalent was both interesting and frustrating, but most importantly, it taught me that I could question. More to the point, that I could work out the strategies to answer the questions. Exploring the evolution of the brain from worms to elephants was fascinating, but did I find the point where the central nervous system (CNS) and peripheral nervous system (PNS) transitioned? No! But I did get to go on an anthropology field trip to Kenya and Tanzania and collect fossils at the Leakey camp, which was a huge bonus.

So, when I got to my Obstetrics and Gynaecology term, I decided that India was a good place to investigate malnutrition in pregnancy. Looking back, I find it hard to figure out how I put the trip together with the funding and the equipment. But to cut a long shaggy dog story short, my friend Jenny and I tripped off to a government hospital and took anthropometric measurements, measured HbA1c as a marker of carbohydrate metabolism, and measured RBP, a protein with a high essential amino acid content, as a marker of protein metabolism (run on gel plates by yours truly when I got back). It was a great trip, a huge learning experience on lots of levels, and it still remains with me.

When I submitted the work to the British Journal of Nutrition, it was sent back for revisions. However, I didn’t have the understanding to realise that this was good and that I just needed to revise and send my changes back. Instead, totally deflated, I put it in a file and tried hard to forget that I had failed at the last fence. I clearly was never successful in forgetting! Finishing is key; otherwise you have used resources, yours and others’, and selfishly not added to the body of knowledge. It sounds harsh, but even negatives should be published: how many resources are wasted in repetition? Finishing is essential for all of us. Anyone can start, but learn the value of finishing. I learnt early, and it is with me still.

I will fast forward a little to the early days of my surgical training, when I saw amazing things being done by the plastic surgeons, and I was hooked. At the time, microsurgery was gaining momentum, tissue expansion was being explored, and tissue culture for clinical skin replacement was emerging; all were in the mix. Yes, heady times indeed! One surgeon told me, “Medicine is 5% fact. The rest, well, opinion based on experience. The aim is to find the facts, hold on to them, and build the body of evidence.” Someone else once told me, “Believe nothing of what you hear and only half of what you see!”

Regarding tissue expansion, my consultant casually remarked that, “It’s all well and good to create skin with characteristics similar to the defect, but the nerves are static and so the quality of innervation will decrease as the skin is expanded.” Really? I wasn’t so sure. So, off I went to find a friendly neurophysiologist, who helped me design an experiment and taught me how to do single axon recordings at T11 in a rodent model. We proved for the first time that the peripheral nerve field was not static but responded to the changes in the skin as forces were applied.

We are now familiar with neural plasticity. Understanding CNS and PNS plasticity remains a cornerstone of my research efforts exploring its role in healing the skin. How do we harness the capacity to self-organise back to our skin shape, not scar shape, on a microscopic level? How can we think ourselves whole? Questions that stretch and challenge us are always the best ones!

The spray-on skin cell story for me started in 1985 in Queen Victoria Hospital, East Grinstead, when I saw scientists growing skin cells. Another wow moment. So, I read all I could get my hands on. That is the starting point: to know what is out there. There is no point in reinventing the wheel! In 1990, as a Registrar in Perth, with the help of the team at Monash, Professor John Masterton and Joanne Paddle, our first patient was treated in Western Australia. By 1993, Marie Stoner and I had a lab in Perth funded by Telethon, exploring how to shorten the time taken to expand the number of a patient’s skin cells so as to heal wounds as rapidly as possible. By 1995, we were delivering cells in a spray instead of sheets. We then developed a kit to harvest cells for bedside use, using the wound as the tissue culture environment. This is now in clinical trials around the world, some funded by the US Armed Forces Institute of Regenerative Medicine. That is another lesson: the work never stops, it simply evolves.

So, asking questions is the starting point. Then come the questions of how to answer it, who can help, who can support or fund the work, and so on. BUT you must also ask: which direction do you go in when there are so many questions? Go in the direction that interests you. Follow your passion. Haven’t got one? Then expose yourself to clinical problems until you meet a patient you want to help like no other, in a subject area that gets you out of bed in the morning. Then you will finish, maybe not with the answer, but with a contribution to the body of knowledge such that we all benefit.

Do medical students have a role? A group of highly competitive, intellectual problem-solvers? Absolutely, if they choose to. I would say start now, link with positive energy, keep your ears and eyes open, and always learn from today to make sure tomorrow is better, so that we pass on medical knowledge and systems we are proud of.


Use of olanzapine in the treatment of acute mania: Comparison of monotherapy and combination therapy with sodium valproate

Introduction: The aim of this article is to review the literature and outline the evidence, if any, for the effectiveness of olanzapine as a monotherapy for acute mania in comparison with the effectiveness of its use as a combined therapy with sodium valproate. Case study: GR, a 55-year-old male with no previous psychiatric history, was assessed by the Consultation and Liaison team and diagnosed with an acute manic episode. He was placed under an involuntary treatment order and was prescribed olanzapine 10mg once daily (OD). After failing to respond adequately to this treatment, sodium valproate 500mg twice daily (BD) was added to the regimen. Methods: A literature search was conducted using the Medline Ovid and NCBI PubMed databases. The search terms "mania AND olanzapine AND valproate", "acute mania AND pharmacotherapy" and "olanzapine AND mania" were used. Results: Two studies were identified that addressed the efficacy and safety of olanzapine for the treatment of acute mania. Both studies confirmed the superior efficacy of olanzapine in the treatment of acute mania in comparison to placebo. No studies were identified that directly addressed the question of whether combination therapy with olanzapine and sodium valproate is more efficacious than olanzapine monotherapy. Conclusion: There is no evidence currently available to support the use of combination olanzapine/sodium valproate as a more efficacious treatment than olanzapine alone.

Case report

GR is a 55-year-old Vietnamese male with no previous psychiatric history who was seen by the Consultation and Liaison Psychiatry team at a Queensland hospital after referral from the Internal Medicine team. He had been brought into the Emergency Department the previous day by his ex-wife, who had noticed increasingly bizarre behaviour and aggressiveness. He had been discharged from hospital one week earlier, following bilateral knee replacement surgery performed twenty days prior to his current admission. GR was assessed thoroughly for delirium caused by a general medical condition, with all investigations returning normal results.

GR previously worked as an electrician but is currently unemployed and on a disability benefit due to a prior back injury. He acts as a carer for his ex-wife, who resides with him at the same address. He was reported to be irritable and excessively talkative, with bizarre ideas, and had slept for less than two hours each night for the past four nights. He has no other past medical history apart from hypertension, which is currently well controlled with candesartan 10mg OD. He is allergic to meloxicam, with an unspecified reaction.

On assessment, GR was dressed in his nightwear, sitting on the edge of his bed. He was restless and erratic in his behaviour, with little eye contact. Speech was loud, rapid and slightly pressured. Mood could not be established, as GR did not provide a response on direct questioning. Affect was expansive, elevated and irritable. Grandiose thought content was displayed, with flight of ideas. There was no evidence of perceptual disturbances, in particular hallucinations, and no delusions were elicited. Insight and judgement were extremely poor. GR was assessed to have a moderate risk of violence. There was no risk of suicide or self-harm, and no risk of vulnerability.

After a request and recommendation for assessment, GR was diagnosed with an acute manic episode in accordance with Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR) criteria and placed under an involuntary treatment order. He was prescribed olanzapine 10mg OD. After he failed to respond adequately to this treatment, sodium valproate 500mg BD was added to the regimen. Improvement was seen within a number of days of the addition of the new medication.

Introduction

A manic episode, as defined by the DSM-IV-TR, is characterised by a distinct period of abnormally and persistently elevated, expansive or irritable mood lasting at least one week (or any duration if hospitalisation is required), associated with a number of other persistent symptoms including grandiosity, decreased need for sleep, talkativeness, distractibility and psychomotor agitation, causing impaired functioning and not accounted for by another disorder. [1] Mania tends to have an acute onset, and it is these episodes that define the presence of bipolar disorder: Bipolar I Disorder is characterised by mania and major depression, or mania alone, whereas Bipolar II Disorder is defined by hypomania and major depression. [1] The pharmacological management of acute mania involves primary treatment of the pathologically elevated mood. A number of medications are recommended, including lithium, the anti-epileptics sodium valproate and carbamazepine, and second-generation antipsychotics such as olanzapine, quetiapine, risperidone or ziprasidone. [2] Suggested approaches to patients with mania who fail to respond to a single medication include optimising the current drug, switching to a different drug or using drugs in combination. [2] GR was initially managed with olanzapine 10mg OD; after he failed to respond adequately, sodium valproate 500mg BD was added. This raises the following question: is the use of combination therapy with olanzapine and sodium valproate more efficacious than olanzapine monotherapy?

Objective

The objective of this article was to review the literature and outline the evidence that is available, if any, for the effectiveness of olanzapine as a monotherapy for acute mania in comparison with the effectiveness of its use as a combined therapy with sodium valproate. The issue of long term outcome and efficacy of these two therapies is outside the scope of this particular report.

Data collection

In order to address the question identified in the objective, a literature search was conducted using the Medline Ovid and NCBI PubMed databases, with limits set to include only articles written in English and available as full-text journals subscribed to by James Cook University. The search terms "mania AND olanzapine AND valproate", "acute mania AND pharmacotherapy" and "olanzapine AND mania" were used. A number of articles were also identified through the related-articles link provided by the NCBI PubMed database. A number of articles, including randomised controlled trials (Level II evidence) and meta-analyses (Level I evidence), were reviewed; however, no study was found that compared the use of olanzapine as a monotherapy with the use of combined therapy of olanzapine and sodium valproate.
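
As a minimal sketch of how the PubMed portion of this search could be reproduced programmatically, the snippet below queries the Entrez utilities via Biopython. The search strings are those quoted above; the email address and result limit are placeholders, and the original search was of course run through the database interfaces rather than a script, so this is illustrative only.

```python
# Illustrative reproduction of the PubMed searches described above using
# Biopython's Entrez module. Assumptions: Biopython is installed, and the
# three quoted strings are run as separate queries. Email is a placeholder.
from Bio import Entrez

Entrez.email = "your.name@example.edu"  # NCBI requests a contact address

queries = [
    "mania AND olanzapine AND valproate",
    "acute mania AND pharmacotherapy",
    "olanzapine AND mania",
]

for query in queries:
    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    record = Entrez.read(handle)
    handle.close()
    # Print how many records matched and a few PubMed IDs for screening.
    print(f"{query!r}: {record['Count']} records, first IDs {record['IdList'][:5]}")
```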

Discussion

Efficacy of olanzapine as a monotherapy

Two studies were identified that addressed the efficacy and safety of olanzapine for the treatment of acute mania. The first, by Tohen et al. in 1999 [3], was a randomised, double-blind, placebo-controlled, parallel-group study involving a sample of 139 patients who met the DSM-IV-TR criteria for either a mixed or manic episode, with 70 assigned to olanzapine 10mg OD and 69 to placebo. Both treatment groups were similar in their baseline characteristics and severity of illness, and therapy lasted for three weeks. After the first day of treatment, the daily dosage could be increased or decreased by 5mg each day within the allowed range of 5-20mg/day. The use of lorazepam as a concurrent medication was allowed up to 4mg/day. [3] Patients were assessed at baseline and at the end of the study. The Young Mania Rating Scale was used as the primary efficacy measure, with the change in total score from baseline to endpoint as the outcome.

The study found that those treated with olanzapine showed a greater mean improvement in total scores on the Young Mania Rating Scale, with a difference of -5.38 points (95% CI -10.31 to -0.93). [3] Clinical response (a decrease of 50% or more from the baseline score) was also seen in 48.6% of patients receiving olanzapine compared to 24.2% of those assigned to placebo. [3] Improvement was also seen in other measures, such as the severity of mania rating on the Clinical Global Impression – Bipolar version and the total score on the Positive and Negative Symptom Scale. [3]
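
To give a feel for the size of this effect, the response rates quoted above imply an absolute risk reduction and a number needed to treat. The short sketch below derives these figures from the published percentages; the NNT itself is not a figure reported in the trial and is shown purely as an illustration.

```python
# Derived illustration (not reported in the trial): the absolute benefit and
# number needed to treat implied by the clinical response rates quoted above.

response_olanzapine = 0.486   # clinical response (>=50% YMRS reduction) on olanzapine
response_placebo = 0.242      # clinical response on placebo

arr = response_olanzapine - response_placebo   # absolute risk reduction
nnt = 1 / arr                                  # number needed to treat

print(f"Absolute benefit: {arr:.1%}; NNT ~ {nnt:.0f}")
# Roughly one additional responder for every four patients treated over
# the three-week trial period.
```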

A second randomised, double-blind, placebo-controlled study was conducted by Tohen et al. in 2000. [4] This four-week trial used a similar methodology, with identical inclusion criteria, primary efficacy measure and criteria for clinical response. It was, however, designed to address some of the limitations of the first trial, particularly the short treatment period, and to further establish the efficacy and safety of olanzapine in the treatment of acute mania. [4] The study design, method and assessment were clearly outlined. The study involved 115 patients; the olanzapine group showed a greater mean improvement in Young Mania Rating Scale total score than the placebo group (a difference of -6.65 points) as well as a statistically significant greater rate of clinical response. [4] Both studies confirmed the superior efficacy of olanzapine over placebo in the treatment of acute mania across a number of subgroups, including manic versus mixed episodes and psychotic versus non-psychotic mania. [3,4]

The efficacy of olanzapine as monotherapy has also been compared with a number of other first-line medications, including lithium, haloperidol and sodium valproate. Two studies were identified that evaluated the efficacy of olanzapine and sodium valproate for the treatment of acute manic or mixed episodes, and both demonstrated olanzapine to be an effective treatment. [5,6] Tohen et al. (2002) [5] showed olanzapine to produce greater improvement in mania rating scores and clinical response when compared with sodium valproate; however, this result may have been affected by differences between the dosage regimens used in the study and mean modal dosages. [7] Zajecka (2002) [6] described no significant differences between the two medications. In comparison with lithium, a small trial by Beck et al. in 1999 [8] found no statistically significant differences between the two medications. Similar rates of remission and response were shown in a twelve-week double-blind study comparing olanzapine and haloperidol for the treatment of acute mania. [9]

The evidence from these studies suggests that olanzapine, at a dose range of 5-20mg/day, is an efficacious therapy for acute manic episodes when compared with placebo and a number of other medications.

Efficacy of combination therapy of olanzapine and sodium valproate

As mentioned previously, no studies were identified that directly addressed whether combination therapy with olanzapine and sodium valproate is more efficacious than olanzapine monotherapy. One study by Tohen et al. in 2002 [10] investigated the efficacy of olanzapine in combination with sodium valproate for the treatment of mania; however, the comparison was with sodium valproate monotherapy rather than with olanzapine.

This six-week, double-blind, placebo-controlled trial evaluated patients who had failed to respond to two weeks of monotherapy with sodium valproate or lithium. A total of 344 patients were randomised to have either olanzapine or placebo added to their existing monotherapy. [10] Efficacy was measured using the Young Mania Rating Scale; combination therapy with olanzapine and sodium valproate produced greater improvement in total scores as well as significantly improved clinical response rates compared with sodium valproate monotherapy, [10] and this improvement was demonstrated by almost all measures used in the study. However, assignment to valproate or lithium therapy was not randomised, and a larger number of patients received valproate monotherapy; this was noted as a limitation of the study. [10] The lack of an olanzapine monotherapy arm also prevents exploration of a postulated synergistic effect between olanzapine and mood stabilisers such as sodium valproate. [10]

The study by Tohen et al. (2002) [10] does show that olanzapine combined with sodium valproate has superior efficacy for the treatment of manic episodes compared with sodium valproate alone, which may indicate that combination therapy is more effective than monotherapy. Whilst this suggests that a patient not responding to initial therapy may benefit from the addition of a second medication, the results cannot be generalised to a comparison of olanzapine monotherapy with sodium valproate/olanzapine combination therapy.

Conclusion

When first-line monotherapy for the treatment of acute manic episodes fails, the therapeutic guidelines recommend combination therapy as an option to improve response. [2] However, there is currently no evidence available to support or refute the use of combination olanzapine/sodium valproate as a more efficacious treatment than olanzapine alone. As no studies have addressed this specific question, the ability to comment on the appropriateness of the management of GR’s acute manic episode is limited.

This review has revealed a need for further studies evaluating the effectiveness of combination therapy for the treatment of acute manic episodes. To answer the question raised, a large, placebo-controlled trial comparing olanzapine monotherapy with combination therapy would be required to ascertain which approach is most effective. Another potential area for future research is to determine the best approach for patients who fail to respond to initial monotherapy (increasing the current dose, changing drugs or adding a medication), and whether patient characteristics, such as experiencing a manic versus a mixed episode, influence the effectiveness of particular pharmacotherapies. This information would provide more evidence on which to base future recommendations.

There is clear evidence supporting the efficacy of olanzapine monotherapy in the treatment of acute mania, as well as evidence suggesting that combined therapy with sodium valproate is also effective; however, a direct comparison between the two approaches could not be made. When evidence is lacking, it becomes appropriate to consider the progress of the patient in order to assess the efficacy of the current management plan. As GR experienced considerable improvement, his current therapy may well be suitable for his condition.

Consent declaration

Informed consent was obtained from the patient for the original case report.

Conflicts of interest

None declared.

Correspondence

H Bennet: hannah.bennett@my.jcu.edu.au


Categories
Case Reports Articles

Ovarian torsion in a 22-year-old nulliparous woman

Ovarian torsion is the fifth most common gynaecological emergency, with a reported prevalence of 2.7% among all cases of acute abdominal pain. [1] It is defined as the partial or complete rotation of the adnexa around its ovarian vascular axis, which may interrupt the ovarian blood flow. [2] Ischaemia is therefore a possible consequence, and this may lead to subsequent necrosis of the ovary and necessitate resection. As the symptoms of ovarian torsion are non-specific and variable, this condition remains a diagnostic challenge with potential implications for future fertility. [3] Consequently, clinical suspicion and timely intervention are crucial for ovarian salvage.

This case report illustrates the multiple diagnoses that may be incorrectly ascribed to the variable presentations of ovarian torsion. Furthermore, a conservative treatment approach, with the aim of preserving fertility, is described in a 22-year-old nulliparous woman.

Case report

A 22 year old nulliparous woman presented to the emergency department in the middle of her regular 28 day menstrual cycle with sudden onset of right iliac fossa pain. The pain was post-coital, of a few hours' duration and radiated to the back. It was described as constant, severe and sharp, and was associated with episodes of emesis. Similar episodes of pain had occurred in the previous few weeks; these were, however, shorter in duration and resolved spontaneously. She was otherwise well and had no associated gastrointestinal or genitourinary symptoms. She had no past medical or surgical history and, specifically, was not using the oral contraceptive pill as a form of contraception. She was in a two year monogamous relationship, did not experience any dyspareunia and denied any prior sexually transmitted diseases. Her cervical smears were up to date and had been consistently reported as normal.

On examination, she was afebrile with a heart rate of 90 beats per minute (bpm) and a blood pressure of 126/92 mmHg. Her abdomen was described as “soft” but she displayed voluntary guarding particularly in the right iliac fossa. There was no renal angle tenderness and bowel sounds were present.

Speculum examination did not demonstrate any vaginal discharge and bimanual pelvic examination demonstrated cervical excitation with significant discomfort in the right adnexa.

Urinalysis showed no protein or blood, making a urinary tract infection unlikely, and the urine pregnancy test was negative. Blood tests confirmed the negative pregnancy test and revealed a mild leukocytosis with a normal CRP.

Pelvic ultrasound demonstrated bilaterally enlarged ovaries containing multiple echogenic masses measuring 31mm, 14.4mm and 2mm on the right side, and 6mm, 17mm and 2mm on the left side. Blood flow to both ovaries was demonstrated on Doppler studies. There was a small amount of free fluid in the pouch of Douglas. The report noted no features suggestive of acute appendicitis, and the findings were interpreted as bilateral endometriomas. Her pain was initially unresponsive to narcotic analgesics, but as her symptoms improved she was discharged home with simple analgesics.

Two days later she re-presented to hospital with an episode of post-coital vaginal bleeding and ongoing, uncontrolled, severe lower abdominal pain. She was now febrile with a temperature of 38.2°C, a heart rate of 92 bpm and a blood pressure of 114/66 mmHg. Repeat blood tests revealed a raised CRP of 110 mg/L and a WCC of 11.5 × 10⁹/L. Abdominal and pelvic examinations elicited guarding and severe tenderness. On this occasion endocervical and high vaginal swabs were taken, and she was treated for pelvic inflammatory disease on the basis of her raised temperature and elevated CRP.

Subsequently, a repeat pelvic ultrasound showed bilaterally enlarged ovaries similar to the previous scan. On this occasion, the ultrasound findings were interpreted as bilateral ovarian dermoids. No comment was made on ovarian blood flow, but in the right iliac fossa a tubular, blind-ended, non-compressible, hyperaemic structure measuring up to 8mm in diameter was described. These latter findings were considered consistent with appendicitis.

The patient was admitted and the decision was made for an emergency laparoscopy.

Intraoperative findings revealed a 6cm diameter, partially torted left ovary containing multiple cysts, and an 8cm, dark, haemorrhagic, oedematous, torted right ovary (Figure 1). There was a 100 mL haemoperitoneum. Of note, the appendix appeared normal and there was no evidence of adhesions, infection or endometriosis throughout the pelvis.

Laparoscopically, the right ovary was untwisted, and three cystic structures suggestive of ovarian teratomas were removed intact from the left ovary; their nature was confirmed by the subsequent histopathology report of mature cystic teratomas. During this time the normal colouration of the right ovary was re-established. Although the ultrasound scan had suggested cystic structures within the right ovary, no attempt was made at this time to reduce its size by cystectomy because of its oedematous state and the haemoperitoneum that appeared to have arisen from it.

The postoperative period was uneventful and she was discharged home the following day. She was well two weeks post-operatively and her port sites had healed as expected. Because of the possibility of further cystic structures within the right and left ovaries, a repeat pelvic ultrasound was organised for four months later. The patient was reminded of her high risk of re-torsion and advised to re-present early if she had any further episodes of abdominal pain.

The repeat ultrasound scan confirmed the presence of two ovarian cystic structures within the left ovary measuring 3.5cm and 1.3cm in diameter as well as a 5.5cm cystic structure in the right ovary. The ultrasound scan features of these structures were consistent with ovarian dermoids. She is currently awaiting an elective laparoscopy to perform bilateral ovarian cystectomies of these dermoid structures.

Discussion

Ovarian torsion can occur at any age, with the greatest incidence in women 20-30 years of age. [4] About 70% of ovarian torsions occur on the right side, which is hypothesised to be due to the longer utero-ovarian ligament on this side; the limited space created by the sigmoid colon on the left is also thought to contribute to this laterality. [1] This is consistent with the present case, in which there was partial torsion on the left side and complete torsion on the right.

Risk factors for ovarian torsion include pregnancy, ovarian stimulation, previous abdominal surgery and tubal ligation. [1,4] However, torsion is frequently associated with ovarian pathologies that result in enlarged ovaries. The most frequently encountered pathology is an ovarian dermoid, although other causes include paramesonephric/paratubal cysts, follicular cysts, endometriomas and serous/mucinous cystadenomas. [5] In this case, despite the suggestion of endometriomas and tubo-ovarian masses secondary to presumed pelvic inflammatory disease, bilateral ovarian dermoids were the actual cause of ovarian enlargement. The incidence of bilateral ovarian dermoids is 10-15%. [6,7]

The diagnosis of ovarian torsion is challenging, as the clinical parameters yield low sensitivity and specificity. Abdominal pain is reported in the majority of patients with ovarian torsion, but its characteristics are variable: sudden onset in 59-87%, sharp or stabbing in 70%, and radiation to the flank, back or groin in 51% of patients. [4,8] Patients with incomplete torsion may present with severe pain separated by asymptomatic periods. [9] Nausea and vomiting occur in 59-85% of cases and a low-grade fever in 20%. [4,8] Other non-specific findings include non-menstrual vaginal bleeding and leukocytosis, reported in about 4.4% and 20% of cases respectively. [4] In this case the patient presented with such non-specific symptoms, which are common to many other differential diagnoses of an acute abdomen, including ectopic pregnancy, ruptured ovarian cyst, pelvic inflammatory disease, gastrointestinal infection, appendicitis and diverticulitis. [4] Indeed, the patient was initially incorrectly diagnosed with bilateral endometriomas, and appendicitis was later considered on the basis of ultrasound features.

Acute appendicitis is the most common differential diagnosis in patients with ovarian torsion. Fortunately, this usually results in operative intervention, so if a misdiagnosis has occurred the gynaecologist is usually summoned to deal with the ovarian torsion. Conversely, gastrointestinal infection and pelvic inflammatory disease are non-surgical misdiagnoses that may delay surgical intervention. [10] Consequently, it is not surprising that in one study ovarian torsion was included in the admitting differential diagnosis of only 19-47% of patients with actual ovarian torsion. [4] In the present case, the patient had variable symptoms during the course of her presentations, and ovarian torsion was not initially considered.

Imaging is frequently used in the assessment of an acute abdomen. In gynaecology, ultrasound has become the routine investigation for potential pelvic pathology, and colour Doppler studies have been used to assess ovarian blood supply. However, the contribution of ultrasound to the diagnosis of ovarian torsion remains controversial. [2] Non-specific ultrasound findings include heterogeneous ovarian stroma, the “string of pearls” sign, and free fluid in the cul-de-sac. [2,12] Ovarian enlargement of more than 4cm is the most consistent ultrasound feature of torsion, with the greatest risk occurring in cysts measuring 8-12cm. [2,11]

Furthermore, ultrasound Doppler studies yield highly variable interpretations, and studies disagree on their usefulness. [1,2] Because cessation of venous flow precedes interruption of arterial flow, the presence of blood flow on Doppler studies indicates probable viability of the ovary rather than the absence of ovarian torsion. [2,13] In the present case, both ovaries demonstrated blood flow two days before the patient underwent detorsion of her right ovary; it is also possible that complete ovarian torsion occurred after the last ultrasound was performed.

Other imaging modalities, such as contrast CT and MRI, are rarely useful when the ultrasound findings are inconclusive. Thus, direct visualisation by laparoscopy or laparotomy is the gold standard to confirm the diagnosis of ovarian torsion.

Laparoscopy is the surgical approach of choice as it offers a shorter hospital stay and reduced postoperative analgesic requirements. [14,15] Although laparoscopy is frequently preferred in younger patients, the surgical skill required to deal with these ovarian masses may necessitate a laparotomy. Furthermore, in patients in whom malignancy is suspected, for example a raised CA-125 (tumour marker) in the presence of endometriomas, a laparotomy may be appropriate. [16] Eitan et al. reported a 22% incidence of malignancy in 27 postmenopausal patients with adnexal torsion. [16,17]

Traditionally, radical treatment by adnexectomy was the standard approach to ovarian torsion in cases of ovarian decolouration or necrosis, owing to the fear of pulmonary embolism from untwisting a potentially thrombosed ovarian vein. This approach obviously resulted in the loss of the ovary and a potential reduction in fertility. More recently, it has been challenged, and a more conservative treatment consisting of untwisting the adnexa followed by cystectomy or cyst aspiration has been reported. [1]

Rody et al. [5] suggest conservative management of ovarian torsion regardless of the macroscopic appearance of the ovary. Their large literature review reported no severe complications, such as embolism or infection, even after detorsion of “necrotic-looking” ovaries. In support of this, animal studies suggest that reperfusion of ischaemic ovaries, even after 24 hours and up to a limit of 36 hours, results in ovarian viability as demonstrated histologically. [18]

This ovary-sparing approach after detorsion of ischaemic ovaries is considered safe and effective in both adults and children. [19,20] A cystectomy is usually performed on suspected organic cysts to obtain tissue for histological examination. Where cystectomy is difficult because of an ischaemic, oedematous ovary, some authors recommend re-examination 6-8 weeks after the acute episode, with secondary surgery at that time if necessary. [5,19,20] In this case, detorsion alone of the haemorrhagic right ovary was sufficient to resolve the pain, allowing a second laparoscopic procedure to be arranged to remove the causative pathology.

Summary points on ovarian torsion

1. Ovarian torsion is difficult to diagnose clinically and on ultrasound.

2. Clinical suspicion of ovarian torsion determines the likelihood of operation.

3. Laparoscopy is the surgical approach of choice.

4. Detorsion is safe and may be preferred over excision of the torted ovary.

What did I learn from this case and my reading?

1. Accurate diagnosis of ovarian torsion is difficult.

2. Suspicion of ovarian torsion should be managed, like testicular torsion, as a surgical emergency.

3. An early laparoscopy/laparotomy should be considered in order to avoid making an inaccurate diagnosis that may significantly impact on a woman’s future fertility.

Acknowledgements

The authors would like to acknowledge the Graduate School of Medicine, University of Wollongong for the opportunity to undertake a selective rotation in the Obstetrics and Gynaecology Department at the Wollongong Hospital. In addition, we would like to extend a special thank you to Ms. Sandra Carbery (Secretary to A/Prof Georgiou) and the Wollongong Hospital library staff for their assistance with this research project.

Consent declaration

Consent to publish this case report (including figure) was obtained from the patient.

Conflict of interest

None declared.

Correspondence

H Chen: hec218@uowmail.edu.au

Categories
Review Articles Articles

The future of personalised cancer therapy, today

With the human genome sequenced a decade ago and the concurrent development of genomics, pharmacogenetics and proteomics, the field of personalised cancer treatment appears to be a maturing reality. It is recognised that the days of ‘one-size-fits-all’ and ‘trial and error’ cancer treatment are numbered, and such conventional approaches will be refined. The rationale behind personalised treatment is to target the genomic aberrations driving tumour development while reducing drug toxicity due to altered drug metabolism encoded by the patient’s genome. That said, a number of key challenges, both scientific and non-scientific, must be overcome if we are to fully exploit knowledge of cancer genomics to develop targeted therapeutics and informative biomarkers. The progress of research has yet to be translated into substantial clinical benefits, with the exception of a handful of drugs (tamoxifen, imatinib, trastuzumab), and it is only recently that new targeted drugs have been integrated into the clinical armamentarium. So the question remains: will there be a day when doctors no longer make treatment choices based on population-based statistics but rather on the specific characteristics of individuals and their tumours?

Introduction

In excess of 100,000 new cases of cancer were diagnosed in Australia in 2010, and the impact of cancer care on patients, their carers and Australian society is hard to ignore. Cancer care itself consumes $3.8 billion per year in Australia, close to one-tenth of the annual health budget. [1] As such, alterations to our approach to cancer care will have widespread impacts on the health of individuals as well as on our economy. The first ‘golden era’ of cancer treatment began in the 1940s, with the discovery of the effectiveness of the alkylating agent nitrogen mustard against non-Hodgkin’s lymphoma. [2] Yet the landmark paper demonstrating that cancer development requires more than one gene mutation was published only 25 years ago. [3] Since the sequencing of the human genome, [4] numerous genes have been implicated in the development of cancer. Data from The Cancer Genome Atlas (TCGA) [5] and the International Cancer Genome Consortium (ICGC) [6] reveal that even within a cancer subtype, the mutations driving oncogenesis are diverse.

The more we learn about the molecular basis of carcinogenesis, the more the traditional paradigm of chemotherapy ‘cocktails’ classified by histomorphological features appears inadequate. In many instances this classification system correlates poorly with treatment response, prognosis and clinical outcome. Patients within a given diagnostic category receive the same treatment despite biological heterogeneity, meaning that some with aggressive disease may be undertreated and some with indolent disease may be overtreated. In addition, these generalised cytotoxic drugs have many side effects, low specificity and poor delivery of drug to the tumour, and the development of resistance to them is an almost universal feature of cancer cells.

In theory, personalised treatment involves targeting the genomic aberrations driving tumour development while reducing drug toxicity due to altered drug metabolism encoded by the patient’s genome. The outgrowth of innovations in cancer biotechnology and computational science has enabled the interrogation of the cancer genome and examination of variation in germline DNA. Yet there remain many unanswered questions about the efficacy of personalised treatment and its applicability in clinical practice, which this review will address. The transition from morphology-based to a genetics-based taxonomy of cancer is an alluring revolution, but not without its challenges.

This article aims to outline current methods of molecular profiling, explore the range of biomarkers available, examine the application of biomarkers in cancers common in Australia, such as melanoma and lung cancer, and investigate the implications and limitations of personalised medicine in a 21st century context.

Genetic profiling of the cancer genome

We now know that individual tumour heterogeneity results from the gradual acquisition of genetic mutations and epigenetic alterations (changes in gene expression that occur without alterations in the DNA sequence). [7,8] Chromosomal deletions, rearrangements and gene mutations are selected for during tumour development. These defects, known as ‘driver’ mutations, ultimately modify protein signalling networks and create a survival advantage for the tumour cell. [8-10] As such, pathway components vary widely among individuals, leading to a variety of genetic defects between individuals with the same type of cancer.

Such heterogeneity necessitates the push for a complete catalogue of the genetic perturbations involved in cancer. This need for large-scale analysis of gene expression has been met by high-throughput technologies such as DNA array technology. [11,12] Typically, a DNA array comprises multiple rows of complementary DNA (cDNA) samples arranged as spots on a small silicon chip; today, arrays for gene expression profiling can accommodate over 30,000 cDNA samples. [13] Pattern recognition software and clustering algorithms allow tumour tissue specimens with similar repertoires of expressed genes to be grouped together. This has led to an explosion of genome-wide association studies (GWAS), which have identified new chromosomal regions and DNA variants, and this information has been used to develop multiplexed tests that hunt for a range of possible mutations in an individual’s cancer to assist clinical decision-making. The HapMap aims to identify the millions of single nucleotide polymorphisms (SNPs), single-nucleotide differences in the DNA sequence that may confer individual differences in susceptibility to disease, and has identified low-risk genes for breast, prostate and colon cancers. [14] TCGA and ICGC have begun cataloguing significant mutation events in common cancers. [5,6] OncoMap provides one example, in which alterations in multiple genes are screened by mass spectrometry. [15]
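
As a rough illustration of the clustering step described above, the toy sketch below groups a handful of made-up tumour expression profiles with an off-the-shelf hierarchical clustering routine; the matrix, gene count and sample labels are invented, and real arrays involve tens of thousands of probes.

```python
# Toy sketch of expression-based tumour clustering, as referred to above.
# The expression matrix and sample labels are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = tumour samples, columns = genes (log-expression values).
expression = np.array([
    [8.1, 2.3, 5.5, 1.0],   # sample A
    [7.9, 2.1, 5.8, 1.2],   # sample B (similar profile to A)
    [1.5, 6.7, 2.0, 7.3],   # sample C
    [1.2, 7.0, 2.2, 7.1],   # sample D (similar profile to C)
])

# Agglomerative (hierarchical) clustering on pairwise sample distances.
tree = linkage(expression, method="average", metric="euclidean")

# Cut the tree into two clusters: samples with similar gene-expression
# repertoires end up in the same group.
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2 2]
```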

The reproducibility and accuracy of microarray data need to be addressed cautiously. ‘Noise’ from analysing thousands of genes can lead to false predictions and, as such, it is difficult to compare results across microarray studies. In addition, cancer cells alter their gene expression when removed from their environment, potentially yielding misleading results. The clinical utility of microarrays is therefore difficult to determine, given the variability of the assays themselves as well as the variability between patients and between the laboratories performing the analyses.

Types of cancer biomarkers

This shift from entirely empirical cancer treatment to stratified and eventually personalised approaches requires the discovery of biomarkers and the development of assays to detect them (Table 1). With recent technological advances in molecular biology, the range of cancer biomarkers has expanded, which will aid the implementation of effective therapies into the clinical armamentarium (Figure 1). However, during the past two decades, fewer than twelve biomarker assays have been approved by the US Food and Drug Administration (FDA) for monitoring response, surveillance or recurrence of cancer. [16]

Early detection biomarkers

Most current methods of early cancer detection, such as mammography or cervical cytology, are based on anatomic changes in tissues or morphologic changes in cells. Various molecular markers, such as protein or genetic changes, have been proposed for early cancer detection. For example, PSA is secreted by prostate tissue and has been approved for the clinical management of prostate cancer, [17] and CA-125 is recognised as an ovarian cancer-associated protein. [18]

Diagnostic biomarkers

Examples of commercial biomarker tests include the Oncotype DX and MammaPrint tests for breast cancer. Oncotype DX is designed for women newly diagnosed with oestrogen receptor (ER)-positive breast cancer that has not spread to lymph nodes. The test calculates a ‘recurrence score’ based on the expression of 21 genes. It is not covered by Medicare and costs approximately US$4,075 per patient. One study found that this test persuaded oncologists to alter their treatment recommendations for 30% of their patients. [19]

Prognostic biomarkers

The tumour, node, metastasis (TNM) staging system is the standard for predicting survival in most solid tumours, based on clinical, gross and pathologic criteria. Additional information can be provided by prognostic biomarkers, which indicate the likelihood that the tumour will return in the absence of any further treatment. For example, for patients with metastatic non-seminomatous germ cell tumours, serum-based biomarkers include α-fetoprotein, human chorionic gonadotropin and lactate dehydrogenase.

Predictive biomarkers

Biomarkers can also prospectively predict response (or lack of response) to specific therapies. The widespread clinical use of ER and progesterone receptor (PR) status to guide treatment with tamoxifen, and of human epidermal growth factor receptor-2 (HER-2) status to guide treatment with trastuzumab, demonstrates the usefulness of predictive biomarkers. Epidermal growth factor receptor (EGFR) is overexpressed in multiple cancer types, and EGFR mutation is a strong predictor of a favourable outcome with EGFR tyrosine kinase inhibitors such as gefitinib in non-small cell lung carcinoma (NSCLC) and with anti-EGFR monoclonal antibodies such as cetuximab or panitumumab in colorectal cancer. [20] Conversely, the same cancers with KRAS mutations show primary resistance to EGFR tyrosine kinase inhibitors. [21,22] This demonstrates that biomarkers, such as KRAS mutation status, can predict which patients may or may not benefit from anti-EGFR therapy (Figure 2).
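
The following sketch is purely illustrative decision logic, not clinical guidance: it shows how a predictive biomarker such as KRAS mutation status could gate the use of anti-EGFR therapy in the way the paragraph above describes. The field names and structure are assumptions made for the example.

```python
# Illustrative sketch only (not clinical guidance): how a predictive
# biomarker such as KRAS mutation status can gate the use of anti-EGFR
# therapy, as described above. Field names are invented for illustration.

def suggest_anti_egfr(tumour):
    """Return a hypothetical flag for anti-EGFR therapy eligibility."""
    if tumour.get("KRAS_mutant"):
        # KRAS mutation predicts primary resistance to anti-EGFR agents.
        return False
    if tumour.get("EGFR_mutant") or tumour.get("EGFR_overexpressed"):
        # EGFR aberration predicts a more favourable response.
        return True
    return False

print(suggest_anti_egfr({"KRAS_mutant": True, "EGFR_overexpressed": True}))  # False
print(suggest_anti_egfr({"KRAS_mutant": False, "EGFR_mutant": True}))        # True
```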

Pharmacodynamic biomarkers

Determining the correct dosage for the majority of traditional chemotherapeutic agents presents a challenge because most drugs have a narrow therapeutic index. Pharmacodynamic biomarkers, in theory, can be used to guide dose selection. The magnitude of BCR–ABL kinase activity inhibition was found to correlate with clinical outcome, possibly justifying the personalised selection of drug dose. [23]

The role of biomarkers in common cancers

Biomarkers currently have a role in the prediction or diagnosis of a number of common cancers (Table 2).

Breast Cancer

Breast cancer illustrates the contribution of molecular diagnostics to personalised treatment. Discovered in the 1970s, tamoxifen was the first targeted cancer therapy directed against the oestrogen signalling pathway. [8] Approximately three-quarters of breast cancers express hormone receptors for oestrogen and/or progesterone, and modulating either the hormone ligand or the receptor has been an effective treatment for hormone receptor-positive breast cancer for over a century. Although quite effective for a subset of patients, this strategy has adverse partial oestrogenic effects in the uterus and vascular system, resulting in an increased risk of endometrial cancer and thromboembolism. [9,10] An alternative approach targeting ligand production, rather than the ER itself, was hypothesised to be more effective with fewer side effects. Recent data suggest that specific aromatase inhibitors (anastrozole, letrozole and exemestane), which block the formation of endogenous oestrogen, may be superior in both the adjuvant [24] and advanced disease settings. [25]

Lung Cancer

Lung cancer is the most common cause of cancer-related mortality in both sexes in Australia. [26] Many investigators are using panels of serum biomarkers in an attempt to increase the sensitivity of prediction. Numerous potential DNA biomarkers, such as overactivation of oncogenes (including K-ras, myc, EGFR and Met) or inactivation of tumour suppressor genes (including p53 and Rb), are being investigated. Gefitinib was found to be superior to carboplatin-paclitaxel in EGFR-mutant non-small cell lung cancer [20] and to improve progression-free survival, with acceptable toxicity, compared with standard chemotherapy. [27]

Melanoma

Australia has the highest skin cancer incidence in the world. [28] Approximately two in three Australians will be diagnosed with skin cancer before the age of 70. [29] Currently, the diagnosis and prognosis of primary melanoma is based on histopathologic and clinical factors. In the genomic age, the number of modalities for identifying and subclassifying melanoma is rapidly increasing. These include immunohistochemistry of tissue sections and tissue microarrays and molecular analysis using RT-PCR, which can detect relevant multidrug resistance-associated protein (MRP) gene expression and characterisation of germ-line mutations. [30] It is now known that most malignant melanomas have a V600E BRAF mutation. [31] Treatment of metastatic melanoma with PLX4032 resulted in complete or partial tumour regression in the majority of patients. Responses were observed at all sites of disease, including the bone, liver, and small bowel. [32]

Leukaemia

Leukaemia has progressed from being seen merely as a disease of the blood to one comprising 38 different subtypes. [33] Historically a fatal disease, chronic myeloid leukaemia (CML) has been redefined by the presence of the Philadelphia chromosome. [34] Imatinib, a tyrosine kinase inhibitor that entered clinical trials in 1998, has proven so effective that patients with CML now have mortality rates comparable to those of the general population. [35]

Colon Cancer

Cetuximab was the first anti-EGFR monoclonal antibody approved in the US for the treatment of colorectal cancer, and the first agent with proven clinical efficacy in overcoming resistance to irinotecan, a topoisomerase I inhibitor. [22] In 2004, bevacizumab was approved for first-line treatment of metastatic colorectal cancer in combination with 5-fluorouracil-based chemotherapy, and extensive investigation since then has sought to define its role in different chemotherapy combinations and in early-stage disease. [36]

Lymphoma

Another monoclonal antibody, rituximab, is an anti-human CD20 antibody. Rituximab alone has been used as the first-line therapy in patients with indolent lymphoma, with overall response rates of approximately 70% and complete response rates of over 30%. [37,38] Monoclonal antibodies directed against other B-cell-associated antigens and new anti-CD20 monoclonal antibodies and anti-CD80 monoclonal antibodies (such as galiximab) are being investigated in follicular lymphoma. [39]

Implication and considerations of personalised cancer treatment

Scientific considerations

Increasing information has revealed the incredible complexity of cancer tumourigenesis: there are not only point mutations, nucleotide insertions, deletions and SNPs, but also genomic rearrangements and copy number changes. [40-42] These studies have documented pervasive variability in somatic mutations, [7,43] so that thousands of human genomes and cancer genomes need to be sequenced to obtain a complete landscape of causal mutations, and this is before epigenetic and non-genomic changes are considered. While intense research is being conducted on the molecular biology techniques discussed, none has been prospectively validated in clinical trials. In clinical practice, what use is a ‘gene signature’ if it provides no more discriminatory value than performance status or TNM staging?

Much research has so far focused on primary cancers; what about metastatic cancers, which account for considerable mortality? The inherent complexity of genomic alterations in late-stage cancers, coupled with the interactions that occur between tumour and stromal cells, means that we are often not measuring what we are treating. If therapy is chosen based on the primary tumour but it is the metastasis that is being treated, the wrong therapy may well be given. Despite our increasing knowledge of metastatic colonisation, we still understand little about how metastatic tumour cells behave as solitary disseminated entities. Until we identify optimal predictors for metastases and understand the establishment of micrometastases and their activation from latency, personalised therapy should be used sagaciously.

In addition, moving from a genomic discovery to a new targeted therapy with suitable pharmacokinetic properties, safety and demonstrable efficacy in randomised clinical trials is difficult, costly and time-consuming. The first cancer-related gene mutation was discovered nearly thirty years ago: a point mutation in the HRAS gene causing a glycine-to-valine substitution at codon twelve. [44,45] The subsequent identification of similar mutations in the KRAS family [46-48] ushered in a new field of cancer research activity. Yet it is only now, three decades later, that KRAS mutation status is affecting cancer patient management as a ‘resistance marker’ of tumour responsiveness to anti-EGFR therapies. [21]

Ethical and Moral Considerations

The social and ethical implications of genetic research are significant; indeed, around 3% of the Human Genome Project budget was allocated to studying them. Concerns range from “Brave New World”-style fears about “genetic determinism” to invasions of “genetic privacy”. An understandable qualm regarding predictive genetic testing is discrimination: if a person is found to be genetically predisposed to developing cancer, will employers be allowed to make that individual redundant? Will insurance companies deny claims on the same basis? In Australia, the Law Reform Commission’s report details the protection of privacy, protection against unfair discrimination and the maintenance of ethical standards in genetics, the majority of which was accepted by the Commonwealth. [49,50] In addition, the Investment and Financial Services Association states that no applicant will be required to undergo a predictive genetic test for life insurance. [51] Undeniably, the potentially negative psychological impact of testing needs to be balanced against the benefits of detecting low, albeit significant, genetic risk. For example, population-based early detection testing for ovarian cancer is hindered by the inappropriately low positive predictive value of existing testing regimes.

As personalised medicine moves closer to becoming a reality, it raises important questions about health equality. Such discoveries risk magnifying the disparity in access to cancer care for minority groups and the elderly, as evidenced by their higher incidence rates and lower rates of cancer survival. This is particularly relevant in Australia, given the pre-existing barriers to medical care for Indigenous Australians; even after adjusting for later presentation and remoteness, significant survival disparities remain between the Indigenous and non-Indigenous populations. [52] Therefore, a number of questions remain. Will personalised treatment serve only to exacerbate the health disparities between the developing and developed world? Even if effective personalised therapies are proven through clinical trials, how will disadvantaged populations access this care given their difficulties in accessing the services that are currently available?

Economic Considerations

The next question that arises is: who will pay? At first glance, stratifying patients may seem unappealing to the pharmaceutical industry, as it may mean trading the “blockbuster” drug offered to the widest possible market for a diagnostic/therapeutic drug that is highly effective only in a specific patient cohort. Instead of drugs developed for mass use (and mass profit), drugs designed through pharmacogenomics for a niche genetic market will be exceedingly expensive. Who will cover this prohibitive cost: the patient, their private health insurer or the Government?

Training Considerations

The limiting factor in personalised medicine could be the treating doctor’s familiarity with utilising genetic information, which can be addressed by enhancing genetic ‘literacy’ amongst doctors. The role of genetics and genetic counselling is becoming increasingly recognised, and clinical genetics is now a subspecialty within the Royal Australasian College of Physicians. If personalised treatment improves morbidity and mortality, the proportion of cancer survivors requiring follow-up and management will also rise, and delivery of this service will fall on oncologists, general practitioners and other healthcare professionals. To customise medical decisions for a cancer patient meaningfully and responsibly on the basis of the complete profile of his or her tumour genome, a physician needs to know which specific data points are clinically relevant and actionable. The discovery of BRAF mutations in melanoma, [32] for example, has shown the key first step in making this a reality: the creation of a clear and accessible reference of somatic mutations in all cancer types.

Downstream of this is the education that medical schools provide to their graduates in the clinical aspects of genetics. In order to maximise the application of personalised medicine, it is imperative for current medical students to understand how genetic factors for cancer and drug response are determined, how they are altered by gene-gene interactions, and how to evaluate the significance of test results in the context of an individual patient with a specific medical profile. Students should acquaint themselves with the principles of genetic variation and how genome-wide studies are conducted. Importantly, we need to understand that the principles of simple Mendelian genetics cannot be applied to the genomics of complex diseases such as cancer.

Conclusion

The importance of cancer genomics is evident in every corner of cancer research; however, its presence in the clinic is still limited. It is undeniable that much important work remains to be done in the burgeoning area of personalised therapy, from making sense of data collected from genome-wide association studies and understanding the genetic behaviour of metastatic cancers, to regulatory and economic issues. This leaves us with the parting question: are humans just the sum of their genes?

Conflicts of interest

None declared.

Correspondence

M Wong: may.wong@student.unsw.edu.au

Categories
Articles Review Articles

Is Chlamydia trachomatis a cofactor for cervical cancer?

Introduction

The most recent epidemiological publication on the worldwide burden of cervical cancer reported that cervical cancer (0.53 million cases) was the third most common female cancer in 2008, after breast (1.38 million cases) and colorectal cancer (0.57 million cases). [1] Cervical cancer is the leading cause of cancer-related death among women in Africa, Central America, South-Central Asia and Melanesia, indicating that it remains a major public health problem despite effective screening methods and vaccine availability. [1]

The age-standardised incidence of cervical cancer in Australian women (20-69 years) decreased by approximately 50% from 1991 (the year the National Cervical Screening Program was introduced) to 2006 (Figure 1). [2,3] Despite this drop, the Australian Institute of Health and Welfare estimated that cervical cancer incidence and mortality would increase in 2010 by 1.5% and 9.6% respectively. [3]
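
For readers unfamiliar with the term, the sketch below works through direct age standardisation, the method behind the age-standardised incidence figures quoted above; the age bands, rates and standard-population weights are invented for illustration and are not Australian data.

```python
# Toy sketch of direct age standardisation, the method behind the
# "age-standardised incidence" figures quoted above. The age bands,
# rates and standard-population weights below are invented for
# illustration and are not Australian data.

age_specific_rates = {      # cases per 100,000 women per year
    "20-39": 2.0,
    "40-59": 15.0,
    "60-69": 12.0,
}
standard_weights = {        # proportion of the standard population
    "20-39": 0.45,
    "40-59": 0.40,
    "60-69": 0.15,
}

# Weighted average of age-specific rates using the standard population.
asr = sum(age_specific_rates[a] * standard_weights[a] for a in age_specific_rates)
print(f"age-standardised rate: {asr:.1f} per 100,000")
```

Because the weights come from a fixed standard population, rates calculated this way can be compared across years even as the age structure of the actual population changes.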

Human papillomavirus (HPV) is necessary but not sufficient to cause invasive cervical cancer (ICC); [4-6] not all women with a HPV infection progress to develop ICC. This implies the existence of cofactors in the pathogenesis of ICC, such as smoking, sexually transmitted infections, age at first intercourse and number of lifetime sexual partners. [7] Chlamydia trachomatis (CT) is the most common bacterial sexually transmitted infection (STI) and has been associated with the development of ICC in many case-control and population-based studies. [8-11] However, a clear cause-and-effect relationship between CT infection, HPV persistence and progression to ICC has not been elucidated. This article aims to review the literature for evidence that CT acts as a cofactor in HPV establishment and the development of ICC. Understanding CT as a risk factor for ICC is important because it is amenable to prevention.

Aim: To review the literature to determine whether infection with Chlamydia trachomatis (CT) acts as a cofactor in the pathogenesis of invasive cervical cancer (ICC) in women. Methods: Web-based Medline and Australian Institute of Health and Welfare (AIHW) searches for the key terms cervical cancer (including neoplasia, malignancy and carcinoma), chlamydia, human papillomavirus (HPV) and immunology. The search was restricted to English language publications on ICC (both squamous and adenocarcinoma) and cervical intraepithelial neoplasia (CIN) published between 1990 and 2010. Results: HPV is essential but not sufficient to cause ICC. Past and current infection with CT is associated with squamous cell carcinoma of the cervix in HPV-positive women. CT infection induces both protective and pathologic immune responses in the host, depending on the balance between Type-1 helper cell and Type-2 helper cell-mediated immunity. CT most likely behaves as a cervical cancer cofactor by 1) evading the host immune system and 2) enhancing chronic inflammation. These factors increase susceptibility to subsequent HPV infection and promote HPV persistence in the host. Conclusion: Prophylaxis against CT is significant in reducing the incidence of ICC in HPV-positive women. GPs should raise awareness of the association between CT and ICC with their patients.

Evidence for the role of HPV in the aetiology and pathogenesis of cervical cancer

HPV is a species-specific, non-enveloped, double-stranded DNA virus that infects squamous epithelia; its capsid consists of the major protein L1 and the minor protein L2. More than 130 HPV types have been classified on the basis of genotype, and HPV 16 (50-70% of cases) and HPV 18 (7-20% of cases) are the most important players in the aetiology of cervical cancer. [6,12] Genital HPV is usually transmitted via skin-to-skin contact during sexual intercourse but does not require vaginal or anal penetration, which implies that condoms offer only partial protection against CIN and ICC. [6] The risk factors for contracting HPV infection are early age at first sexual activity, multiple sexual partners, early age at first delivery, increased number of pregnancies, smoking, immunosuppression (for example, human immunodeficiency virus infection or medication) and long-term oral contraceptive use. Social customs in endemic regions, such as child marriage, polygamy and high parity, may also increase the likelihood of contracting HPV. [13] More than 80% of HPV infections are cleared by the host’s cellular immune response, which begins about three months after inoculation of the virus; HPV can be latent for 2-12 months post-infection. [14]

Molecular Pathogenesis

HPV particles enter basal keratinocytes of the mucosal epithelium via binding of virions to the basement membrane of disrupted epithelium. This is mediated by heparan sulphate proteoglycans (HSPGs) found in the extracellular matrix and on the cell surface of most cells. The virus is then internalised to establish infection, mainly via a clathrin-dependent endocytic mechanism, although some HPV types may use alternative uptake pathways, such as a caveolae-dependent route or tetraspanin-enriched domains as a platform for viral entry. [15] The virus replicates in non-dividing cells that lack the necessary cellular DNA polymerases and replication factors; HPV therefore encodes proteins that reactivate cellular DNA synthesis in non-cycling cells, inhibit apoptosis and delay the differentiation of the infected keratinocyte, allowing viral DNA replication. [6] Integration of the viral genome into host DNA causes deregulation of the E6 and E7 oncogenes of high-risk HPV (HPV 16 and 18) but not of low-risk HPV (HPV 6 and 11). This results in expression of E6 and E7 throughout the epithelium, producing the aneuploidy and karyotypic chromosomal abnormalities that accompany keratinocyte immortalisation. [5]

Natural History of HPV infection and cervical cancer

Low risk HPV infections are usually cleared by cellular immunity coupled with seroconversion and antibodies against major coat protein L1. [5,6,12] Infection with high-risk HPV is highly associated with the development of squamous cell and adenocarcinoma of the cervix, which is confounded by other factors such as smoking and STIs. [4,9,10] The progression of cervical cancer in response to HPV is schematically illustrated in Figure 2.

Chlamydia trachomatis and the immune response

CT is an obligate intracellular pathogen and the most common bacterial cause of STIs. It is associated with sexual risk-taking behaviour and, because of its slow growth cycle, often causes asymptomatic and therefore undiagnosed genital infections. [16] A CT infection is targeted by innate immune cells, T cells and B cells; protective immune responses control the infection, whereas pathological responses lead to chronic inflammation that causes tissue damage. [17]

Innate immunity

The mucosal epithelium of the genital tract provides the first line of host defence. If CT succeeds in entering the mucosal epithelium, the innate immune system is activated through recognition of pathogen-associated molecular patterns (PAMPs) by receptors such as the Toll-like receptors (TLRs). Although CT lipopolysaccharide can be recognised by TLR4, TLR2 is more important for signalling pro-inflammatory cytokine production. [18] This leads to the production of pro-inflammatory cytokines such as interleukin-1 (IL-1), IL-6, tumour necrosis factor-α (TNF-α) and granulocyte-macrophage colony-stimulating factor (GM-CSF). [17] In addition, chemokines such as IL-8 increase recruitment of innate immune cells such as macrophages, natural killer (NK) cells, dendritic cells (DCs) and neutrophils, which in turn produce more pro-inflammatory cytokines to restrict CT growth. Infected epithelial cells release matrix metalloproteinases (MMPs) that contribute to tissue proteolysis and remodelling, and neutrophils also release MMPs and elastases that contribute to tissue damage. NK cells produce interferon (IFN)-gamma, which drives CD4 T cells toward a Th1-mediated immune response. The infected tissue is infiltrated by a mixture of CD4 cells, CD8 cells, B cells and plasma cells. [17,19,20] DCs are essential for processing and presenting CT antigens to T cells, thereby linking innate and adaptive immunity.

Adaptive Immunity

Both CD4 and CD8 cells contribute to the control of CT infection. In 2000, Morrison et al. showed that B cell-deficient mice depleted of CD4 cells are unable to clear CT infection. [21] However, another study in 2005 showed that passive transfer of chlamydia-specific monoclonal antibodies into B cell-deficient, CD4-depleted mice restored their ability to control a secondary CT infection, [22] indicating a strong synergy between CD4 cells and B cells in the adaptive immune response to CT. B cells produce CT-specific antibodies to combat the pathogen. In contrast, CD8 cells produce IL-4, IL-5 and IL-13, which do not appear to protect against chlamydial infection and may even indirectly enhance chlamydial load by inhibiting the protective CD4 response. [23] A similar observation was made by Agrawal et al., who examined cervical lymphocyte cytokine responses in 255 CT antibody-positive women with or without fertility disorders (infertility and multiple spontaneous abortions) and in healthy control women negative for CT serum IgM and IgG. [20] The study revealed a significant increase in CD4 cells in the cervical mucosa of fertile women compared with women with fertility disorders and with negative control women, and only a very small increase in CD8 cells in the cervical mucosa of CT-infected women in both groups. Cervical cells from the women with fertility disorders secreted higher levels of IL-1β, IL-6, IL-8 and IL-10 in response to CT, whereas cervical cells from antibody-positive fertile women secreted significantly higher levels of IFN-gamma and IL-12. This suggests that an immune response skewed toward Th1 prevalence protects against chronic infection. [20]

The pathologic response to CT can result in inflammatory damage within the upper reproductive tract, due either to failed or weak Th1 action resulting in chronic infection, or to an exaggerated Th1 response. Alternatively, chronic infection can occur if the Th2 response dominates the Th1 response, resulting in autoimmunity and direct cell damage that in turn enhances tissue inflammation. Inflammation also increases the expression of human heat shock protein (HSP), which induces production of IL-10 via autoantibodies, leading to CT-associated pathology such as tubal blockage and ectopic pregnancy. [24]

Evidence that Chlamydia trachomatis is a cofactor for cervical cancer

Whilst it is established that HPV is a necessary factor in the development of cervical cancer, it remains unclear why the majority of women infected with HPV do not progress to ICC. Several studies in the last decade have focused on the role of STIs in the pathogenesis of ICC and have found that CT infection is consistently associated with squamous cell ICC.

In 2000, Koskela et al. performed a large case-control study within a cohort of 530,000 Nordic women to evaluate the role of CT in the development of ICC. [10] One hundred and eighty-two women with ICC (diagnosed during a mean follow-up of five years after serum sampling) were identified by linking the data files of three Nordic serum banks with the cancer registries of Finland, Norway and Sweden. Microimmunofluorescence (MIF) was used to detect CT-specific IgG, and HPV16-, 18- and 33-specific IgG antibodies were determined by standard ELISAs. Serum antibodies to CT were associated with an increased risk of cervical squamous cell carcinoma (HPV- and smoking-adjusted odds ratio (OR) 2.2; 95% confidence interval (CI) 1.3-3.5). The association also remained after adjustment for smoking in both HPV16-seronegative and HPV16-seropositive cases (OR 3.0; 95% CI 1.8-5.1 and OR 2.3; 95% CI 0.8-7.0, respectively). This study provided sero-epidemiological evidence that CT could cause squamous cell ICC; however, the authors were unable to explain the biological association between CT and squamous cell ICC.
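
For readers unfamiliar with the statistics quoted here and below, the short sketch that follows works through how an odds ratio and its 95% confidence interval are derived from a 2x2 table using Woolf's (log odds ratio) method; the counts are hypothetical and are not the data of Koskela et al.

```python
# Worked sketch of how an odds ratio (OR) and its 95% confidence
# interval are derived from a 2x2 table, as reported in the
# sero-epidemiological studies discussed here. The counts are
# hypothetical and are not the data from Koskela et al.
import math

# Exposure = CT seropositivity; outcome = squamous cell ICC.
a, b = 60, 122   # cases: exposed, unexposed
c, d = 40, 178   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)

# 95% CI via the log-OR and its standard error (Woolf's method).
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```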

Many more studies emerged in 2002 to investigate this association further. Smith et al. performed a hospital-based case-control study of 499 women with ICC from Brazil and 539 from Manila, which revealed that CT-seropositive women have a twofold increase in squamous ICC (OR 2.1; 95% CI 1.1-4.0) but not in adenocarcinoma or adenosquamous ICC (OR 0.8; 95% CI 0.3-2.2). [8] Similarly, Wallin et al. conducted a population-based prospective study of 118 women who developed cervical cancer after having had a normal Pap smear (on average 5.6 years earlier), with follow-up of 26 years. [25] PCR analysis for CT and HPV DNA showed that the relative risk for ICC associated with past CT infection, adjusted for concomitant HPV DNA positivity, was 17.1. They also concluded that the presence of CT and of HPV were not interrelated.

In contrast, another study examining the association between CT and HPV in women with cervical intraepithelial neoplasia (CIN) found an increased rate of CT infection in HPV-positive women (29/49) compared with HPV-negative women (10/80) (p<0.001). [26] However, no correlation between HPV and CT co-infection was found, and the authors suggested that the increased rate of CT infection in HPV-positive women is presumably due to HPV-related factors, including modulation of the host’s immunity. In 2004, a case-control study of 1,238 women with ICC and 1,100 control women in seven countries, coordinated by the International Agency for Research on Cancer (IARC), France, also supported the findings of previous studies. [7]

Strikingly, a very recent study in 2010 found no association between CT infection, as assessed by DNA or IgG, and the risk of cervical premalignancy after controlling for carcinogenic HPV-positive status. [11] The authors justified the difference from previous results by criticising the retrospective nature of the IARC study, which meant that HPV and CT status at the relevant times were not available. [7] However, other prospective studies have also identified an association between CT and the development of ICC. [9,25] The results of this single study therefore remain isolated from those of practically every other study, which have found an association between CT and ICC in HPV-infected women.

Consequently, it is evident that CT infection plays a role as a cofactor for squamous cell ICC in HPV-infected women, but it is not an independent cause of ICC as previously suggested by Koskela et al. [10] Previously reported cause-and-effect associations between CT and HPV most likely reflect CT infection increasing susceptibility to HPV. [9,11,27] The mechanisms by which CT can act as a cofactor for ICC relate to CT-induced inflammation (associated with metaplasia) and evasion of the host immune response, which increase susceptibility to HPV infection and enhance HPV persistence in the host. CT can directly degrade the RFX-5 and USF-1 transcription factors that induce expression of MHC class I and MHC class II respectively. [17,28] This prevents recognition of both HPV and CT by CD4 and CD8 cells, thereby preventing T-cell effector functions. CT can also suppress IFN-gamma-induced MHC class II expression by selective disruption of the IFN-gamma signalling pathways, hence evading host immunity. [28] Additionally, as discussed above, CT induces inflammation and metaplasia of infected cells, which predisposes them to becoming target cells for HPV. CT infection may also increase access of HPV to the basal epithelium and increase HPV viral load. [16]

Conclusion

There is sufficient evidence to suggest that CT infection can act as a cofactor in the development of squamous cell ICC, given the consistent positive correlations between CT infection and ICC in HPV-positive women. CT evades the host immune response, resulting in chronic inflammation, and it is presumed that this prevents the clearance of HPV from the body, thereby increasing the likelihood of developing ICC. More studies are needed to establish a clear biological pathway linking CT to ICC and to support the positive correlation found in epidemiological studies. An understanding of the significant role played by CT as a cofactor in ICC development should be used to maximise efforts in CT prophylaxis, starting at the primary health care level. Novel public health strategies must be devised to reduce CT transmission and raise awareness among women.

Conflicts of interest

None declared.

Correspondence

S Khosla: surkhosla@hotmail.com

Categories
Review Articles Articles

Ear disease in Indigenous Australians: A literature review

Introduction

The mortality gap between Indigenous and non-Indigenous Australians is wider than in any other Organisation for Economic Co-operation and Development nation with a disadvantaged Indigenous population, including Canada, New Zealand and the USA. [1] This gap reached a stark peak of seventeen years in 1996-2001. [2] Otitis media, one of the most common diseases of childhood, affects 80% of Australian children by the age of three years. [3]

Ear diseases and their complications are now rarely a direct cause of mortality, especially since the advent of antimicrobial therapy and the subsequent reduction in extracranial and intracranial complications. [4] Nevertheless, the statistics of ear disease illustrate the unacceptable disparity between the health status of these two populations cohabiting a developed nation, and are an indictment of the poor living conditions in Indigenous communities. [5] Moreover, the high prevalence of ear disease among Aboriginal and Torres Strait Islander people is associated with secondary complications that represent significant morbidity within this population, most notably conductive hearing loss, which affects up to 67% of school-age Australian Indigenous children. [6]

This article aims to illustrate the urgent need for the development of appropriate strategies and programs that are founded on evidence-based research and that integrate cultural consideration for, and design input from, Indigenous communities, in order to reduce the medical and social burden of ear disease among Indigenous Australians.

Methodology

This review covered recent literature concerning studies of ear disease in the Australian Indigenous population. Medical and social science databases were searched for recent publications from 2000-2011. Articles were retrieved from The Cochrane Library, PubMed, Google Scholar and BMJ Journals Online. Search terms aimed to capture a broad range of relevant studies. Medical textbooks available at the medical libraries of Notre Dame University (Western Australia) and The University of Western Australia were also used. A comprehensive search was also made of internet resources; these sources included the websites of The Australian Department of Health and Ageing, the World Health Organisation, and websites of specific initiatives targeting ear disease in the Indigenous Australian population.

Peer-reviewed scientific papers were excluded from this review if ear disease pertaining to Indigenous Australians was not a major focus of the paper. The studies referred to in this review vary widely in type, by virtue of the multi-faceted topic addressed, and include both qualitative and quantitative studies. Qualitative studies were included when they contributed new information or covered areas that had not been fully explored in quantitative studies. Quantitative studies with weaknesses arising from small sample size, few factors measured or weak data analysis were included only when they provided insights not available from more rigorous studies.

Overview and epidemiology

The percentage of Australian Indigenous children suffering otitis media and its complications is disproportionately high, reaching up to 73% by the age of twelve months. [7] In the Australian primary healthcare setting, Aboriginal and Torres Strait Islander children are five times more likely to be diagnosed with severe otitis media than non-Indigenous children. [8]

Chronic suppurative otitis media (CSOM) is uncommon in developed societies and is generally perceived as a disease of poverty. The World Health Organisation (WHO) states that a CSOM prevalence of 4% or greater indicates a massive public health problem warranting urgent attention in the targeted population. [9] CSOM affects Indigenous Australian children at up to ten times this threshold, [5] and at fifteen times the rate seen in non-Indigenous Australian children, [8] reflecting an unacceptably large disparity in the prevalence and severity of ear disease and its complications between Indigenous and non-Indigenous Australians.

Comparisons of the burden of mortality and the loss of disability-adjusted life years (DALYs) have been attempted between otitis media (all types grouped together) and illnesses of importance in developing countries. These comparisons show that the burden of otitis media is substantially greater than that of trachoma and comparable with that of polio, [9] with permanent hearing loss accounting for a large proportion of this DALY burden.

Whilst there are some general indications that the health of Indigenous Australian children has improved over the past 30 years, such as increased birth weight and lower infant mortality, there is evidence to suggest that morbidity associated with infections, including respiratory infections and otitis media, has not changed. [10-12]

Middle ear disease: Pathophysiology and host risk factors

The disease process of otitis media is a complex and dynamic continuum. [10] Hence, there is inconsistency throughout the medical community regarding definitions and diagnostic criteria for this disease, and controversy regarding what constitutes “gold standard” treatment. [7,13] Any discussion of the high prevalence of middle ear disease in Indigenous Australians must first establish an understanding of its aetiology and pathogenesis. Host-related risk factors for otitis media include young age, high rates of nasopharyngeal colonisation with potentially pathogenic bacteria, eustachian tube dysfunction and palato-facial abnormalities, lack of passive immunity, and acquisition of respiratory tract infections in the early stages of life. [7,9,10,14,15]

Streptococcus pneumoniae, Haemophilus influenzae and Moraxella catarrhalis are the recognised major pathogens of otitis media. However, the disease has a complex, polymicrobial aetiology, with at least fifteen other genera having been identified in middle ear effusions. [11] The organisms involved in CSOM are predominantly opportunistic, especially Pseudomonas aeruginosa, which is associated with approximately 20-50% of CSOM in Aboriginal, Torres Strait Islander and non-Indigenous children alike. [10]

Relatively new findings in otitis media pathogenesis include the identification of Alloiococcus otitidis and human metapneumovirus. [13] A. otitidis in particular, a slow-growing aerobic gram-positive bacterium, has been identified in as many as 20-30% of middle ear effusions in children with CSOM. [13,16,17] The importance of interaction between viruses and bacteria (the major identified viruses being adenovirus, rhinovirus, polyomavirus and, more recently, human metapneumovirus) is well recognised in the pathogenesis of otitis media. [13,18,19] High rates of viral-bacterial co-infection found in asymptomatic children with otitis media (42% of Indigenous and 32% of non-Indigenous children) underscore the potential value of preventative strategies targeted at specific pathogens. [19] The role of biofilms in otitis media pathogenesis has been of great interest since a fluorescence in-situ hybridisation study detected biofilms in 92% of middle ear mucosal biopsies from 26 children with recurrent otitis media or otitis media with effusion. [20] This suggested an explanation for the persistence and recalcitrance of otitis media, as bacteria growing in biofilm are more resistant to antibiotics than planktonic cells. [20]

However, translating all this knowledge into better health outcomes – by means of individual clinical treatment and community preventative strategies – is not straightforward. A more thorough understanding of the polymicrobial pathogenesis is needed if more effective therapies for otitis media are to be achieved.

Some research has investigated the possibility of a genetic predisposition to otitis media, based on its high prevalence across several Indigenous populations around the world, including the Indigenous Australian, Inuit, Maori and Native American peoples. [10] However, whilst genetic susceptibility to otitis media is a worthwhile area of further research, emphasis on it should not overshadow the significance of poverty, which exists throughout colonised Indigenous populations worldwide and is a major public health risk factor. It should be remembered that socioeconomic status is a major determinant of disparities in Indigenous health, irrespective of genetics or ethnicity.

Environmental risk factors

The environmental risk factors for otitis media are well recognised and extensively documented. They include season, inadequate housing, overcrowding, poor hygiene, lack of breastfeeding, pacifier use, poor nutrition, exposure to cigarette or wood-burning smoke, poverty and inadequate or unavailable health care. [5,7,9,10,21]

Several recent studies have examined the impact of overcrowding and poor housing conditions on the health of Indigenous children, with a particular focus on upper respiratory tract infections and ear disease. [22-24] Their results reinforce the belief that elements of the household and community environment are important underlying determinants of common childhood conditions, which impair child growth and development and contribute both to the risk of chronic disease and to the seventeen-year gap in life expectancy between Aboriginal and Torres Strait Islander people and non-Indigenous Australians. [22,23] Interestingly, one study identified a potential need for interventions targeting factors that negatively affect the psychosocial status of carers, as well as health-related behaviour, including maintenance of household and personal hygiene. [22]

Raised levels of stress and poor mental health associated with the psycho-spatial elements of overcrowded living (increased interpersonal contact, lack of privacy, loss of control, high demand, noise and lack of sleep) may therefore be considered to have a negative impact on the health of dwellers, especially those whose health largely depends on care from others, such as the elderly and young children, who are more susceptible to disease. Urgent attention is needed to improve housing and access to clean running water, nutrition and quality of care, and to give communities greater control over these improvements.

Exposure to environmental smoke is another significant, yet potentially preventable, risk factor for respiratory infections and otitis media in Indigenous children. [25,26] Of all the environmental risk factors for otitis media mentioned above, environmental smoke exposure is arguably the most readily amenable to modification. A recent randomised controlled trial tested the efficacy of a family-centred tobacco control program aimed at reducing the incidence of respiratory disease among Indigenous children in Australia and New Zealand. It found that interventions encouraging smoking cessation and reducing the exposure of Indigenous children to environmental smoke had the potential for significant benefit, especially when the intervention design included culturally sound, intensive, family-centred programs that emphasised capacity-building within the Indigenous community. [25] Such studies testify to the high levels of interest, cooperation, pro-activeness and compliance that Indigenous communities can demonstrate towards public health interventions, provided the study design is culturally appropriate and Indigenous people are meaningfully engaged in preventative health efforts.

Preventative strategies

The advent of the 7-valent pneumococcal conjugate vaccine has seen a substantial decrease in invasive pneumococcal disease. However, changing patterns of antibiotic resistance and pneumococcal serotype replacement have been documented since the introduction of the vaccine, and large randomised controlled trials have shown its reduction of risk of acute otitis media and tympanic membrane perforation to be minimal. [13,27] One retrospective cohort study’s data suggested that the pneumococcal immunisation program may be unexpectedly increasing the risk of acute lower respiratory infection (ALRI) requiring hospitalisation among vaccinated children, especially after administration of the 23vPPV booster at eighteen months of age. [28] These findings warrant re-evaluation of the pneumococcal immunisation program and further research into alternative medical prevention strategies.

Swimming pools in remote communities have been associated with a reduced prevalence of tympanic membrane perforations (as well as pyoderma), indicating long-term benefits through reduction of the chronic disease burden and improved educational and social outcomes. [6] No outbreaks of infectious disease have occurred in the swimming pool programmes to date, and water quality is regularly monitored according to government regulations. Provided that adequate funding continues to maintain high safety and environmental standards for community swimming pools, their net effect on community health will remain positive and worthwhile.

Treatment: Current guidelines and practices, potential future treatments

Over the last ten years there has been a general tendency to reduce immediate antibiotic treatment for otitis media in children aged over two years, with the “watchful waiting” approach becoming more customary among primary care practitioners. [7] The current therapeutic guidelines note that antibiotic therapy provides only modest benefit for otitis media, with sixteen children requiring treatment at first presentation to prevent one child experiencing pain at two to seven days. [29] Routine antibiotics are recommended only for infants less than six months of age and for all Aboriginal and Torres Strait Islander children at the initial presentation of acute otitis media. [8] Current guidelines acknowledge that suppurative complications of otitis media are common among Indigenous Australians; hence, specific therapeutic guidelines apply to these patients. [30] For patients in whom antibiotics are indicated, a five-day course of twice-daily amoxicillin is the regimen of choice. Combined therapy with a seven-day course of higher-dose amoxicillin and clavulanate is recommended for patients with a poor response to amoxicillin or those in populations at high risk of amoxicillin-resistant Streptococcus pneumoniae. For CSOM, topical ciprofloxacin drops are now approved for use in Aboriginal and Torres Strait Islander children, after a study in 2003 supported their efficacy in the treatment of CSOM. [31,32]
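As an aside, the "sixteen children" figure quoted above is a number needed to treat (NNT), and its relationship to absolute risk reduction (ARR) is simple arithmetic. The short sketch below is illustrative only; the NNT of sixteen is taken from the guideline cited above, while the example event rates are hypothetical assumptions rather than figures from the underlying trials.

# Relating the guideline's number needed to treat (NNT) to an absolute
# risk reduction (ARR). The NNT of 16 is from the text; the baseline
# event rate below is a hypothetical assumption for illustration.
nnt = 16
arr = 1 / nnt
print(f"NNT {nnt} corresponds to an ARR of {arr:.2%}")  # 6.25%

# Worked the other way: hypothetical proportions of children with pain
# at two to seven days, with and without antibiotics.
risk_without_antibiotics = 0.20                      # hypothetical
risk_with_antibiotics = risk_without_antibiotics - arr
nnt_check = 1 / (risk_without_antibiotics - risk_with_antibiotics)
print(f"ARR {risk_without_antibiotics - risk_with_antibiotics:.3f} gives NNT {nnt_check:.0f}")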

Treatment failure with antibiotics has been observed in some Aboriginal and Torres Strait Islander communities due to poor adherence to the twice-daily, five- or seven-day courses of amoxicillin. [33] The reasons for non-adherence remain unclear. They may relate to language barriers (misinterpretation or non-comprehension of instructions regarding antibiotic use), storage (lacking a home fridge in which to keep the antibiotics), shared care of the child patient (rather than one guardian) or remoteness (reduced access to a healthcare facility and reduced likelihood of follow-up). Treatment failure with antibiotics has also been noted in cases of optimal compliance in Indigenous communities, indicating that poor clinical outcomes may also be due to organism resistance and/or pathogenic mechanisms. [11]

A recent study compared the clinical effectiveness of single-dose azithromycin treatment with the recommended seven-day course of amoxicillin among Indigenous children with acute otitis media in rural and remote communities in the Northern Territory. [33] Whilst azithromycin was found to be more effective at eradicating otitis media pathogens than amoxicillin, azithromycin treatment was associated with an increase in carriage of azithromycin-resistant Streptococcus pneumoniae. Another recent study investigated the antimicrobial susceptibility of Moraxella catarrhalis isolated from a cohort of children with otitis media in the Kalgoorlie-Boulder region of Western Australia. [34] It found that a large proportion of strains were resistant to ampicillin and/or co-trimoxazole. Findings from studies such as these indicate that the current therapeutic guidelines, which recommend amoxicillin as the antibiotic of choice for the treatment of otitis media, may require revision.

Overall, further research is needed to determine which antibiotics best eradicate otitis media pathogens and reduce bacterial load in the nasopharynx in order to achieve better clinical outcomes. Recent studies indicate that currently recommended antibiotics may need to be reviewed in light of increasing rates of resistant organisms and emerging evidence of new organisms.

Social ramifications associated with ear disease

There is substantial evidence demonstrating that ear disease has a significant negative impact on the developmental future of Aboriginal and Torres Strait Islander children. [35] Children found to have early-onset otitis media (under twelve months of age) are at high risk of developing long-term speech and language problems secondary to conductive hearing loss; the areas thought to be affected include auditory processing, attention, behaviour, speech and language. [36] Between 10% and 67% of Indigenous Australian school-age children have perforated tympanic membranes, and 14% to 67% have some degree of hearing loss. [37]

Sub-optimal hearing can be a serious handicap for Indigenous children who begin school with delayed oral skills, especially if English is not their first language. Learning the phonetics and grammar of a second language with the unrecognised disability of impaired hearing renders the classroom experience difficult and unpleasant, resulting in reduced concentration and increased distractibility, boredom and non-attendance. Truancy predisposes to anti-social behaviour, especially among adolescents, who by this age tend to no longer have infective ear disease but do have established permanent hearing loss. [38] Poor engagement in education and employment, alcohol-fuelled interpersonal violence, domestic violence, and communication difficulties with police and in court have all been linked to the disadvantage of hearing loss and to eventual involvement in the criminal justice system. [39]

In the Northern Territory, where the Indigenous population accounts for only 30% of the general population, 82% of the 1,100 inmates in Northern Territory correctional facilities in 2010 were found to be Aboriginal or Torres Strait Islander. [40] Two studies conducted within the past two years investigated the prevalence of hearing loss among inmates in Northern Territory correctional facilities and found that more than 90% of Australian Indigenous inmates had a significant hearing loss of >25dB. [39] A third study, in a Northern Territory youth detention centre, demonstrated that as many as 90% of Australian Indigenous youth in detention may have hearing loss, [41] whilst another study found that almost half the female Indigenous inmates at a Western Australian prison had significant hearing loss, almost ten-fold the rate of non-Indigenous inmates. [37]

The Northern Territory study of adult inmates also showed a comparatively low prevalence of hearing loss among Indigenous persons who were not imprisoned (33%, compared with 94% of those imprisoned), [39] indicating a strong association between hearing loss and the over-representation of Indigenous people in Australian correctional facilities. Although this area warrants further research, these data suggest that ear disease and hearing loss may have played a role in many Aboriginal and Torres Strait Islander people becoming inmates.

Changes and developments for the future

As discussed throughout this article, the unacceptably high burden of ear disease among Indigenous Australians is due to a myriad of medical, biological, socio-cultural, pedagogical, environmental, logistical and political factors. All of these contributing factors must be addressed if a reduction in the morbidity and social ramifications associated with ear disease among Indigenous Australians is to be achieved. The great disparity in health service provision could eventually be eliminated given the political will and sufficient, specific funding.

Addressing these factors will require the integration of multidisciplinary efforts from medical researchers, health care practitioners, educational professionals, correctional facilities, politicians and, most importantly, the members of Indigenous communities. The latter’s active involvement in, and responsibility for, community education, prevention and medical management of ear disease is imperative to achieving these goals.

The Government’s response to a recent federal Senate inquiry into Indigenous ear health included $47.7 million over four years to support changes to the Australian Government’s Hearing Services Program (HSP). This was in addition to other existing funds available to eligible hearing-impaired individuals, such as the More Support for Students with Disabilities Initiative and the Better Start for Children With a Disability intervention. [42] Whilst this addition to the federal budget may be seen as a positive step in the Government’s agenda to ameliorate the burden of ear disease among the Indigenous Australian population, it will serve little purpose unless the funding is sustainably invested and effectively implemented along appropriate avenues, which should:

1. Specifically target and reduce identified risk factors of otitis media.

2. Support the implementation of effective, evidence-based, public health prevention strategies, and encourage community control over improvements to education, employment opportunities, housing infrastructure and primary healthcare services.

3. Support constructive and practical multidisciplinary research into the areas of pathogenicity, diagnosis, treatment, vaccines, risk factors and prevention strategies of otitis media.

4. Support and encourage training and employment for healthcare and educational professionals in regional and remote areas. These professionals include doctors, audiologists, speech pathologists and teachers, and all of these professions should offer programs that increase the number of practising Aboriginal and Torres Strait Islander clinicians and teachers.

5. Adequately fund ear disease prevention and medical treatment programs, including screening programs, so that they may expand and increase in both number and efficacy. Such services should concentrate on prevention education, accurate diagnosis, antibiotic treatment, surgical intervention (where applicable) and scheduled follow-up of affected children. An exemplary program is Queensland’s “Deadly Ears” program. [43]

6. Support the needs of students and inmates with established hearing loss in the educational and correctional environments, for example, through provision of multidisciplinary healthcare services and the use of sound field systems with wireless infrared technology.

7. Support community and family education regarding the effects of hearing loss on speech, language and education.

All of these objectives should be fulfilled by cost-effective, sustainable and culturally-sensitive means. It is of paramount importance that these objectives be well received by, and include substantial input from, Indigenous members of the community. Successful implementation that reaches the grass-roots level (thus avoiding the so-called “trickle-down” effect) will require not only substantially increased resources, but also the involvement of Indigenous community members in intervention design and delivery.

Conclusion

Whilst there remains a continuous need for valuable research in the area of ear disease, it appears that failure to apply existing knowledge is currently more of a problem than a dearth of knowledge. The design, funding and implementation of prevention strategies, community education, medical services and programs, and modifications to educational and correctional settings should be the current priorities in the national agenda addressing the burden of ear disease among Aboriginal and Torres Strait Islander people.

Acknowledgements

Thank you to Dr Matthew Timmins and Dr Greg Hill for providing feedback on this review.

Conflicts of interest

None declared.

Correspondence

S Hill: shillyrat@hotmail.com

 

Categories
Review Articles Articles

Suxamethonium versus rocuronium in rapid sequence induction: Dispelling the common myths

Rapid sequence induction (RSI) is a technique used to facilitate endotracheal intubation in patients at high risk of aspiration and for those who require rapid securing of the airway. In Australia, RSI protocols in emergency departments usually dictate a predetermined dose of an induction agent and a neuromuscular blocker given in rapid succession. Suxamethonium, also known as succinylcholine, is a depolarising neuromuscular blocker (NMB) and is commonly used in RSI. Although it has a long history of use and is known for producing good intubating conditions in minimal time, suxamethonium possesses certain serious side effects and contraindications (that are beyond the scope of this article).

If there existed no alternative NMB, then the contraindications associated with suxamethonium would be irrelevant – yet there exists a suitable alternative. Rocuronium, a non-depolarising NMB introduced into Australia in 1996, has no known serious side effects or contraindications (excluding anaphylaxis). Unfortunately, many myths surrounding the properties of rocuronium have propagated through the anaesthesia and emergency medicine communities, and have resulted in some clinicians remaining hesitant to embrace this drug as a suitable alternative to suxamethonium for RSI. This essay aims to dispel a number of these myths through presenting the evidence currently available and thus allowing physicians to make informed clinical decisions that have the potential to significantly alter patient outcomes. It is not intended to provide a clear answer to the choice of NMB in RSI, but rather to encourage further debate and discussion on this controversial topic under the guidance of evidence-based medicine.

One of the more noteworthy differences between these two pharmacological agents is their duration of action. The paralysis induced by suxamethonium lasts for five to ten minutes, while rocuronium has a duration of action of 30-90 minutes, depending on the dose used. The significantly shorter duration of action of suxamethonium is often quoted by clinicians as being of great significance in their decision to use this drug. In fact, some clinicians are of the opinion that by using suxamethonium they build a certain ‘safety margin’ into the RSI protocol, in the belief that the NMB will ‘wear off’ in time for the patient to begin breathing spontaneously again in the case of a failed intubation. Benumof et al. (1997) [1] explored this concept by methodically analysing the extent of haemoglobin desaturation (SpO2) following administration of suxamethonium 1.0mg/kg in patients with a non-patent airway. This study found that critical haemoglobin desaturation will occur before functional recovery (that is, return of spontaneous breathing).

In 2001, Heier et al. [2] conducted a study involving twelve healthy volunteers aged 18 to 45 years, all pre-oxygenated to an end-tidal oxygen concentration >90% (after breathing an FiO2 of 1.0 for three minutes). Following the administration of thiopental and suxamethonium 1.0mg/kg, no assisted ventilation was provided and oxygen saturation levels were closely monitored. The results demonstrated that one third of the patients desaturated to SpO2 <80% (at which point they received assisted ventilation during the trial). As the authors clearly stated, the study participants were all young, healthy and slim individuals who received optimal pre-oxygenation, yet a significant proportion still suffered critical haemoglobin desaturation before spontaneous ventilation resumed. In a real-life scenario, particularly in the patient population requiring RSI, an even higher proportion of patients would be expected to display significant desaturation, owing to their failing health and the limited time available for pre-oxygenation. Although one may be inclined to argue that the results would be altered by reducing the dose of suxamethonium, Naguib et al. [3] found that, while reducing the dose from 1.0mg/kg to 0.6mg/kg did slightly reduce the incidence of SpO2 <90% (from 85% to 65%), it did not shorten the time to spontaneous diaphragmatic movements. Therefore, the notion that the short duration of action of suxamethonium improves safety in RSI is not supported, and it should not be trusted as a reliable means of rescuing a “cannot intubate, cannot ventilate” situation.

Having established that the difference in duration of action should not lull one into a false belief of improved safety in RSI, let us compare the effect of the two drugs on oxygen saturation levels if apnoea were to occur following their administration. As suxamethonium is a depolarising agent, it causes muscle fasciculations following administration, whereas rocuronium, a non-depolarising agent, does not. It has long been questioned whether the fasciculations associated with suxamethonium alter the time to onset of haemoglobin desaturation if the airway cannot be secured in a timely fashion and prolonged apnoea ensues.

This concept was explored by Taha et al., [4] who divided participants into three groups: lidocaine/fentanyl/rocuronium, lidocaine/fentanyl/suxamethonium and propofol/suxamethonium. Upon measuring the time to onset of haemoglobin desaturation (deemed to be SpO2 <95%), it was found that both groups receiving suxamethonium desaturated significantly faster than the group receiving rocuronium. Comparing the two groups receiving suxamethonium reveals a considerable difference in results, with the lidocaine/fentanyl group having a longer time to onset of desaturation than the propofol group. Since lidocaine and fentanyl are recognised to decrease (but not completely abolish) the intensity of suxamethonium-induced fasciculations, these results suggest that the fasciculations associated with suxamethonium do result in a quicker onset of desaturation compared with rocuronium.

Another recent study by Tang et al. [5] provides further clarification on this topic. Overweight patients with a BMI of 25-30 who were undergoing elective surgery requiring RSI were enrolled in the study. Patients were given either 1.5mg/kg suxamethonium or 0.9mg/kg rocuronium and no assisted ventilation was provided following induction until SpO2 <92% (designated as the ‘Safe Apnoea Time’). The time taken for this to occur was measured in conjunction with the time required to return the patient to SpO2 >97% following introduction of assisted ventilation with FiO2 of 1.0. The authors concluded that suxamethonium not only made the ‘Safe Apnoea Time’ shorter but also prolonged the recovery time to SpO2 >97% compared to rocuronium. In summary, current evidence suggests that the use of suxamethonium results in a faster onset of haemoglobin desaturation than rocuronium, most likely due to the increased oxygen requirements associated with muscle fasciculations.

Since RSI is typically used in situations where the patient is at high risk of aspiration, the underlying goal is to secure the airway in the minimum time possible. Thus, the time required for the NMB to provide adequate intubating conditions is of great importance, with a shorter time translating into better patient outcomes, assuming all other factors are equal. Suxamethonium has long been regarded as the ‘gold-standard’ in this regard, yet recent evidence suggests that the poor reputation of rocuronium with respect to onset time is primarily due to inadequate dosing. Recommended doses for suxamethonium are consistently stated as 1.0-1.5mg/kg, [6] whereas rocuronium doses have often been quoted as 0.6mg/kg, which, as will be established below, is inadequate for use in RSI.
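Because all of the doses debated here are weight-based, a quick calculation makes the practical difference between the quoted regimens concrete. The sketch below assumes a hypothetical 80 kg patient (the weight is an illustrative assumption, not a figure from the studies discussed); the mg/kg values are those quoted in this article.

# Weight-based dose calculations for the regimens quoted in this article.
# The 80 kg patient weight is a hypothetical assumption for illustration.
weight_kg = 80

doses_mg_per_kg = {
    "suxamethonium, lower quoted dose": 1.0,
    "suxamethonium, upper quoted dose": 1.5,
    "rocuronium, commonly quoted (inadequate for RSI)": 0.6,
    "rocuronium, dose supported by the evidence for RSI": 1.2,
}

for agent, dose_per_kg in doses_mg_per_kg.items():
    total_mg = dose_per_kg * weight_kg
    print(f"{agent}: {dose_per_kg} mg/kg x {weight_kg} kg = {total_mg:.0f} mg")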

A prospective, randomised trial published by Sluga et al. [7] in 2005 concluded that, when intubating conditions following administration of either 1.0mg/kg suxamethonium or 0.6mg/kg rocuronium were compared, there was a significant improvement in conditions with suxamethonium at 60 seconds post-administration. Another study [8] examined the frequency of good and excellent intubating conditions with rocuronium (0.6mg/kg and 1.0mg/kg) or suxamethonium (1.0mg/kg). Of the groups receiving rocuronium, the 1.0mg/kg group had a consistently greater frequency of both good and excellent intubating conditions at 50 seconds. While the rocuronium 1.0mg/kg and suxamethonium 1.0mg/kg groups had a similar frequency of acceptable intubating conditions, there was a higher incidence of excellent conditions in the suxamethonium group. A subsequent study [9] confirmed this finding, with the intubating physician reporting a higher degree of overall satisfaction with the paralysis provided by suxamethonium 1.7mg/kg compared with rocuronium 1.0mg/kg. In other words, it appears that the higher dose of 1.0mg/kg of rocuronium produces better intubating conditions than 0.6mg/kg, yet not to the same extent as suxamethonium.

If no evidence were available comparing an even higher dose of rocuronium, the argument for utilising suxamethonium in RSI would certainly be strengthened by the articles presented above. However, a retrospective evaluation of RSI and intubation from an emergency department in Arizona, United States provides further compelling evidence. [10] The median doses used were suxamethonium 1.65mg/kg (n=113) and rocuronium 1.19mg/kg (n=214), and the study authors state there was “no difference in success rate for first intubation attempt or number of attempts regardless of the type of paralytic used or the dose administered.” To add further weight to this issue, a Cochrane Review in 2008 titled “Rocuronium versus succinylcholine for rapid sequence induction intubation” combined 37 studies for analysis and concluded that “no statistical difference in intubating conditions was found when [suxamethonium] was compared to 1.2mg/kg rocuronium.” [11] Hence, there is sufficient evidence that, with adequate dosing, rocuronium (1.2mg/kg) is comparable to suxamethonium in time to onset of intubating conditions, and this argument cannot be used to guide selection of a neuromuscular blocker for RSI.

In recent times, particularly in Australia, questions have been posed regarding a supposedly increased risk of anaphylaxis with rocuronium. Rose et al. [12] from Royal North Shore Hospital in Sydney addressed this query in a paper in 2001. They found that the incidence of anaphylaxis to any NMB is determined by its market share. Since the market share (that is, number of uses) of rocuronium is increasing, the cases of anaphylaxis are also increasing, but importantly they are only increasing “in proportion to usage.” Of note, the authors state that rocuronium should still be considered a drug of “intermediate risk” of anaphylaxis, compared to suxamethonium which is “high risk”. Although not addressed in this paper, there are additional factors that have the potential to alter the incidence of anaphylaxis, such as geographical variation that may be related to the availability of pholcodine in cough syrup. [13]

Before the focus of this paper shifts to a novel agent that has the potential to significantly alter the choice between suxamethonium and rocuronium in RSI, one pertinent issue remains to be discussed. One of the most frequently cited properties of suxamethonium is its brief duration of action of only five to ten minutes, with the Cochrane Review itself stating that “succinylcholine was clinically superior as it has a shorter duration of action,” despite finding no statistical difference otherwise. [11]

The question that needs to be posed is whether this is truly an advantage for an NMB used in RSI. Patients who require emergency intubation often have a dire need for a secure airway to be established; simply allowing the NMB to “wear off” and the patient to begin breathing spontaneously again does nothing to alter their situation. One must also consider that, even if the clinician were aware of the evidence against relying on suxamethonium’s short duration of action to rescue them from a failed intubation, the decision to initiate further measures (that is, progress to a surgical airway) could be delayed by that expectation. If rocuronium, with its longer duration of action, were used, would clinicians then feel more compelled to ‘act’ rather than ‘wait’ in this rare scenario, knowing that the patient would remain paralysed? If rescue techniques such as a surgical airway were instigated, would the awakening of the patient (as suxamethonium wears off) be a hindrance? Although the use of rocuronium presents the risk of a patient requiring prolonged measures to maintain oxygenation and ventilation in a “cannot intubate, can ventilate” scenario, paralysis would be reliably maintained if a surgical airway were required.

No discussion of the suxamethonium versus rocuronium debate would be complete without mentioning a new drug that appears to hold great potential in this arena: sugammadex. A γ-cyclodextrin specifically designed to encapsulate rocuronium and thus cause its dissociation from the acetylcholine receptor, sugammadex reverses the effects of rocuronium-induced neuromuscular blockade. It also appears to have some crossover effect on vecuronium, another steroidal non-depolarising NMB. While acetylcholinesterase inhibitors are often used to reverse NMBs, they act non-specifically on both muscarinic and nicotinic synapses and cause many unwanted side effects. If they are given before there is partial recovery (>10% twitch activity) of neuromuscular blockade, they do not shorten the time to 90% recovery and are thus ineffective against profound block.

Sugammadex was first administered to human volunteers in 2005 with minimal side effects. [14] It displayed great potential, achieving recovery from rocuronium-induced paralysis within a few minutes. Further trials were conducted, including one by de Boer et al. [15] in the Netherlands. Neuromuscular blockade was induced with rocuronium 1.2mg/kg, and sugammadex doses ranging from 2.0 to 16.0mg/kg were given. With recovery of the train-of-four ratio to 0.9 designated as the primary outcome, the authors found that successive increases in the dose of sugammadex decreased the time required to reverse profound blockade at five minutes following administration of rocuronium, with sugammadex 16mg/kg giving a mean recovery time of only 1.9 minutes compared with a placebo recovery time of 122.1 minutes. In a review article, Mirakhur [16] further supported the use of high-dose sugammadex (16mg/kg) in situations requiring rapid reversal of neuromuscular blockade.

With an effective reversal agent for rocuronium presenting a possible alternative to suxamethonium in rapid sequence inductions, Lee et al. [17] closely examined the differences in time to termination of effect. They studied 110 patients randomised to either rocuronium 1.2mg/kg or suxamethonium 1mg/kg. At three minutes following administration of rocuronium, 16mg/kg sugammadex was given. The results of this study confirmed the potential of sugammadex and its possible future role in RSI, as the study group given rocuronium and sugammadex (at three minutes) recovered significantly faster than those given suxamethonium (mean recovery time to first twitch 10% = 4.4 and 7.1 minutes respectively). The evidence therefore suggested that administering sugammadex 16mg/kg at three minutes after rocuronium 1.2mg/kg resulted in a shorter time to reversal of neuromuscular blockade compared to spontaneous recovery with suxamethonium. While sugammadex has certainly shown great potential, it remains an expensive drug and there still exist uncertainties regarding repeat dosing with rocuronium following reversal with sugammadex, [18] as well as the need to suitably educate and train staff on its appropriate use, as demonstrated by Bisschops et al. [19] It is also important to note that for sugammadex to be of use in situations where reversal of neuromuscular blockade is required, the full reversal dose (16mg/kg) must be readily available. Nonetheless, it appears as if sugammadex may revolutionise the use of rocuronium not only in RSI, but also for other forms of anaesthesia in the near future.
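The requirement that the full 16mg/kg reversal dose be readily available has a practical corollary: the absolute amount of sugammadex needed scales with patient weight. The brief sketch below, which again assumes a hypothetical 80 kg patient (an illustrative assumption only), shows the total quantity that would need to be on hand before relying on immediate reversal.

# Total sugammadex required for immediate reversal at the full 16 mg/kg dose.
# The 80 kg patient weight is a hypothetical assumption for illustration.
weight_kg = 80
reversal_dose_mg_per_kg = 16

total_mg = reversal_dose_mg_per_kg * weight_kg
print(f"Full reversal dose: {reversal_dose_mg_per_kg} mg/kg x {weight_kg} kg = {total_mg} mg")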

As clinicians, we should strive to achieve the best patient outcomes possible. Without remaining abreast of the current literature, our exposure to new therapies will be limited and, ultimately, patients will not always be provided with the high level of medical care they desire and deserve. I urge all clinicians who are tasked with the difficult responsibility of establishing an emergency airway with RSI to consider rocuronium as a viable alternative to suxamethonium and to strive to understand the pros and cons associated with both agents, in order to ensure that an appropriate choice is made on the basis of solid evidence-based medicine.

Conflicts of interest

None declared.

Correspondence

S Davies: sjdav8@student.monash.edu

References

[1] Benumof JL, Dagg R, Benumof R. Critical haemoglobin desaturation will occur before return to an unparalysed state following 1 mg/kg intravenous succinylcholine. Anesthesiology. 1997; 87:979-82.

[2] Heier T, Feiner JR, Lin J, Brown R, Caldwell JE. Hemoglobin desaturation after succinylcholine-induced apnea. Anesthesiology. 2001; 94:754-9.

[3] Naguib M, Samarkandi AH, Abdullah K, Riad W, Alharby SW. Succinylcholine dosage and apnea-induced haemoglobin desaturation in patients. Anesthesiology. 2005; 102(1):35-40.

[4] Taha SK, El-Khatib MF, Baraka AS, Haidar YA, Abdallah FW, Zbeidy RA, Siddik-Sayyid SM. Effect of suxamethonium vs rocuronium on onset of oxygen desaturation during apnoea following rapid sequence induction. Anaesthesia. 2010; 65:358-61.

[5] Tang L, Li S, Huang S, Ma H, Wang Z. Desaturation following rapid sequence induction using succinylcholine vs. rocuronium in overweight patients. Acta Anaesthesiol Scand. 2011; 55:203-8.

[6] El-Orbany M, Connolly LA. Rapid sequence induction and intubation: current controversy. Anesth Analg. 2010; 110(5):1318-24.

[7] Sluga M, Ummenhofer W, Studer W, Siegemund M, Marsch SC. Rocuronium versus succinylcholine for rapid sequence induction of anesthesia and endotracheal intubation: a prospective, randomized trial in emergent cases. Anesth Analg. 2005; 101:1356-61.

[8] McCourt KC, Salmela L, Mirakhur RK, Carroll M, Mäkinen MT, Kansanaho M, Kerr C, Roest GJ, Olkkola KT. Comparison of rocuronium and suxamethonium for use during rapid sequence induction of anaesthesia. Anaesthesia. 1998; 53:867-71.

[9] Laurin EG, Sakles JC, Panacek EA, Rantapaa AA, Redd J. A comparison of succinylcholine and rocuronium for rapid-sequence intubation of emergency department patients. Acad Emerg Med. 2000; 7:1362-9.

[10] Patanwala AE, Stahle SA, Sakles JC, Erstad BL. Comparison of succinylcholine and rocuronium for first-attempt intubation success in the emergency department. Acad Emerg Med. 2011; 18:11-4.

[11] Perry JJ, Lee JS, Sillberg VAH, Wells GA. Rocuronium versus succinylcholine for rapid sequence induction intubation. Cochrane Database Syst Rev. 2008:CD002788.

[12] Rose M, Fisher M. Rocuronium: high risk for anaphylaxis? Br J Anaesth. 2001; 86(5):678-82.

[13] Florvaag E, Johansson SGO. The pholcodine story. Immunol Allergy Clin North Am. 2009; 29:419-27.

[14] Gijsenbergh F, Ramael S, Houwing N, van Iersel T. First human exposure of Org 25969, a novel agent to reverse the action of rocuronium bromide. Anesthesiology. 2005; 103:695-703.

[15] De Boer HD, Driessen JJ, Marcus MA, Kerkkamp H, Heeringa M, Klimek M. Reversal of rocuronium-induced (1.2 mg/kg) profound neuromuscular block by sugammadex. Anesthesiology. 2007; 107:239-44.

[16] Mirakhur RK. Sugammadex in clinical practice. Anaesthesia. 2009; 64:45-54.

[17] Lee C, Jahr JS, Candiotti KA, Warriner B, Zornow MH, Naguib M. Reversal of profound neuromuscular block by sugammadex administered three minutes after rocuronium. Anesthesiology. 2009; 110:1020-5.

[18] Cammu G, de Kam PJ, De Graeve K, van den Heuvel M, Suy K, Morias K, Foubert L, Grobara P, Peeters P. Repeat dosing of rocuronium 1.2 mg/kg after reversal of neuromuscular block by sugammadex 4.0 mg/kg in anaesthetized healthy volunteers: a modelling-based pilot study. Br J Anaesth. 2010; 105(4):487-92.

[19] Bisschops MM, Holleman C, Huitink JM. Can sugammadex save a patient in a simulated ‘cannot intubate, cannot ventilate’ situation? Anaesthesia. 2010; 65:936-41.