
Legalising medical cannabis in Australia

Introduction

Cannabis was first used in China around 6,000 years ago and is one of the oldest psychotropic drugs known to humankind. [1] There are several species of cannabis, the most common being Cannabis sativa, Cannabis indica and Cannabis ruderalis. [2] The two main products derived from cannabis are hashish (the thick resin of the plant) and marijuana (the dried flowers and leaves of the plant). [1] Cannabis contains over 460 chemicals and 60 cannabinoids (chemicals that activate cannabinoid receptors in the body). [1,2] The major psychotropic constituent of cannabis is delta-9-tetrahydrocannabinol (THC); others include cannabinol and cannabidiol (CBD).

Cannabinoids exert their effects throughout the body by binding to specific cannabinoid receptors. There are two types of cannabinoid receptor in the body: CB1 and CB2. Both are G-protein coupled plasma membrane receptors. CB1 receptors are mostly found in the central nervous system, whilst CB2 receptors are mainly associated with immunological cells and lymphoid tissue such as the spleen, tonsils, and thymus. [3,4]

Delta-9-tetrahydrocannabinol (THC) and other cannabinoids are strongly lipophilic and are rapidly distributed around the body. [5] Because of this lipophilicity, cannabinoids accumulate in adipose tissue and have a long elimination time of up to 30 days, although the psychoactive effects generally wear off after a few hours. Medical cannabis can be dispensed and taken in a variety of ways, including as a herbal cigarette, in ingestible forms, or as herbal tea. However, marijuana cigarettes are commonly preferred as they provide higher bioavailability. [5]

Medical marijuana users represent a wide range of ages, levels of educational attainment, employment statuses, and racial groups. [6] A Californian study examining medical marijuana use showed that 76.6% of medical marijuana users consumed seven grams of marijuana or less per week. The majority of medical marijuana users are therefore likely to consume amounts equivalent to mild to moderate recreational cannabis use. [6]

The rescheduling of cannabis in Australia draws strong debate, with firm opinions on both sides of the issue. This article provides an overview of the most common arguments for and against the legalisation of cannabis for medicinal purposes.

The legal status of medical marijuana in Australia

Currently, cannabis is a Schedule nine drug in all Australian states and territories, placing it in the same category as drugs like heroin and lysergic acid diethylamide (LSD). [7] Legally, drug scheduling in Australia is a state issue; however, all states abide by the federal government’s scheduling of cannabis as a Schedule nine drug, as per the Standard for the Uniform Scheduling of Medicines and Poisons. [7,8] Use of a Schedule nine drug for recreational or medical purposes is illegal and a criminal offence, and research into cannabis in Australia is highly restricted. [7] Cannabis use is nonetheless common in Australia, with approximately 40% of Australians aged fourteen years or older reporting having used cannabis, and over 300,000 Australians using it daily. A number of Australians are already self-medicating with cannabis for a range of complaints including chronic pain, depression, arthritis, nausea, and weight loss. However, these people risk legal action from authorities, particularly if cultivating their own cannabis, and if they purchase cannabis from a dealer they also face quality and supply issues. [9] Proponents of medicinal cannabis envisage a system similar to that for other drugs of dependence, such as opiates, whereby holders of a valid prescription would be able to purchase or access the drug while recreational use remained illegal.

A number of countries and jurisdictions have decriminalised cannabis for medical purposes, including the Netherlands, Israel, Canada, Spain and eighteen states of the United States of America (USA). [10] In Australia, the drugs Dronabinol (containing only THC), Nabilone (containing a synthetic analogue of THC) and Nabiximols (a spray containing CBD and THC) are available as Schedule eight drugs. [7] Schedule eight drugs are those with a high potential for abuse and dependence, requiring regulation of their distribution and possession. [7,8] However, some claim that these preparations lack many of the cannabinoids found in the natural cannabis plant, leading to different physiological and therapeutic effects compared to natural cannabis. [4] Public support in Australia for medical cannabis is very high, with one survey finding 69% of the public supporting the use of medical cannabis and 75% supporting more research into the medical potential of the drug. [11]

Arguments for legalisation

Pain relief

Marijuana has potent analgesic properties which can be used for pain relief in a variety of conditions that cause intense pain, such as cancer and acquired immunodeficiency syndrome (AIDS). Marijuana may even provide superior pain relief when compared to opiates and opioids. A parallel-group study in the United Kingdom (UK) compared a THC and CBD extract, a THC-only extract, and a placebo in the treatment of cancer pain. It found that a THC:CBD mixture (such as that found naturally in the cannabis plant) is efficacious for the relief of advanced cancer pain that is not adequately controlled by opiates alone. [4] The legalisation of medical cannabis could therefore relieve pain and improve the quality of life of severely ill patients suffering from a range of painful conditions.

Appetite stimulation

Animal studies have shown that THC can stimulate appetite and lead to an increase in food intake. [2,5,12,13] There are a large number of potential clinical applications for THC as an appetite stimulant. Conditions which can cause cachexia (uncontrolled wasting), such as HIV/AIDS, cancer and multiple sclerosis (MS), could be treated with THC to stimulate the patient’s appetite, increase food intake, restore weight, and improve the strength and wellbeing of the patient. [2,13] Human trials in the 1980s in which healthy control subjects inhaled cannabis found that cannabis increased caloric intake by 40%. [14] The legalisation of cannabis for medical purposes could help improve both health outcomes and quality of life in patients suffering from a range of conditions.

Anti-emetic

In Canada and the USA, dronabinol and nabilone have been used in the treatment of chemotherapy-induced nausea and vomiting since the 1980s. [15] A systematic literature review found that cannabinoids were more effective than established anti-emetic drugs for this indication. [16] By reducing or eliminating the often debilitating and painful symptoms of chemotherapy-induced nausea and vomiting, medical cannabis could improve the quality of life of patients and their families. A reduction in the vomiting and pain associated with chemotherapy may also improve adherence to chemotherapy by cancer patients and result in better patient outcomes. [2,13] Rescheduled cannabis could therefore prove a highly effective anti-emetic for cancer patients and could even improve their prognosis by encouraging adherence to chemotherapy.

Neurological conditions

Cannabis may also be used to lessen or alleviate a range of symptoms associated with MS and other neurological conditions. A Canadian randomised, placebo-controlled trial investigating smoked cannabis for MS-related muscle spasticity found that cannabis was superior to placebo in reducing pain and spasticity. [17] Another Canadian trial of smoked cannabis for chronic neuropathic pain found that inhalation of the trial’s cannabis preparation three times daily for five days reduced pain intensity, improved the quality of sleep, and had minimal adverse effects. [18] The legalisation of medical cannabis could therefore make a significant difference to the lives of the thousands of Australians suffering from MS and many other neurological conditions.

Safety and overdosing

Cannabis may also serve as a remarkably safe alternative to some groups of medications prescribed for pain, such as opiates. The CB1 cannabinoid receptors, located in the brain, are the main receptors responsible for the analgesic effects of cannabis. However, they are present in very low numbers in the regions of the brain stem that co-ordinate cardiovascular and respiratory control. [2,19] This means it is essentially impossible to fatally overdose on cannabis. [5] Opioid receptors, by contrast, are located in this brain stem region, meaning that opioids can interfere with cardiovascular and respiratory function and lead to death. [19,20] Prescription opioid deaths are a small but concerning issue throughout developed nations; for example, in 2005 there were over 1,000 deaths related to prescribed oxycodone in the US. [20] Medical marijuana may offer a safer alternative form of pain relief, as it removes the risk of fatal accidental overdose.

Arguments against legalisation

Cannabis use and psychiatric disorders

A longstanding argument against the medical use of cannabis has been that exposure to cannabis can lead to the development of psychiatric disorders, namely schizophrenia. A Scottish systematic review of eleven studies investigating the link between marijuana use and schizophrenia supported this view, finding that cannabis use did appear to increase the risk of schizophrenia. [21] Another study has shown an association between heavy cannabis use and depression. [22] Further effects of cannabis-induced psychosis can include self-harming and self-mutilating behaviours. [23,24] The relationship between cannabis use and mental health issues appears to be dose-related, with higher levels of marijuana use associated with more severe psychiatric complications. [21,22] Some argue it would be unethical to prescribe a patient cannabis when there is a risk of the patient developing mental illness and potentially harming themselves or others, particularly when other drugs without such adverse effects on mental health and stability are available.

Public safety

With the legalisation of medicinal cannabis come clear public safety concerns, particularly in the areas of vehicle and pedestrian safety. Cannabis affects the brain and can cause disorientation, altered visual perception, hallucinations, sleepiness, and poorer psychomotor control. [5,19] A study conducted in California found that on a given evening, 8.5% of motorists had THC in their system, and that holders of a medical cannabis permit were significantly more likely to test positive for THC than those without a permit. [25] It has also been shown that drivers using cannabis had about three to seven times the risk of being involved in a motor vehicle collision compared with drivers who were not using cannabis. [26] Interestingly, those driving under the influence of alcohol were at a higher risk of collision than those using cannabis. [27] Also of interest, after medical marijuana programs were instituted in the US, traffic fatalities decreased by nine percent across the sixteen states which had programs at the time of the study; this is believed to be due to a substitution of alcohol with marijuana. [27,28] Pedestrians and cyclists who are prescribed cannabis are also at higher risk of being injured in a collision. In terms of non-vehicular injuries, an American study showed cannabis use was associated with an increased risk of injuries from causes such as falls, lacerations, and burns. [29] Hence the legalisation of medical cannabis poses a risk not only to the personal safety of the patient but also to the physical safety of the wider community.

Drug diversion

Given the popularity of cannabis as a recreational drug, there is always the risk of wide-scale drug diversion occurring; that is, people without a prescription for cannabis gaining access to the drug. This can happen in a number of ways, such as a patient sharing their medication with others or selling it. In America, diversion of medical marijuana is an increasing issue, particularly amongst adolescents. A study in the state of Colorado in the USA found that of a group of 164 adolescents at a substance abuse treatment facility, 74% had used someone else’s medical marijuana, doing so a mean of more than 50 times each. [30] Illicit use of cannabis for non-medical purposes exposes people to the damaging physical, mental and social impacts of drug use. There are clear questions surrounding how diversion would be prevented, and in particular how young adults could be prevented from accessing cannabis in this way. It must be asked whether it is ethically responsible to reschedule marijuana given that such a large number of other people, particularly adolescents, would gain access to someone else’s medical cannabis.

Addictiveness and dependence

The legalisation of cannabis as a medication has the potential to cause patients to develop addiction to, and dependence on, the drug. Cannabis dependence is a recognised psychiatric disorder, and it is estimated that over one in ten people who try marijuana will become addicted to it at some point. [31] Although dependence on marijuana may be less common than with drugs like heroin, cocaine and alcohol, users can still face withdrawal symptoms including sleep difficulty, cravings, sweating and irritability. [5,32] Given the potential for patients to become addicted to medical cannabis, with lasting consequences for their personal lives, some argue it is ethically questionable to expose them to the drug in the first place.

Availability of cannabinoid and synthetic cannabinoid drugs

Those opposed to cannabis being rescheduled for medical purposes claim that the availability of cannabinoid and synthetic cannabinoid drugs already in Australia, namely Dronabinol, Nabilone and Nabiximols, makes legalising medical cannabis unnecessary. They state that these drugs contain many of the same compounds as cannabis and can be used to treat many of the same conditions. [13] For example, Dronabinol has been shown to be effective in relieving chronic pain that is not fully relieved by opiates. [33] The cannabinoid medications already available in Australia may therefore provide many of the same therapeutic benefits offered by cannabis, making the rescheduling of cannabis unnecessary.

Carcinogenicity

Whether marijuana smoke causes cancer, in particular lung cancer, is itself a subject of much debate and research. It is well established that tobacco smoke is carcinogenic and deeply damaging to overall health. [34] With marijuana containing many of the same carcinogens as tobacco, and often being smoked in cigarette form, it is not unreasonable to expect similarly adverse results with cannabis use. [5,35] However, a case-control study by Hashibe et al. [36] found no strong relationship between marijuana use, even in heavy amounts, and the incidence of oesophageal, pharyngeal, laryngeal, or lung cancer. Some evidence even suggests that cannabinoids may kill certain cancers, such as gliomas, lymphomas, lung cancer and leukaemia. [37-39] Thus, despite evidence that marijuana smoke contains mutagenic and carcinogenic chemicals, this has not been confirmed epidemiologically. [35,36] Overall, the evidence on the carcinogenic or cancer-fighting properties of marijuana remains unclear and contradictory. More long-term, large-population research should be conducted so that the seemingly contradictory nature of the drug can be better understood. [36]

Recommendation

Cannabis offers exciting possibilities for patients afflicted by cancer, HIV/AIDS, MS, chronic pain and other debilitating conditions. Although medical marijuana programs face several obstacles, the benefits offered by medical marijuana and the positive impact this drug could have on the lives of thousands of patients and their families make a strong case for its consideration. The potential drawbacks can be minimised or even overcome through a number of measures, including close medical supervision of patients (e.g., proper patient education and monitoring), the creation of appropriate infrastructure (e.g., medical marijuana dispensaries, as seen in California) and the creation of laws and policies that not only support medical marijuana patients but also minimise the risk the drug poses to the public (e.g., strict penalties for medical marijuana diversion).

Conflict of interest

None declared.

Correspondence

H Smith: hamish.smith@y7mail.com

References

[1] McKim WA. Drugs and behavior: An introduction to behavioral pharmacology. 4th ed. Upper Saddle River (NJ): Prentice-Hall Publishing; 2000.

[2] Amar MB. Cannabinoids in medicine: A review of their therapeutic potential. J Ethnopharmacol. 2006;105(1-2):1-25.

[3] Galiègue S, Mary S, Marchand J, Dussossoy D, Carrière D, Carayon P, et al. Expression of central and peripheral cannabinoid receptors in human immune tissues and leukocyte subpopulations. Eur J Biochem. 1995;232(1):54-61.

[4] Johnson JR, Burnell-Nugent M, Lossignol D, Gaena-Morton ED, Potts R, Fallon MT. Multicenter, double-blind, randomized, placebo-controlled, parallel-group study of the efficacy, safety, and tolerability of THC:CBD extract and THC extract in patients with intractable cancer-related pain. J Pain Symptom Manage. 2010 Feb;39(2):167-79.

[5] Ashton CH. Pharmacology and effects of cannabis: A brief review. Br J Psychiatry. 2001 Mar;178(2):101-6.

[6] Reinarman C, Nunberg H, Lanthier F, Heddleston T. Who are medical marijuana patients? Population characteristics from nine California assessment clinics. J Psychoactive Drugs. 2011 Jul;43(2):128-35.

[7] National Drugs and Poisons Schedule Committee. Standard for the uniform scheduling of drugs and poisons No. 23. Canberra (ACT): Therapeutic Goods Administration; 2008 Aug. 432 p.

[8] Moulds RF. Drugs and poisons scheduling. Aust Prescr [Internet]. 1997 [cited 2013 Feb 7];20:12-13. Available from: http://www.australianprescriber.com/magazine/20/1/12/3#qa

[9] Swift W, Hall W, Teesson M. Cannabis use and dependence among Australian adults: Results from the National Survey of Mental Health and Wellbeing. Addiction. 2001 May;96(5):737-48.

[10] Marijuana Policy Project (MPP). The eighteen states and one federal district with effective medical marijuana laws. Washington (DC): MPP; 2012 Dec. 19 p.

[11] Australian Institute of Health and Welfare. National Drug Strategy Household Survey 2011. Canberra (ACT): AIHW; 2012.

[12] Mechoulam R, Berry EM, Avraham Y. Endocannabinoids, feeding and suckling: from our perspective. Int J Obes. 2006 Apr;30(1):24-8.

[13] Robson P. Therapeutic aspects of cannabis and cannabinoids. Br J Psychiatry. 2001 Feb;178:107-15.

[14] Foltin RW, Fischman MW, Byrne MF. Effects of smoked marijuana on food intake and body weight of humans living in a residential laboratory. Appetite. 1988 Aug;11(1):1-14.

[15] Sutton IR, Daeninck P. Cannabinoids in the management of intractable chemotherapy-induced nausea and vomiting and cancer-related pain. J Support Oncol. 2006 Nov;4(10):531-5.

[16] Tramèr MR, Carroll D, Campbell FA. Cannabinoids for control of chemotherapy induced nausea and vomiting: Quantitative systematic review. BMJ. 2001 Jul;323(7303):16-21.

[17] Corey-Bloom J, Wolfson T, Gamst A. Smoked cannabis for spasticity in multiple sclerosis: A randomized, placebo-controlled trial. CMAJ. 2012 Jul;184(10):1143-50.

[18] Ware MA, Wang T, Shapiro S. Smoked cannabis for chronic neuropathic pain: A randomized controlled trial. CMAJ. 2010 Oct;182(14):694-701.

[19] Adams IB, Martin BR. Cannabis: Pharmacology and toxicology in animals and humans. Addiction. 1996 Nov;91(11):1585-1614.

[20] Paulozzi LJ, Ryan GW. Opioid analgesics and rates of fatal drug poisoning in the United States. Am J Prev Med. 2006 Dec;31(6):506-11.

[21] Semple DM, McIntosh AM, Lawrie SM. Cannabis as a risk factor for psychosis: Systematic review. J Psychopharmacol. 2005 Mar;19(2):187-94.

[22] Degenhardt L, Hall W, Lynskey M. Exploring the association between cannabis use and depression. Addiction. 2003 Nov;98(11):1493-1504.

[23] Khan MK, Usmani MA, Hanif SA. A case of self amputation of penis by cannabis induced psychosis. J Forensic Leg Med. 2012 Aug;19(6):355-7.

[24] Serafini G, Pompili M, Innamorati M, Rihmer Z, Sher L, Girardi P. Can cannabis increase the suicide risk in psychosis? A critical review. Curr Pharm Des. 2012 Jun;18(32):5165-87.

[25] Johnson MB, Kelley-Baker T, Voas RB, Lacey JH. The prevalence of cannabis-involved driving in California. Drug Alcohol Depend. 2012 Jun;123(1-3):105-9.

[26] Ramaekers JG, Berghaus G, van Laar M, Drummer OH. Dose related risk of motor vehicle crashes after cannabis use. Drug Alcohol Depend. 2004 Feb;73(2):109-19.

[27] Sewell RA, Poling J, Sofuoglu M. The effect of cannabis compared with alcohol on driving. Am J Addict. 2009;18(3):185-93.

[28] Anderson DM, Rees DI. Medical marijuana laws, traffic fatalities, and alcohol consumption. Denver (CO): Institute for the Study of Labor; 2011 Nov. 28 p.

[29] Gerberich SG, Sidney S, Braun BL, Tekawa IS, Tolan KK, Quesenberry CP. Marijuana use and injury resulting in hospitalisation. Ann Epidemiol. 2003 Apr;13(4):230-7.

[30] Salomonsen-Sautel S, Sakai JT, Thurstone C, Corley R, Hopfer C. Medical marijuana use among adolescents in substance abuse treatment. J Am Acad Child Adolesc Psychiatry. 2012 Jul;51(7):694–702.

[31] Hall W, Degenhardt L, Lynskey M. The health and psychological effects of cannabis use. Canberra (ACT): Commonwealth Department of Health and Ageing; 2001.

[32] Budney AJ, Vandrey R, Moore BA, Hughes JR. Comparison of tobacco and marijuana withdrawal. J Subst Abuse Treat. 2008 Dec;35(4):362-68.

[33] Narang S, Gibson D, Wasan AD, Ross EL, Michna E, Nedeljkovic SS, et al. Efficacy of dronabinol as an adjuvant treatment for chronic pain patients on opioid therapy. J Pain. 2008 Mar;9(3):254-64.

[34] Thun MJ, Henley SJ, Calle EE. Tobacco use and cancer: An epidemiologic perspective for geneticists. Oncogene. 2002 Oct;21(48):7307-7325.

[35] Hecht SS, Carmella SG, Murphy SE, Foiles PG, Chung FL. Carcinogen biomarkers related to smoking and upper aerodigestive tract cancer. J Cell Biochem Suppl. 1993 Feb;17(1):27-35.

[36] Hashibe M, Morgenstern H, Cui Y, Tashkin DP, Zhang ZF, Cozen W, et al. Marijuana use and the risk of lung and upper aerodigestive tract cancers: Results of a population-based case-control study. Cancer Epidemiol Biomarkers Prev. 2006 Oct;15(10):1829-34.

[37] Munson AE, Harris LS, Friedman MA, Dewey WL, Carchman RA. Antineoplastic activity of cannabinoids. J Natl Cancer Inst. 1975;55:597-602.

[38] McKallip RJ, Lombard C, Fisher M, Martin BR, Ryu S, Grant S, et al. Targeting CB2 cannabinoid receptors as a novel therapy to treat malignant lymphoblastic disease. Blood. 2002;100:627-34.

[39] Sanchez C, de Ceballos ML, del Pulgar TG, Rueda D, Corbacho C, Velasco G, et al. Inhibition of glioma growth in vivo by selective activation of the CB(2) cannabinoid receptor. Cancer Res. 2001;61:5784-9.


Adult pertussis vaccinations as a preventative method for infant morbidity and mortality

Pertussis, or whooping cough, is a potentially fatal respiratory illness caused by the Bordetella pertussis bacteria. It commonly occurs in infants who have not completed their primary vaccination schedule. [1]

Since 2001, Australia’s coverage rate with the three primary doses of the diphtheria, tetanus and acellular pertussis-containing vaccine (DTPa) at twelve months has been greater than 90%. [2] Despite this high coverage rate, there has been a sharp increase in the incidence of pertussis. In 2008, the Victorian Government received notification of a 56% increase in reported cases (1,644 cases in 2008 compared to 1,054 cases in 2007). That same year, New South Wales also reported over 7,500 cases, more than tripling its 2007 total. [3] Given these startling statistics, we must ask ourselves why we are seeing such a significant rise in the incidence of pertussis.
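
As a quick arithmetic check (ours, not the cited report’s), the 56% figure is consistent with the Victorian case counts above:

\[ \frac{1644 - 1054}{1054} = \frac{590}{1054} \approx 0.56 = 56\% \]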

One well-researched explanation for this increase is that the pertussis vaccine does not confer lifelong immunity. A North American study investigating the effectiveness of the pertussis vaccine found a significant increase in laboratory-confirmed cases of clinical pertussis in children aged eight to thirteen years, correlating with the interval since the end of the preschool vaccinations. [4] Other studies have suggested that immunity can wane anywhere from three to twelve years post-vaccination, creating ambiguity as to when we become susceptible again. [5,6] This uncertainty reflects the current absence of a clear serologic marker that correlates with protection from pertussis. Approximately two years after vaccination, pertussis toxin antibodies have reached minimal levels; however, protection from the disease remains, suggesting that immunity is multifactorial. [5]

Despite this, there is widespread agreement that adults with waning immunity who are in close contact with non-immune infants are a major source of transmission. [6,7] In 2001, a study was published which analysed the source of infection in 140 infants under the age of twelve months who had been hospitalised for pertussis. In the 72 cases where the source of infection could be identified, parents were the source in 53% of cases and siblings accounted for another 22%. [8] An Australian Paediatric Surveillance Unit study of 110 hospitalised infants with pertussis found adults to be the source in 68% of cases, 60% of whom were the parents of the infant in question. [9] Other potential sources that have been identified include grandparents and paediatric health workers. [6]

Since the Global Pertussis Initiative (GPI), an international collaboration, was established in 2001, strategies to decrease the incidence of pertussis have been extensively discussed, with particular emphasis on reducing adult transmission to unprotected infants. [6] In general it has been noted that the control of pertussis requires an increase in immunity in all age groups, especially adults. [10] Although the GPI agrees that universal adult vaccination would be an effective strategy to protect non-immune infants, it would be too difficult to implement. [2,8] Furthermore, we must be aware that the success of herd immunity depends upon the level of population coverage and also the degree of contact between the infected and non-immune infants. [11]
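
The coverage required for herd protection can be illustrated with the standard herd immunity threshold formula (a textbook approximation, not drawn from the cited studies): for a pathogen with basic reproduction number \(R_0\), the critical immune proportion of the population \(p_c\) is

\[ p_c = 1 - \frac{1}{R_0} \]

With \(R_0\) for pertussis commonly estimated at around 12 to 17, \(p_c\) falls between roughly \(1 - 1/12 \approx 92\%\) and \(1 - 1/17 \approx 94\%\), which helps explain why waning adult immunity can undermine herd protection even when infant coverage exceeds 90%.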

Due to the difficulties with implementing universal adult vaccination, more targeted vaccination strategies have been proposed. [10] The concept of a ‘cocoon’ strategy, in which adults in close contact with unprotected infants are given booster vaccinations, [11] has been trialled throughout Australia in various forms. [12] This strategy is simpler to implement, as new parents and family members are easier to access via their contact with health services and are motivated to protect their children. [6] Moreover, because of this motivation, it may be reasonable to assume new parents would be willing to pay for this vaccine out of their own pockets, reducing the economic burden of the increased use of vaccines on our health system.

One model has suggested routine adult vaccination every ten years from the age of 20 years, combined with the ‘cocoon’ strategy of vaccination, would best reduce the rate of infant pertussis infections. However, to date there are no clinical data confirming this strategy to be effective. [11] Furthermore, this particular model is unlikely to receive public funding due to the large expense required.

Another strategy, recently recommended by the Advisory Committee on Immunisation Practices (ACIP), is that of implementing maternal vaccinations. The ACIP reviewed data in 2011 that showed preliminary evidence that there were no adverse effects after the administration of the pertussis vaccine to pregnant women. This strategy would significantly reduce the risk of infection to infants before they were even born. [13]

As one can see, the question of how to increase immunity in our community is complex, given that current strategies are expensive and difficult to implement. As infant deaths from pertussis are easily avoidable, developing effective preventive strategies should be of high priority.

Conflict of interest

None declared.

Correspondence

T Trigger: talia.trigger@my.jcu.edu.au

References

[1] World Health Organization. Pertussis vaccines: WHO position paper. Wkly Epidemiol Rec. 2010; 85(40): 385-400.
[2] Chuk LR, Lambert SB, May ML, Beard F, Sloots T, Selvey C et al. Pertussis in infants: how to protect the vulnerable. Commun Dis Intell. 2008; 32(4): 449-455.
[3] Fielding J, Simpson K, Heinrich-Morrison K, Lynch P, Hill M, Moloney M et al. Investigation of a sharp increase in notified cases of pertussis in Victoria during 2008. Victorian Infectious Diseases Bulletin. 2009; 12(2): 38-42.
[4] Witt MA, Katz PH, Witt DJ. Unexpectedly limited durability of immunity following acellular pertussis vaccination in pre-adolescents in a North American outbreak. Clin Infect Dis. 2012; 54(12): 1730-1735.
[5] Wendelboe AM, Van Rie A, Salmaso S, Englund J. Duration of immunity against pertussis after natural infection or vaccination. Pediatr Infect Dis J. 2005; 24(5).
[6] Forsyth KD, Campins-Marti M, Caro J, Cherry J, Greenberg D, Guiso N et al. New pertussis vaccination strategies beyond infancy: recommendations by the Global Pertussis Initiative. Clin Infect Dis. 2004; 39: 1802-1809.
[7] Spratling R, Carmon M. Pertussis: An overview of the disease, immunization, and trends for nurses. Pediatr Nurs. 2010; 36(5): 239-243.
[8] Jardine A, Conaty SJ, Lowbridge C, Staff M, Vally H. Who gives pertussis to infants? Source of infection for laboratory confirmed cases less than 12 months of age during an epidemic, Sydney, 2009. Commun Dis Intell. 2010; 34(2): 116-121.
[9] Wood N, Quinn HE, McIntyre P, Elliott E. Pertussis in infants: preventing deaths and hospitalisations in the very young. J Paediatr Child Health. 2008; 44(4): 161-165.
[10] Hewlett EL, Edwards KM. Pertussis – not just for kids. N Engl J Med. 2005; 352(12): 1215-1223.
[11] McIntyre P, Wood N. Pertussis in early infancy: disease burden and preventive strategies. Curr Opin Infect Dis. 2009; 22: 215-223.
[12] Australian Government Department of Health and Ageing. Pertussis. Australian Immunisation Handbook 9th edition [Internet]. 2008 [cited 2013 Feb 19]; 227-239. Available from: http://www.immunise.health.gov.au/internet/immunise/publishing.nsf/Content/23041983E698DFB7CA2574E2000F9A05/$File/3.14%20Pertussis.pdf
[13] Advisory Committee on Immunization Practices (ACIP). Updated recommendations for use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine (Tdap) in pregnant women and persons who have or anticipate having close contact with an infant aged <12 months. MMWR Morb Mortal Wkly Rep. 2011; 60(41): 1424-1426.


Thought the ‘bed shortage’ was bad, until the ‘surgeon shortage’ came along

“Make up your mind how many doctors a community needs to keep it well. Do not register more or less than this number.” George Bernard Shaw

If you have ever had the opportunity of finding yourself in a surgical theatre, the last thing you want on your mind are doubts about the person holding the scalpel. To ensure the highest professional standards are maintained, trainees of the Royal Australasian College of Surgeons (RACS) undergo a rigorous five- to six-year postgraduate training program prior to final qualification as a surgical consultant. [1] However, such a long and demanding training program has proven to be a double-edged sword for the surgical speciality. One study showed that one in four surgeons planned to retire in the next five years and that only sixteen percent of surgeons were under 40 years old. [2] The same study demonstrated that the average retirement age for surgeons has decreased by ten years. [2] These factors place an immense amount of pressure on surgical training programs, particularly in an era where the ageing population is creating more demand for surgical services. [2] While workforce shortage issues are by no means unique to the RACS, and indeed are felt by many medical colleges across Australia, this editorial will focus on the RACS to illustrate the issues affecting a broad range of medical specialities.

Along with many medical colleges around Australia, the RACS faces a looming workforce crisis: an ageing workforce approaching retirement and an ageing population with increasing healthcare needs combine to create critical demand for scarce services. The 2011 annual report published by the RACS highlighted that the number of first-year surgical trainees across all specialties was 246, [3] compared to the 3,000+ medical students graduating around the country each year. While this represents a relatively small fraction of the available workforce pool, the RACS has taken the initiative to increase the number of surgical trainee positions by twelve percent compared to 2010. [3] Despite these gains, the RACS estimates that at least another 80 surgeons will have to graduate each year, in addition to the 184 new surgeons currently graduating annually, in order to begin to redress the surgeon workforce shortage. [4,5]

Low trainee numbers reflect many factors, including financial limitations, the need for skilled supervision and the opportunity for practical experience. [6] The public sector has reached its full capacity for surgical training posts, as such posts are funded by the state governments and are hence limited by budget provisions. [5] Consequently, underfunding, a chronic shortage of nursing staff and a lack of resources in public hospitals are seen as some of the main reasons for extended waiting times for surgery. [7] Due to this lack of resources, it is now common to see surgical lists being limited or procedures being cancelled because of time constraints. [7] Increasing the number of trainee posts will require significant fundamental changes, namely greater resourcing of the public health system. [6] To avoid the looming workforce crisis, governments will have to move quickly to ensure adequate training posts are in place across all medical specialties. [3,5] In Australia, more than 60% of elective surgery is performed in the private sector. [5] Novel training opportunities, such as those offered by the private sector, should also be considered, as clinicians with the appropriate range and depth of experience required to train junior doctors are not limited to the public sector. [5] Lack of resources, funding and safe working hours, together with reduced clinical exposure, all add to this looming workforce shortage. [6,8]

While there is a compelling argument to expand the number of trainee positions around Australia, the challenge is to maintain the highest standards for surgical trainees. [7] The number of training positions created is a priority for any college and is crucial to offering quality treatment in both public and private hospitals. [7] However, increasing the number of trainees to cope with the surgeon shortage might result in reduced individual theatre time, which is not acceptable. [4,7] While this may relieve the workforce shortage, it would only create more specialists with limited exposure to a wide range of surgical presentations. [7] The aim of surgical training is to ensure that trainees progress through an integrated program that provides them with increasing professional responsibility under appropriate supervision. [9] This not only ensures exceptional quality but also enables trainees to acquire the competencies needed to perform independently as qualified surgeons. There are concerns, nonetheless, that a large intake of surgical trainees may favour ‘quantity’ of trained surgeons over ‘quality’. [7] This is unacceptable, not only for the safety of our patients, but also in a world of increasing medico-legal implications and litigation. [7]

Another challenge affecting the surgical profession and surgical trainees is the issue of safe working hours. Surgeons currently report working an average of 60 hours per week, excluding an average of 25 hours per week spent on call. [5] Although safe working hours are less of an issue in Australia than in the rest of the world, they still affect surgical training. [10] The safety and wellbeing of surgical trainees is a top priority of the RACS. [7] Reduced trainee hours have been encouraged by research showing that doctor fatigue compromises patient care, as well as by awareness that fatigue hampers learning. [10] The long hours traditionally worked by surgeons raise concerns regarding safe working hours, and the next generation of surgeons may well seek an enhanced work-life balance. [4,7] Adding to the ominous shortage of surgeons, the question remains whether surgical trainees can assimilate the necessary clinical experience within a reduced timeframe. [7] More and more trainees place increased emphasis on work-life balance, [5] making alternative specialisation pathways an option that many consider.

Many, if not all, of the issues felt by the RACS across Australia are magnified in rural Australia. Rural general surgery, much like its general practice counterpart, is facing an impending workforce crisis. [11] Despite increasing urbanisation, approximately 25% of Australians still live in rural Australia, [12] and it is this portion of the population that is likely to be the first and worst affected by any further constriction in medical workforce numbers. Single- or two-surgeon practices provide services to many rural and remote centres. [11] However, in many areas where surgical services could be supported, no trainee surgeon is available. [11] Many current rural surgeons are also fast approaching retirement age. [11] In past years, retention of surgeons in rural communities has been strong. [13] The combined lifestyle benefits, challenges and rewards have ensured that a large number of rural surgeons grow old in the country. [13] However, this may well be a thing of the past. [13] Younger surgeons are more likely to weigh time off on call, annual leave and privacy as lifestyle considerations that draw them back towards metropolitan areas. [13] Such a shift in attitude towards limiting one’s workload, combined with the continuing decline of Australian rural practices, will place additional pressure on the rural surgical workforce in the near future. [11]

Two main factors make a trainee surgeon more likely to pursue a rural career: exposure to good-quality rural terms as an undergraduate, and a rural background. [11,13] Doctors from a country background are more likely to select rural posts and to return to, and remain in, rural practice. [12,13] Acknowledging this, many Australian medical schools have now incorporated both mandatory and voluntary rural terms as part of their curricula. [11] In addition to these undergraduate initiatives, ongoing rural placements during postgraduate years may need to be established and given greater prominence. [11] Allocating a trainee to the same rural location over a period of years increases the possibility of the trainee settling there after training. [13] This may be due to familiarity with the social and cultural setting as well as the desire to provide continuous care for their patients. [13] As a result of these undergraduate and postgraduate initiatives, we can expect the next generation of advanced surgical trainees, with a foundation of rural experience, to undertake rural terms as an accepted and expected component of their general surgery training. [11,13] These trainees may then choose to settle in the same rural locations following training, thus decreasing the rural surgeon shortage.

The aim of surgical training is to ensure that trainees progress through an integrated program that provides them with increasing professional responsibility under appropriate supervision. [8] This enables them to acquire the competencies needed to perform independently as qualified surgeons. [9] The RACS has taken major steps to address its workforce shortage. Continuing efforts to provide for trainees and their needs are given place of prominence in the RACS 2011-2015 strategic plan. The RACS’ role in monitoring, coordinating, planning and provisioning of services, as well as obtaining adequate funding for surgical training programs, remains a major responsibility of the College. Emphasis on rural rotations at an undergraduate and early postgraduate level, consideration of the work-life balance of both trainees and surgeons and sufficient staffing of theatres, will help eradicate the surgeon shortage whilst ensuring that the finest surgical education and care is available to Australians into the future.

Conflict of interest

None declared.

Correspondence

J Goonawardena: j.goonawardena@amsj.org

References

[1] The College of Physicians and Surgeons of Ontario. Tackling the Doctor Shortage. Ontario: CPSO; 2004. p. 5

[2] Surgeon shortage looms. The Hobart Mercury. 2006 Mar 22:26.

[3] Royal Australasian College of Surgeons. Annual report 2010. Melbourne: RACS; 2011. p. 9.

[4] Royal Australasian College of Surgeons. Surgeons warn of looming workforce crisis [media release]. 2011 Oct 7. Available from: http://www.surgeons.org/media/293538/MED_2011-10-07_Surgeons_warn_of_looming_workforce_crisis.pdf

[5] Royal Australasian College of Surgeons. Surgical workforce projection to 2025 (for Australia). Melbourne: RACS; 2011. p. 8-57.

[6] Amott DH, Hanney RM. The training of the next generation of surgeons in Australia. Ann R Coll Surg Engl 2006; 88:320–322.

[7] Berney CR. Maintaining adequate surgical training in a time of doctor shortages and working time restriction. ANZ J Surg. 2011; 81:495–499.

[8] Australian Medical Association Limited. States and territories must stop passing the buck on surgical training [media release]. 2005 Apr 5. Available from: http://ama.com.au/node/1966

[9] Hillis DJ. Managing the complexity of change in postgraduate surgical education and training. ANZ J Surg. 2009; 79: 208–213.

[10] O’Grady G, Loveday B, Harper S, Adams B, Civil ID, Peters M. Working hours and roster structures of surgical trainees in Australia and New Zealand. ANZ J Surg. 2010; 80: 890–895.

[11] Bruening MH, Anthony AA, Madern GJ. Surgical rotations in provincial South Australia: The trainees’ perspective. ANZ J Surg. 2003; 73: 65-68.

[12] Green A. Maintaining surgical standards beyond the city in Australia. ANZ J Surg. 2003; 73: 232-233.

[13] Kiroff G. Training, retraining and retaining rural general surgeons. Aust N Z J Surg. 1999; 69: 413-414.


Dengue fever in a rural hospital: Issues concerning transmission

Introduction: Dengue is either endemic or epidemic in almost every country in the tropics. Within northern Australia, dengue occurs in epidemics; however, the Aedes aegypti vector is widespread in the area and there is thus a threat that dengue may become endemic in future years. Case presentation: An 18-year-old male was admitted to a rural north Queensland hospital with a provisional diagnosis of dengue fever. No specific consideration was given to the risk that this patient posed to other patients, including a 56-year-old male with chronic myeloid leukaemia and prior exposure to dengue. Discussion: Much media and public attention has been given to dengue transmission in the scope of vector control in the community. Hospital-based patient-to-patient transmission of dengue requires consideration so as to minimise unnecessary morbidity and mortality. Vector control within the hospital setting appears to be an appropriate preventative measure in the context of the presented case. Transfusion- and transplantation-related transmission of dengue between patients are important considerations. Vertical dengue infection is also noted to be possible. Conclusion: Numerous changes in the management of dengue-infected patients can be made that are economically feasible. Education of healthcare workers is essential to ensure the safety of all patients admitted to hospitals in dengue-affected areas. Bed management in particular is one area that may benefit from increased attention.

Introduction

Dengue is diagnosed annually in more than 50 million people worldwide and represents one of the most important arthropod-borne viral infections. [1-4] Estimates suggest that the potentially lethal complication of dengue haemorrhagic fever occurs in 500,000 people and that an alarming 24,000 deaths result from infection annually. [1,2,4] Coupled with the increasing frequency and severity of outbreaks in recent years, dengue has been identified as a major and escalating public health concern. [2,4,5]

Whilst most of the burden of dengue occurs in developing countries, northern Australia is known to have epidemics. Suggestions have been made that dengue may become endemic in this region in future years, based on increasing migration, international travel, population growth, climate change and the widespread presence of vectors. [6-12] The vast majority of studies have focused on vector control in the community setting. [2,4,5,9] The purpose of this report is to discuss the risks of transmission of dengue in a hospital setting, in particular patient-to-patient transmission. Transmission of dengue in a hospital is important to consider, as the immunological responses and health status of hospitalised patients can be poor. Inadequate management of dengue-infected patients may ultimately threaten the lives and complicate the treatment of other patients, creating unnecessary economic costs and demands on healthcare. [12-14]

This case report highlights the difficulties of handling a suspected dengue-infected patient from the perspective of an Australian rural hospital. Recommendations are made to improve management of such patients, in particular, embracing technological advancements including digital medical records that are likely to become available in future years.

Case report

An 18-year-old male, patient 1, presented to a rural north Queensland hospital emergency department with a four-day history of fever, generalised myalgia and headache. He resided in an area that was known to be in the midst of a dengue outbreak. He had no past medical or surgical history and had never travelled. On examination, the patient’s tympanic temperature was 38.9°C and he had dry mucous membranes. No rash was observed and no other abnormal findings were noted. Blood was taken for laboratory investigations, including dengue PCR and dengue serology. He was admitted for observation and given intravenous fluids. A provisional diagnosis of dengue fever was made.

The patient was subsequently placed in a room with four beds. Two of the beds were unoccupied; the remaining bed was occupied by patient 2, a 56-year-old male with chronic myeloid leukaemia (CML), who had been hospitalised the previous day with a lower respiratory tract infection. Patient 2’s medical history was notable for an episode of dengue fever five years earlier, following an overseas holiday.

The patient with presumed dengue fever remained febrile for two days. He walked around the ward and went outside for cigarettes. He also opened the room window, which was unscreened. Tests subsequently confirmed that he had a dengue viral infection.

Whilst no dengue transmission occurred, the incident raised a number of issues for consideration, as no concerns regarding transmission were raised by staff or by either patient.

Discussion

The dengue viruses are single positive-stranded RNA viruses belonging to the Flaviviridae family, with four distinct serotypes described. [4,12] Infection can range from asymptomatic, to a mild viral syndrome associated with fever, malaise, headache, myalgia and rash, to a severe presentation characterised by haemorrhage and shock. [3,9] The immunopathogenesis of severe dengue infection, which occurs in less than 5 percent of infections and includes the dengue haemorrhagic fever and shock syndromes, is currently poorly defined. [2,3]

Whilst primary infection in the young and well nourished has been associated with the development of severe infection, the major aetiology of severe infection is thought to be secondary infection with a different serotype. [3,9] This has been hypothesised to result from an antibody-mediated enhancement reaction, although authors also suggest that other factors are likely to contribute. [3,4,9] Untreated dengue haemorrhagic fever is characterised by increased capillary permeability and haemostatic changes and has a mortality rate of 10-20 percent. [2,3,5] This complication can further deteriorate into dengue shock syndrome. [3] Whilst research shows that the serious complications of dengue infection occur mainly in children, adults with asthma, diabetes and other chronic diseases may be at increased risk, and secondary dengue infections could be life-threatening in these groups. [4,5,15]

The most commonly reported route of infection is via the bite of an infected Aedes mosquito, primarily Aedes aegypti. [2-14] This vector feeds during the day, prefers human blood and breeds in close proximity to humans. [5,12,13] The transmission of dengue has been widely reported in the urban setting and has a geographical distribution covering more than 100 countries. [3,13] However, only one study has reported dengue vector transmission from within a hospital. [16] Kularatne et al. (2007) described a dengue outbreak that started within a hospital in Sri Lanka and was unique in that a building site next to the hospital provided breeding sites for mosquitoes. [16] Dengue infection was noted to cause significant cardiac dysfunction, and of particular note was that medical students, nurses, doctors and other hospital employees were the main targets. [16] The authors report that in the initial outbreak one medical student died of shock and severe pulmonary oedema as a result of acute viral myocarditis. [16] This case highlights the risk of dengue transmission within a hospital setting.

In addition to vector-borne transmission, dengue can also be transmitted by other routes, including transfusion. [17,18] The incidence of blood transfusion-associated dengue infection has primarily been investigated in endemic countries. In one study conducted in Hong Kong by Chuang et al. (2008), the prevalence of this mode of transmission was 1 in 126. [17] Whilst rare in Australia, an investigation undertaken during the 2004 outbreak in Cairns, Queensland estimated the risk of transfusion-related dengue infection by mathematical modelling and reported the risk of collecting a viraemic donation as 1 in 1,028 persons during the course of the epidemic. [18] Donations from the affected areas were not used for transfusion. [18]
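
One simple way such a risk can be approximated (an illustrative formulation only; the cited study’s actual model may differ) is to treat the probability that a randomly timed donation is viraemic as the fraction of person-time spent viraemic during the epidemic:

\[ P(\text{viraemic donation}) \approx \frac{c \times d}{N \times T} \]

where \(c\) is the number of infections over the epidemic, \(d\) is the mean duration of undetected viraemia in days, \(N\) is the donor population, and \(T\) is the length of the epidemic in days. Published models refine this with, for example, the proportion of infections that remain asymptomatic and the deferral of symptomatic donors.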

Case reports have also demonstrated that transplantation can represent a route of dengue infection between hospitalised patients. [19,20] Rigau-Pérez and Laufer (2006) described a six-year-old child who developed fever four days post-bone marrow transplantation and subsequently died. [19] Dengue virus was isolated from the blood and tissues of the child, and the donor was subsequently found to have become febrile, with tests for dengue proving positive. [19] Dengue infection resulting from solid organ transplantation has also been described in a 23-year-old male with end-stage renal failure. [20] The donor of the transplanted kidney had dengue fever six months prior to the transplant, and the recipient developed dengue fever five days postoperatively. [20] The recipient had a complicated recovery and required an emergency laparotomy and blood products to ensure survival. [20] The authors of this case report note that the patient resided in a dengue-endemic region, and therefore the usual mode of infection could not be excluded. [20]

Whilst not applicable to the presented case, vertical transmission of dengue has also been noted to be an important consideration in hospitalised patients. Reports from endemic countries have suggested that transmission can occur if infection of the mother occurs within eight days of delivery. [9,21] One neonatal death has been reported as a result of dengue infection, and a number of studies have reported peripartum complications requiring medical treatment in other neonates. [21,22] These findings should be interpreted with caution given the cited difficulties in the clinical diagnosis of dengue in neonates; it is possible that vertical transmission is underreported. [22]

Taking into account the reported case and the presented evidence, it is clear that patient 1 posed a risk to patient 2. It is essential to acknowledge that dengue transmission can occur within a hospital setting. Whilst only one study has reported vector transmission of dengue within a hospital, it demonstrates the real possibility of transmission where close contact and a competent vector coincide. [16] It must also be emphasised that patient 1 walked outside the hospital on numerous occasions and that unscreened windows were open within the hospital ward room. Consequently, patient-to-patient dengue infection would have been possible not only for patient 2, but also for other admitted patients. Additionally, healthcare workers and community members living in the area surrounding the hospital were at risk.

Acknowledging that vector transmission within a hospital is the most important hazard with regard to patient-to-patient transmission of dengue, numerous control measures can be implemented to decrease the risk. Infrastructure planning within hospitals is important, as screened windows would decrease the ability of mosquitoes to enter hospitals. In hospitals where such changes may not be economically feasible, studies have reported that having patients spend as much time as possible under insecticide-treated mosquito nets, limiting outdoor time for infected patients, wearing protective clothing and applying insecticide several times throughout the day may decrease the possibility of dengue infection within hospitals. [23-25]

Educational programs for healthcare professionals and patients also warrant consideration. Numerous programs have been established, primarily in the developing world, and have proven beneficial. [26,27] It is important to create innovative education programs aimed at those healthcare workers who care for suspected dengue-infected patients, as well as at members of the public. This is one area that needs to be explored in future years.

Additionally, this case demonstrates that current protocols in bed management do not consider a past medical history of dengue infection when assigning patients to beds. This report draws attention to the importance of identifying those patients at risk of secondary infection with dengue. As electronic patient records are implemented in many countries throughout the world, a past history of confirmed dengue infection needs to be considered; a simple illustration of such a check is sketched below. This may mean that, when resources are available, such patients are not placed in the same room, thereby avoiding unnecessarily placing patients at risk. Whilst this would not completely exclude the possibility of dengue transmission in a hospital, it may set the trend for improved infection control protocols, particularly as secondary infection is associated with poorer outcomes. [2-5,9]
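
In this minimal sketch (the record fields, rule and function names are hypothetical illustrations, not an existing system’s API), an electronic bed-assignment check flags at-risk roommates when a suspected dengue patient is placed:

```python
# Hypothetical sketch: flag risky room assignments for suspected dengue admissions.
# The fields suspected_dengue, prior_dengue and immunocompromised are illustrative,
# not drawn from any real hospital record system.

from dataclasses import dataclass
from typing import List


@dataclass
class Patient:
    identifier: str
    suspected_dengue: bool = False
    prior_dengue: bool = False        # confirmed past dengue infection on record
    immunocompromised: bool = False


def assignment_warnings(new_patient: Patient, roommates: List[Patient]) -> List[str]:
    """Return warnings raised by placing new_patient in a room with roommates."""
    warnings = []
    if new_patient.suspected_dengue:
        for roommate in roommates:
            if roommate.prior_dengue:
                # Secondary infection with a different serotype carries the highest
                # risk of dengue haemorrhagic fever and dengue shock syndrome.
                warnings.append(f"{roommate.identifier}: prior dengue on record "
                                "(secondary-infection risk)")
            if roommate.immunocompromised:
                warnings.append(f"{roommate.identifier}: immunocompromised")
    return warnings


# Example mirroring the reported case: patient 1 (suspected dengue) is assigned
# to a room already occupied by patient 2 (CML, prior dengue).
patient1 = Patient("patient 1", suspected_dengue=True)
patient2 = Patient("patient 2", prior_dengue=True, immunocompromised=True)
for warning in assignment_warnings(patient1, [patient2]):
    print("WARNING:", warning)
```

Such a rule would only be as reliable as the past-history data recorded against each patient, and would sit alongside, not replace, existing infection control measures.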

Conclusion

Infection control efforts are often targeted at tertiary referral centres. This report highlights the importance of appreciating infection control within a rural setting. Dengue transmission between patients is a possibility, with the available evidence suggesting that this is most likely to occur through exposure of an infected individual to a competent vector. Numerous changes have the potential to decrease the likelihood of dengue infection. Healthcare worker education is a critical component of these changes, so that suspected dengue-infected patients may in turn be educated regarding the risk they represent to members of the public. Screened windows, insecticide-treated mosquito nets, and patient measures such as wearing protective clothing and applying insect repellent are all preventative measures that need to be considered. Future research is likely to develop technological aids for appropriate bed assignment. Together, these measures will help avoid unnecessary morbidity and mortality associated with dengue infection.

Consent declaration

Informed consent was obtained from the patients for publication of this report.

Conflict of interest

None declared.

Correspondence

R Smith: ross.smith@my.jcu.edu.au

 

Categories
Review Articles Articles

Seasonal influenza vaccination in antenatal women: Views of health care workers and barriers in the delivery of the vaccine

Background: Pregnant women are at an increased risk of developing influenza. The National Health and Medical Research Council recommends seasonal influenza vaccination for all pregnant women who will be in their second or third trimester during the influenza season. The aim of this review is to explore the views of health care workers regarding seasonal influenza vaccination in antenatal women and to describe the barriers to delivery of the vaccine. Methods: A literature search was conducted using MEDLINE for the terms: “influenza,” “pregnancy,” “antenatal,” “vaccinations,” “recommendations,” “attitudes,” “knowledge” and “opinions”. The review describes findings of publications concerning the inactivated influenza vaccination only, which has been proven safe and is widely recommended. Results: No studies have addressed the knowledge and attitudes of Australian primary health care providers towards influenza vaccination, despite their essential role in immunisation in Australia. Overseas studies indicate that the factors contributing to low vaccination rates are: 1) a lack of general knowledge of influenza and its prevention amongst health care workers (HCWs); 2) variable opinions and attitudes regarding the vaccine; 3) a lack of awareness of the national guidelines; and 4) a lack of discussion of the vaccine by HCWs. Lack of maternal knowledge regarding the safety of the vaccine and the cost-burden of the vaccine are significant barriers to vaccine uptake. Conclusion: Insufficient attention has been given to the topic of influenza vaccination in pregnancy. Significant efforts are required in Australia to obtain data about the rates of influenza vaccination of pregnant women.

Introduction

Seasonal influenza results in annual epidemics of respiratory disease. Influenza epidemics and pandemics increase hospitalisation rates and mortality, particularly among the elderly and high-risk patients with underlying conditions. [1-3] All pregnant women are at an increased risk of developing influenza due to progressive suppression of Th1 cell-mediated immunity and other physiological changes, with morbidity peaking towards the end of pregnancy. [4-7]

Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications. [8] Trivalent inactivated influenza vaccine (TIV) has been proven safe and is recommended for persons aged ≥6 months, including those with high-risk conditions such as pregnancy. [8-10] A randomised controlled study in Bangladesh demonstrated that TIV administered in the third trimester of pregnancy reduced maternal respiratory illness and infant influenza infection. [11,12] Another randomised controlled trial showed that influenza immunisation of pregnant women reduced influenza-like illness by more than 30% in both the mothers and their infants, and reduced laboratory-proven influenza infection in infants aged 0-6 months by 63%. [13]

The current Australian immunisation guidelines recommend routine administration of influenza vaccination for all pregnant women who will be in the second or third trimester during the influenza season, including those in the first trimester at the time of vaccination. [4,14,15] The seasonal influenza vaccination has been available free of charge to all pregnant women in Australia since 2010. [4] However, the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) statement on ‘Pre-pregnancy Counselling and Routine Antenatal Assessment in the Absence of Pregnancy Complications’ does not explicitly mention routine delivery of influenza vaccination to healthy pregnant women. [16] RANZCOG recently published a college statement on swine influenza vaccination during pregnancy, advising that pregnant women without complications or a recent travel history must weigh the risk-benefit ratio before deciding whether to receive the H1N1 influenza immunisation. [17] It is therefore evident that there is conflicting advice in Australia about the routine delivery of influenza vaccination to healthy pregnant women. In contrast, a firm recommendation for routine influenza vaccination of pregnant women was established in 2007 by the National Advisory Committee on Immunization (NACI) in Canada, with minimal conflict from the Society of Obstetricians and Gynaecologists of Canada (SOGC). [6] Following the 1957 influenza pandemic, the rate of influenza immunisation in the United States increased significantly, with more than 100,000 women receiving the vaccination annually between 1959 and 1965. [8] Since 2004, the Advisory Committee on Immunization Practices (ACIP) in the United States has recommended influenza vaccination for all pregnant women, at any stage of gestation. [9] This is supported by the American College of Obstetricians and Gynecologists’ Committee on Obstetric Practice. [18]

A literature review by Skowronski et al. (2009) found that TIV is warranted to protect women against influenza-related hospitalisation during the second half of normal pregnancy, but that evidence is otherwise insufficient to recommend routine TIV as the standard of practice for all healthy women from early pregnancy. [6] Similarly, another review examined the evidence for the risks of influenza and the risks and benefits of seasonal influenza vaccination in pregnancy, and concluded that data on influenza vaccine safety in pregnancy are inadequate. [19] However, based on the available literature, there was no evidence of serious side effects in women or their infants, including no indication of harm from vaccination in the first trimester. [19]

We aim to review the literature published on the delivery and uptake of influenza vaccination during pregnancy and to identify the reasons for low adherence to guidelines. The review will increase our understanding of how the influenza vaccination is perceived by health care providers and by pregnant women.

Evidence of health care provider’s attitude, knowledge and opinions

Several published studies have revealed deficits in health care providers’ knowledge of the significance of the vaccine and of the national guidelines, suggesting a low rate of vaccine recommendation and, consequently, low uptake by pregnant women. [20] A 2006 research project performed a cross-sectional study of knowledge and attitudes towards influenza vaccination in pregnancy amongst all levels of health care workers (HCWs) at the Department for Health of Women and Children at the University of Milan, Italy. [20] A strength of this study was that it included 740 HCWs: 48.4% working in obstetrics/gynaecology, 17.6% in neonatology and 34% in paediatrics, of whom 282 (38.1%) were physicians, 319 (43.1%) nurses, and 139 (18.8%) paramedics (health aides/healthcare assistants). The respondents were given a pilot-tested questionnaire about their perception of the seriousness of influenza, their general knowledge of influenza recommendations and preventive measures, and their personal use of influenza vaccination, which was to be self-completed in 20 minutes in an isolated room. Descriptive analysis of the 707 (95.6%) HCWs who completed the questionnaire revealed that the majority (83.6%) of HCWs in obstetrics/gynaecology never recommended the influenza vaccination to healthy pregnant women. Esposito et al. (2007) highlighted that, in each speciality, only a small number of nurses and paramedics regarded influenza as serious in comparison to the physicians. [20] Another study, investigating the practices of midwives, found that only 37% believed the influenza vaccine to be effective and 22% believed that the vaccine posed a greater risk than influenza itself. [21] The results from these studies clearly indicate deficiencies in the general knowledge of influenza and its prevention amongst health care staff.

In contrast, a study by Wu et al. (2006) suggested an unusually high vaccination recommendation rate among fellows of the American College of Obstetricians and Gynecologists (ACOG) living and practicing in Nashville, Tennessee. [22] The survey focussed on physician knowledge, practices, and opinions regarding influenza vaccination of pregnant women. Results revealed that 89% of practitioners routinely recommended the vaccine to pregnant women and 73% actually administered the vaccination to pregnant and postpartum women. [22] Sixty-two percent responded that the vaccine should be administered no earlier than the second trimester, while 32% reported that it should be offered in the first trimester. Interestingly, 6% believed that it should not be delivered at all during pregnancy. Despite the national recommendation to administer the vaccination routinely to all pregnant women, [4] more than half of the obstetricians preferred to withhold it until the second trimester due to concerns regarding vaccine safety, an association with spontaneous abortion, and the possibility of disruption to embryogenesis. [22] Despite the high uptake rate identified, this study has a few major limitations. First, the researchers excluded family physicians and midwives practicing obstetrics from their survey, which prevents a true representation of the sample population. Second, the vaccination rates were self-reported by the practitioners and not validated, increasing the likelihood of reporting bias.

It is evident that HCWs attending to pregnant women and children have limited and frequently incorrect beliefs concerning influenza and its prevention. [20,23] A study by Tong et al. (2008) demonstrated that only 40% of the health care providers at the three Toronto hospitals studied were aware of the high-risk status of pregnant women, and only 65% were aware of the NACI recommendations. [23] Furthermore, obstetricians were less likely than family physicians to indicate that it was their responsibility to discuss, recommend, or provide influenza vaccination. [23] Tong et al. (2008) also demonstrated that high levels of provider knowledge about influenza and maternal vaccination, positive attitudes towards influenza vaccination, increased age, being a family physician, and having been vaccinated against influenza were all associated with recommending the influenza vaccine to pregnant women. [23] These data are also supported by Wu et al. and Esposito et al.

Silverman et al. (2001) concluded that physicians were more likely to recommend the vaccine if they were aware of the current Centers for Disease Control and Prevention guidelines, gave vaccinations in their offices, and had been vaccinated against influenza themselves. [24] Similarly, Lee et al. (2005) showed that midwives who had received the immunisation themselves and firmly believed in its benefits were more likely to offer it to pregnant women. [21] Wallis et al. (2006) conducted a multisite interventional study, involving educational sessions with physicians and the use of “Think Flu Vaccine” notes on active obstetric charts, which demonstrated a fifteen-fold increase in the rate of influenza vaccination in pregnancy. [25] This study also showed that the increase in uptake was greater in family practices than in obstetric practices, and greater in small practices than in large practices.

Overall, the literature here is derived mostly from American and Canadian studies, as there are no data available for Australia. Existing data suggest a significant lack of understanding of influenza vaccine safety, benefits and recommendations amongst HCWs. [20-27] These factors may lead to incorrect assumptions and infrequent vaccine delivery.

Barriers in delivering the influenza vaccinations to pregnant women

Aside from the gaps in health care providers’ understanding of vaccine safety and national guidelines, several other barriers to delivering the influenza vaccine to pregnant women have been identified. A study published in 2009, based on CDC analysis of data from the Pregnancy Risk Assessment and Monitoring System in Georgia and Rhode Island over the period 2004-2007, showed that the most common reasons for not receiving the vaccination were “I don’t normally get the flu vaccination” (69.4%) and “my physician did not mention anything about a flu vaccine during my pregnancy” (44.5%). [28] Lack of maternal knowledge about the benefits of the influenza vaccination has also been demonstrated by Yudin et al. (2009), who conducted a cross-sectional in-hospital survey of 100 postpartum women during the influenza season in downtown Toronto. [29] This study found that 90% of women incorrectly believed that pregnant women have the same risk of complications as non-pregnant women, and 80% incorrectly believed that the vaccine may cause birth defects. [29] Another study highlighted that 48% of physicians listed patient refusal as a barrier to administering the vaccine. [22] These results were supported by Wallis et al. (2006), which focused on using simple interventions, such as chart reminders, to surmount the gaps in women’s knowledge. [25] ‘Missed opportunities’ by obstetricians and family physicians to offer the vaccination have been suggested as a major obstacle to the delivery of the influenza vaccination during pregnancy. [14,23,25,28]

During influenza season, hospitalised pregnant women with respiratory illness have significantly longer lengths of stay and higher odds of delivery complications than hospitalised pregnant women without respiratory illness. [5] In some countries, the cost-burden of the vaccine to women is another major barrier contributing to lower vaccination rates among pregnant women. [22] This is not an issue in Australia, where the vaccination is free for all pregnant women; provision of free vaccination is likely to be a significant advantage when considering the cost-burden of influenza on the health-care sector. The cost-burden on the patient can also be viewed as a lack of access: Shavell et al. (2012) reported that patients who lacked insurance and transportation were less likely to receive the vaccine. [30]

Several studies have shown that the vaccine is comparatively cost-effective when considering the financial burden of influenza-related morbidity. [31] A 2006 study based on decision-analysis modelling revealed that a vaccination rate of 100% in pregnant women would save approximately 50 dollars per woman, resulting in a net gain of approximately 45 quality-adjusted hours relative to providing supportive care alone. [32] Beigi et al. (2009) demonstrated that maternal influenza vaccination using either the single- or two-dose strategy is a cost-effective approach when influenza prevalence is 7.5% and influenza-attributable mortality is 1.05%. [32] As the prevalence of influenza and/or the severity of the outbreak increases, the incremental value of vaccination also increases. [32] Moreover, a 2006 study demonstrated the cost-effectiveness to the health sector of single-dose influenza vaccination for influenza-like illness. [31] Therefore, in nations where the vaccine is not free, patient education about its relative cost-effectiveness and adequate reimbursement by the government are required to alleviate this barrier.
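For readers unfamiliar with how such decision-analysis models summarise value, the standard metric is the incremental cost-effectiveness ratio (ICER). The expression below is a generic sketch using hypothetical symbols, not figures drawn from the cited studies:

\[ \text{ICER} = \frac{C_{\text{vaccination}} - C_{\text{supportive care}}}{E_{\text{vaccination}} - E_{\text{supportive care}}} \]

where C denotes the total expected cost of each strategy and E its expected health effect (for example, quality-adjusted life time). A strategy that both saves money and gains quality-adjusted time, as the modelling above suggests for vaccination, is said to dominate the comparator.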

Lack of vaccine storage facilities in physicians’ offices is an important barrier to the recommendation and uptake of the vaccine by pregnant women. [23,33] A recent study monitoring immunisation practices amongst practicing obstetricians found that less than 30% store the influenza vaccine in their office. [18] Another study, in which 448 eligible pregnant women were offered the influenza vaccine at a routine prenatal visit because storage facilities were available at the practice, showed an acceptance rate of 71%. [34] This suggests that vaccine uptake can be increased simply by overcoming logistical and organisational barriers such as vaccine storage, inadequate reimbursement and patient education.

Conclusion

From the limited data available, it is clear that there is a variable level of knowledge of influenza and its prevention amongst HCWs, and a general lack of awareness of the national guidelines in their countries. However, there is no Australian literature to compare with other nations. There is some debate regarding the trimester in which the vaccine should be administered, and a further lack of clarity about who is responsible for the discussion and delivery of the vaccine: the general practitioner or the obstetrician. These factors contribute to a lack of discussion of vaccine use and amplify the number of ‘missed opportunities.’

Lack of maternal knowledge about the safety of the vaccine and its benefits is also a barrier that must be overcome by the HCW through facilitating an effective discussion about the vaccine. Since the vaccine has been made free in Australia, cost should not prevent vaccination. Regular supply and storage of vaccines, especially in remote towns of Australia, is likely to remain a logistical challenge.

There is limited Australian literature exploring the uptake of influenza vaccine in pregnancy and the contributing factors such as the knowledge, attitude and opinion of HCWs, maternal knowledge of the vaccine and logistical barriers. A reasonable first step would be to determine the rates of uptake and prevalence of influenza vaccination in antenatal women in Australia.

Conflict of interest

None declared.

Correspondence

S Khosla: surabhi.khosla@my.jcu.edu.au

 

Categories
Review Articles Articles

The therapeutic potentials of cannabis in the treatment of neuropathic pain and issues surrounding its dependence

Cannabis is a promising therapeutic agent that may be particularly beneficial in providing adequate analgesia to patients with neuropathic pain intractable to typical pharmacotherapy. Cannabinoids are the lipid-soluble compounds that mediate the analgesic effects associated with cannabis by interacting with the endogenous cannabinoid receptors CB1 and CB2, which are distributed along neurons associated with pain transmission. Of the 60 different cannabinoids found in cannabis plants, delta-9-tetrahydrocannabinol (THC) and cannabidiol are the most important with regard to analgesic properties. Whilst cannabinoids are effective in diminishing pain responses, their therapeutic use is limited by psychotropic side effects mediated via CB1, which may lead to cannabis dependence. Cannabinoid ligands also interact with glycine receptors, bind selectively to CB2 receptors, and act synergistically with opioids and non-steroidal anti-inflammatory drugs (NSAIDs) to attenuate pain signals; these actions may be of therapeutic potential because they lack psychotropic effects. Clinical trials of cannabinoids in neuropathic pain have shown efficacy in providing analgesia; however, the small number of participants involved in these trials has greatly limited their significance. Although the medicinal use of cannabis is legal in Canada and some parts of the United States, its use as a therapeutic agent in Australia is not permitted. This paper will review the role cannabinoids play in providing analgesia, the pharmacokinetics associated with various routes of administration, and the dependence issues that may arise from its use.

Introduction

Compounds in plants have long been found to be beneficial, and now contribute to many of the world’s modern medicines. Delta-9-tetrahydrocannabinol (THC), the main psychoactive cannabinoid derived from cannabis plants, mediates its analgesic effects by acting at both central and peripheral cannabinoid receptors. [1] The analgesic properties of cannabis were first observed by Ernest Dixon in 1899, who discovered that dogs failed to react to pin pricks following the inhalation of cannabis smoke. [2] Since that time, there has been extensive research into the analgesic properties of cannabis, including whole-plant and synthetic cannabinoid studies. [3-5]

Although the use of medicinal cannabis is legal in Canada and parts of the United States, every Australian jurisdiction currently prohibits its use. [6] Despite this, Australians lead the world in the illegal use of cannabis for both medicinal and recreational reasons. [7]

Although the analgesic properties of cannabis could be beneficial in treating neuropathic pain, the use of cannabis in Australia is a controversial, widely debated subject. The issue of dependence to cannabis arising from medicinal cannabis use is of concern to both medical and legal authorities. This review aims to discuss the pharmacology of cannabinoids as it relates to analgesia, and also the dependence issues that may arise from the use of cannabis.

Medicinal cannabis may be of particular benefit in the treatment of neuropathic pain that is intractable to the typical agents used, such as tricyclic antidepressants, anticonvulsants and opioids. [3,8] Neuropathic pain is a disease of the somatosensory nervous system that causes pain unrelated to peripheral tissue injury, and treatment options are limited. The prevalence of chronic pain in Australia has been estimated at 20% of the population, [9] with neuropathic pain estimated to affect up to 7% of the population. [10]

The role of cannabinoids in analgesia

Active compounds found in cannabis

Cannabis contains over 60 cannabinoids, with THC being the quintessential mediator of analgesia and the principal psychoactive constituent of cannabis plants. [11] Another cannabinoid, cannabidiol, also has analgesic properties; however, instead of interacting with cannabinoid receptors, its analgesic properties are attributed to inhibition of anandamide degradation. [11] Anandamide is the most abundant endogenous cannabinoid in the CNS and acts as an agonist at cannabinoid receptors. Inhibiting the breakdown of anandamide prolongs its time in the synapse and perpetuates its analgesic effects.

Cannabinoid and Vanilloid receptors

Distributed throughout the nociceptive pathway, cannabinoid receptors are a potential target for the administration of exogenous cannabinoids to suppress pain. Two known types of cannabinoid receptors, CB1 and CB2, are involved in pain transmission. [12] The CB1 receptor is highly expressed in the CNS as well as in peripheral tissues, and is responsible for the psychotropic effects produced by cannabis. There is debate regarding the location of the CB2 receptor, previously thought to be largely confined to peripheral immune cells; recent studies suggest that CB2 receptors may also be found on neurons. [12,13] The cannabinoid receptors are metabotropic G-protein coupled receptors, negatively coupled to adenylate cyclase and positively coupled to mitogen-activated protein kinase. [14] They are also coupled to inhibition of pre-synaptic voltage-gated calcium channels and activation of inward-rectifying potassium channels, thus depressing neuronal excitability, eliciting an inhibitory effect on neurotransmitter release and subsequently decreasing pain transmission. [14]

Certain cannabinoids mediate their analgesic properties through targets other than cannabinoid receptors. Cannabidiol can act at vanilloid receptors, where capsaicin is active, to produce analgesia. [15] Recent studies in mice have found that administered cannabinoids act synergistically with glycine, an inhibitory neurotransmitter, which may contribute to their analgesic effects. Analgesia was absent in mice lacking glycine receptors, but not in those lacking cannabinoid receptors, indicating an important role of glycine in the analgesic effect of cannabis. [16] In this study, modifications were made to the cannabinoid compound to enhance binding to glycine receptors and diminish binding to cannabinoid receptors, an approach that may be of therapeutic potential for achieving analgesia without psychotropic side effects. [16]

Mechanism of action in producing analgesia and side effects

Cannabinoid receptors also play an important role in the descending inhibitory pathways via the midbrain periaqueductal grey (PAG) and the rostral ventromedial medulla (RVM). [17] Pain signals are conveyed by primary afferent nociceptive fibres along ascending pain pathways that synapse in the dorsal horn of the spinal cord. The descending inhibitory pathway, via the PAG and RVM, modulates pain transmission in the spinal cord and medullary dorsal horn before noxious stimuli reach a supraspinal level and are interpreted as pain. [17] Cannabinoids activate the descending inhibitory pathway via gamma-aminobutyric acid (GABA)-mediated disinhibition, decreasing GABAergic inhibition and enhancing the impulses responsible for the inhibition of pain; this is similar to opioid-mediated analgesia. [17]

Cannabinoid receptors, in particular CB1, are distributed throughout the cortex, hippocampus, amygdala, basal ganglia outflow tracts and cerebellum, which corresponds to the capacity of cannabis to produce motor and cognitive impairment. [18] These deleterious side effects limit the therapeutic use of cannabinoids as analgesics. Since ligands binding to CB1 receptors are responsible for mediating the psychotropic effects of cannabis, studies have examined the effectiveness of CB2 agonists, which were found to attenuate neuropathic pain without producing CB1-mediated CNS side effects. The discovery of a suitable CB2 agonist may therefore be of therapeutic potential. [19]

Synergism with commonly used analgesics

Cannabinoids also act synergistically with non-steroidal anti-inflammatory drugs (NSAIDs) and opioids to produce analgesia; cannabis could thus be of benefit as an adjuvant to typical analgesics. [20] A major central target of NSAIDs and opioids is the descending inhibitory pathway. [20] The analgesia produced by NSAIDs through their action on this pathway requires simultaneous activation of the CB1 receptor. Cannabinoids remain effective analgesics in the presence of an opioid antagonist; thus, whilst cannabinoids do not act via opioid receptors, cannabinoids and opioids show synergistic activity. [20] Correspondingly, Telleria-Diaz et al. reported that the analgesic effects of non-opioid analgesics, primarily indomethacin, in the spinal cord can be prevented by a CB1 receptor antagonist, highlighting synergism between the two classes of agent. [21] Although no controlled studies in pain management have used cannabinoids with opioids, anecdotal evidence suggests synergistic benefits in analgesia, particularly in patients with neuropathic pain. [20] Whilst the interaction between opioids, NSAIDs and cannabinoids is poorly understood, numerous studies suggest that they act synergistically in the PAG and RVM via GABA-mediated disinhibition to enhance the descending flow of impulses that inhibit pain transmission. [20]

Route of Administration

Clinical trials of cannabis as an analgesic in neuropathic pain have shown it to reduce the intensity of pain. [5,22] The most common administration of medicinal cannabis is inhalation via smoking. Two randomised clinical trials of smoked cannabis showed that patients with HIV-associated neuropathic pain achieved significantly greater reductions in pain intensity (34% and 46%) compared to placebo (17% and 18% respectively). [5,22] One of the studies enrolled participants whose pain was intractable to first-line analgesics used in neuropathic pain, such as tricyclic antidepressants and anticonvulsants. [22] The number needed to treat (NNT = 3.5) was comparable to agents already in use (gabapentin: NNT = 3.8; lamotrigine: NNT = 5.4). [22] All of the studies undertaken on smoked cannabis have been short-term studies and do not address the long-term risks of cannabis smoking. An important benefit associated with smoking cannabis is that its pharmacokinetic profile is superior to that of orally ingested cannabinoids. [23] After smoking one cannabis cigarette, peak plasma levels of THC are reached within 3-10 minutes and, due to its lipid solubility, levels quickly decrease as THC is rapidly distributed throughout the tissues. [23] While the bioavailability of inhaled THC is much higher than that of oral preparations, owing to first-pass metabolism, the obvious harmful effects associated with smoking have warranted the study of other means of inhalation, such as vapourisation. In medicinal cannabis therapy, vapourisation may be less harmful than smoking, as the cannabis is heated below the point of combustion at which carcinogens are formed. [24] A recent study found that the transition from smoking to vapourising in cannabis smokers improved lung function measurements; following the study, participants refused to participate in a reverse design in which they would return to smoking. [24]
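For readers unfamiliar with the measure, the number needed to treat is the reciprocal of the absolute risk reduction between treatment and placebo. As a rough illustrative check only (this assumes the quoted percentages can be read as responder proportions, which the trials may define differently):

\[ \text{NNT} = \frac{1}{p_{\text{treatment}} - p_{\text{placebo}}} \approx \frac{1}{0.46 - 0.18} \approx 3.6 \]

which is consistent with the reported NNT of 3.5.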

Studies on the efficacy of an oro-mucosal cannabinoid preparation (Sativex) showed a 30% reduction in pain as opposed to placebo, with an NNT of 8.6. [4] Studies comparing an oral cannabinoid preparation (nabilone) to dihydrocodeine in neuropathic pain found that dihydrocodeine was the more effective analgesic. [25] The effects of THC from ingested cannabinoids last 4-12 hours, with a peak plasma concentration at 2-3 hours. [26] The effects of oral cannabinoids are variable due to first-pass metabolism, in which significant amounts of cannabinoids are metabolised by cytochrome P450 mixed-function oxidases, mainly CYP2C9. [26] First-pass metabolism is very high, and the bioavailability of THC is only 6% for ingested cannabis, as opposed to 20% for inhaled cannabis. [26] The elimination of cannabinoids occurs via the faeces (65%) and urine (25%), with one clinical study showing that 90% of the total dose was excreted after five days. [26]
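The gap between the oral and inhaled figures reflects first-pass extraction. As a simplified sketch (assuming gut absorption and hepatic extraction are the dominant terms, and using generic symbols rather than measured values):

\[ F_{\text{oral}} \approx f_{\text{abs}} \times (1 - E_{\text{hepatic}}) \]

where f_abs is the fraction of the dose absorbed from the gut and E_hepatic is the fraction extracted by the liver before the drug reaches the systemic circulation. Extensive CYP-mediated hepatic extraction is what holds oral bioavailability near the 6% figure quoted above, whereas inhaled THC bypasses this step.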

The issue of cannabis dependence

One of the barriers to the use of medicinal cannabis is the controversy regarding cannabis dependence and the adverse effects associated with chronic use. Cannabis dependence is a controversial but important topic, as dependence may increase the risk of adverse effects associated with chronic use. [27] Adverse effects resulting from long-term use of cannabis include short-term memory impairment, mental health problems and, if smoked, respiratory diseases. [28] Some authors report that cannabis dependence and the subsequent adverse effects upon cessation are observed only in non-medical cannabis users; others report that dependence is an issue for all cannabis users, whether use is for medicinal purposes or not. An Australian study assessing cannabis use and dependence found that one in 50 Australians had a DSM-IV cannabis use disorder, predominately cannabis dependence. [27] It also found that cannabis dependence was the third most common lifetime substance dependence diagnosis, following tobacco and alcohol dependence. [27] Cannabis dependence can therefore develop; however, the risk factors for dependence come predominantly from studies involving recreational users, as opposed to medicinal users under medical supervision. [29]

A diagnosis of cannabis dependence, according to DSM-IV, is made when three of the following seven criteria are met within the last twelve months: tolerance; withdrawal symptoms; cannabis used in larger amounts or for a longer period than intended; persistent desire or unsuccessful efforts to reduce or cease use; a disproportionate amount of time spent obtaining, using and recovering from use; social, recreational or occupational activities reduced or given up due to cannabis use; and use continued despite knowledge of physical or psychological problems induced by cannabis. [29] Understanding of cannabis dependence arising from medicinal use is limited by the lack of studies in this context. Behavioural therapies may be of use; however, their efficacy is variable. [30] A recent clinical trial indicated that orally-administered THC was effective in alleviating cannabis withdrawal, analogous to other well-established agonist therapies such as nicotine replacement and methadone. [30]

Pharmacokinetic profiles also affect cannabis dependence. Studies suggest that the risk of dependence is marginally greater with the oral use of isolated THC than with the oral use of combined THC-cannabidiol. [31] This is important because whole cannabis plants contain many cannabinoids, and cannabidiol may counteract some of the adverse effects of THC; however, more studies are required to support this claim. [31]

The risk of cannabis dependence in the context of long-term, supervised medical use is not known. [31] However, some authors believe that the pharmacokinetic profiles of preparations used for medicinal purposes differ from those used recreationally, and that the risks of dependence and chronic adverse effects therefore differ greatly between the two. [32]

Conclusion

Cannabis appears to be an effective analgesic and provides an alternative to the analgesic pharmacotherapies currently used for the treatment of neuropathic pain; it may be of particular use in neuropathic pain that is intractable to other pharmacotherapy. Dependence and adverse side effects arising from medicinal cannabis use, including short-term memory impairment, mental health problems and, if smoked, respiratory diseases, remain highly debated topics, and more research needs to be undertaken. The ability of cannabinoids to modulate pain transmission by enhancing the activity of descending inhibitory pathways, and to act synergistically with opioids and NSAIDs, is important as it may decrease the therapeutic doses of opioids and NSAIDs required, thus decreasing the likelihood of side effects. The possibility of a cannabinoid-derived compound with analgesic properties free of psychotropic effects is appealing, and its discovery could lead to a less controversial and more suitable analgesic in the future.

Conflict of interest

None declared.

Correspondence

S Sargent: stephaniesargent@mail.com


Categories
Feature Articles Articles

Bring back the white coats?

Should we bring back the white coat? Is it time for this once-venerated symbol of medicine to re-establish itself amongst a new generation of fledgling practitioners? Or, is this icon of medical apparel nothing more than a potentially dangerous relic of a bygone era?

Introduction

The white coat has long been a symbol of the medical profession, dating back to the late 1800s. [1] It was adopted as medical thought became more scientific. [2] Doctors wore coats to align themselves with the scientists of the day, who commonly wore beige; doctors instead chose white – the colour lacking both hue and shade – to represent purity and cleanliness. [3] Nowadays, the white coat is rarely seen in hospitals, possibly due to suspicions that it may function as a vector for the transmission of nosocomial infections. [4] This article addresses the validity of such concerns by reviewing the available literature.

The vanishing white coat

Twenty years ago in the United Kingdom (UK), white coats were commonly worn by junior doctors, while consultants wore suits. [5] The choice not to wear a white coat was seen as a display of autonomous, high-ranking professionalism. [6] Many older Australian nurses recall when doctors commonly wore white coats in the hospital, yet over the last decade white coats have become a rarity in Australian hospitals. [7,8] There are many reasons why this change occurred; Table 1 outlines some common reasons doctors give. Paediatricians and psychiatrists stopped using white coats because they thought the coats created communication barriers in the doctor-patient relationship. [3] Society viewed white coats as a status symbol, [7] evoking an omnipotent disposition, which was deemed inappropriate. [6,7] In addition, it was thought white coats might be a vector for nosocomial infection. [6,9-13] With these pertinent issues, and no official policy requiring white coats, doctors gradually hung them up.

Table 1. Reasons why doctors choose to wear, or not to wear, white coats

 

Reasons why doctors wear white coats:
For identification purposes [8]
To carry things [14]
Hygiene [7,8]
To protect clothes [8]
To create a psychological barrier [3]
Patients prefer doctors in white coats [14]
Looks professional [8,14]

Reasons why doctors do not wear white coats:
No one else does [8]
Infection risk [5,8,14]
Hot or uncomfortable [5,8,14]
Interferes with the doctor-patient relationship [6,14]
Lack of seniority [5]

 

Hospital policies and white coats

In 2007 the British Department of Health published guidelines for healthcare worker uniforms that banned the white coat from hospitals in England, [15] producing a passionate controversy. [4] The primary reason for the ban was to decrease healthcare-acquired infections, [9,12,16] supposedly supported by one of two Thames Valley University literature reviews. [6,13] Interestingly, these reviews stated there was no evidence to support the notion that clothing or specific uniforms could be a noteworthy medium for the spread of infections. [6,13] On closer inspection, the British policy itself states: “it seems unlikely that uniforms are a significant source of cross-infection.” [15] The text goes on to support the new uniform guidelines, including the abolition of the white coat, because “the general public’s perception is that uniforms pose an infection risk when worn inside and outside clinical settings.” [6] This statement lacks evidence: many studies show patients prefer their doctors to wear white coats, [7,14,17] and concerns among patients about infection risk are uncommon. [7] It would appear that the British Department of Health made this decision for reasons other than compulsion by the evidence.

Despite significant discussion and debate, the United States (US) has chosen not to follow England in banning the white coat. [3,12,18] The US has a strong tradition associated with the white coat, which may explain its reluctance to abandon it so quickly. In 1993, the ‘white coat ceremony’ was launched in the US, in which incoming medical students are robed in a white coat as senior doctors ‘demonstrate their belief in the student’s ability to carry on the noble tradition of doctoring.’ [1] Only five years later, 93 US medical schools had adopted this practice, [1] indicating that the white coat is a real source of pride for doctors in the US. Tradition alone, however, cannot dictate hospital policies. In 2009, the American Medical Association (AMA) passed a resolution encouraging the “adoption of hospital guidelines for dress codes that minimise transmission of nosocomial infections.” [19] Rather than banning white coats, [16] the AMA proposed the need for more research, noting that there was insufficient evidence of an increased risk of nosocomial infection directly related to their use. [18]

The Australian Government National Health and Medical Research Council (NHMRC) published the Australian Guidelines for the Prevention and Control of Infection in Healthcare in 2010, outlining evidence-based recommendations for the implementation of infection control in Australian hospitals and other areas of healthcare. [20] The guidelines state that uniforms should be laundered daily, whether at home or at the hospital, and that the literature has not shown a need to ban white coats or other uniforms, as there is no evidence that they increase the transmission of nosocomial infections. [20] Notably, these guidelines cite the same review that the British Department of Health used in support of banning white coats. [6]

The evidence of white coats and nosocomial infection

Few studies have assessed whether white coats are a potential source of infection. [9-12] Analysis of the limited data paints a uniform picture: the possibility of white coats spreading infection is minimal.

In 1991, a study of 100 UK doctors demonstrated that no pathogenic organisms could be cultured from their white coats. [10] Notably, this study also found that the level of bacterial contamination of white coats did not vary with the amount of time the coat had been worn, but did vary with the amount of use. [10] The definition of usage was not included in the article, although doctor-patient contact time is the most likely interpretation. Similarly, a study in 2000 isolated no methicillin-resistant Staphylococcus aureus (MRSA) or other infective organisms, but still concluded that the white coat was a possible cause of infection. [11] This study stated that white coats were not to be used as a substitute for personal protective equipment (PPE) and recommended that they be removed before putting on plastic aprons. [11]

A more recent study isolated MRSA from 4% of the white coats of medical participants; although it was the biggest study of its kind, the sample size was insufficient to demonstrate a statistically significant difference between colonised and uncolonised coats. [9] The study was also limited in that it did not compare contamination of white coats with that of ordinary clinical dress, a comparison which could potentially show no difference. There appeared to be a correlation between MRSA contamination and hospital laundering, with four of the six contaminated coats being hospital-laundered. [9] A major contributing factor to the contamination of white coats could be the infrequency with which they are washed: a survey within the same 2009 study showed that 81% of participants had not washed their coats for more than seven days, and 17% for more than 28 days. [9] Even though the 1991 study showed that usage, not time, was the determinant of bacterial load, this does not negate a high amount of usage over a long period of time. [10]

In response to the British hospital uniform guidelines, a Colorado study published in April 2011 compared the degree and rate of bacterial contamination of a traditional, infrequently-washed, long-sleeved white coat with that of a newly-cleaned, short-sleeved uniform. [12] The conclusions were unexpected: after eight hours of wear, there was no difference in the degree of contamination between the two. Additionally, there was no difference in the extent of bacterial or MRSA contamination of the physicians’ cuffs. Consequently, the study does not discourage the wearing of long-sleeved white coats [12] and concludes that there is no evidence for their abolition on infection control grounds.

While all these studies indicate that organisms capable of causing nosocomial infections can be present on white coats, [10-12] the common conclusion is that daily-washed white coats pose no higher infection risk than any other clinical attire. [12] It must be recognised that these studies comparing attire and nosocomial infection contain many confounding factors, so more studies are needed to establish guidelines for evidence-based practice on this issue. Understanding differences in transmission rates between specialities could assist in implementing speciality-specific infection control practices. Studies that clearly establish transmission of organisms from uniform to patient, and clinical data on the frequency of such transmissions, would be beneficial in developing policy. Additionally, nationwide hospital reviews of nosocomial infection rates, compared against the dress of doctors and nurses, would contribute to a more complete understanding of the role that uniforms play in the transmission of disease.

Australian hospitals and white coats

Surprisingly, the Queensland state infection control guidelines published by the Centre for Healthcare Related Infection Surveillance and Prevention (CHRISP) contain no recommendations regarding the dress of doctors. [21] State guidelines like these, in combination with federal guidelines, influence the policies that each individual hospital in Australia creates and implements.

A small sample of hospitals across all the states and territories of Australia was canvassed to assess general attitudes towards the wearing of white coats during patient contact and whether these attitudes were evidence-based. I contacted the infection control officers of each hospital and obtained the specifics of their policies, along with any knowledge of white coats being worn by students or staff. This data was collected verbally. There are obvious limitations to this crude method of data collection, which arose from attempting to obtain data that is not formally recorded.

On the whole, individual hospital policies emulated the national guidelines almost exactly: rather than banning white coats, they encouraged daily washing, as for normal dress. Some hospitals had mandatory ‘bare-below-the-elbows’ and ‘no lanyard’ policies, while many did not. White coats were worn in a significant number of Australian hospitals, usually by senior consultants and medical students (see Table 2). The general response from infection control officers regarding the wearing of white coats was negative, presumably due to the long sleeves and the knowledge that the coats are probably not being washed daily. [10,12]

Table 2. Relevant policies regarding white coats, and whether white coats are worn, in major Australian centres.

Hospital | Policy regarding white coats | White coat worn
Townsville Hospital | No policy | An Emergency Department doctor and surgeon
Mater Hospital – Townsville | No policy | Nil known
Royal Brisbane and Prince Charles – Metro North* | No policy | Medical students
Brisbane Princess Alexandra and Queen Elizabeth 2 – Metro South* | No policy | Medical students; one consultant who requires his medical students to wear white coats
Royal Darwin Hospital | Sleeves to be rolled up | Nil known
Royal Melbourne Hospital | No policy | Nil known
Royal Prince Alfred Hospital – Sydney | No policy | Senior doctors, occasionally
Royal Hobart Hospital | No policy | Nil known
Royal Adelaide Hospital | No policy | Orthopaedics, gynaecologists and medical students
Royal Perth Hospital | Sleeves to be rolled up | Only known to be worn by one doctor
Canberra Hospital | Sleeves to be rolled up |

*All the hospitals in the northern metropolitan region of Brisbane are governed by the same policy, likewise for Metro South.

The table shows that white coats are not extinct in Australian hospitals and that the policies pertaining to white coats reflect the federal guidelines, while policies regarding lanyards, ties and long sleeves differed between hospitals. It is encouraging that Australia has not followed in the footsteps of England regarding the abolition of white coats, as there is limited scientific evidence to support such a decision. Australian policies require daily laundering of white coats, although current literature even queries the necessity of this. [12] The negative image of white coats among infection control officers is probably influenced by the literature showing that white coats become contaminated. [9] The real question, however, is whether contamination of white coats differs from that of other clinical wear.

Meditations of a medical student

My own views…

I have worn a white coat on numerous occasions, during dissections and laboratory experiments, but never when in contact with patients. According to the James Cook University School of Medicine dress policy, all medical students are to wear ‘clean, tidy and appropriate’ clinical dress. [22] No detail is included regarding sleeve length, colour or style; however, social norms are a very powerful force, and the main reason that my colleagues and I do not wear white coats is simply that no one else is wearing them. This practice is consistent with a study of what Australian junior doctors think of white coats. [8]

Personally, I think a white coat would be quite useful. It may even decrease nosocomial infection: its big pockets could carry books and instruments, negating the need for a shoulder bag or for putting items down in patients’ rooms, both of which are potential cross-infection risks. In regards to the effects on patients, I think the psychological impact would differ for each individual. White coats are not the cause of the nosocomial infections that are rampant in our hospitals; the cause lies in the compliance of health professionals with hand washing and with the evidence-based guidelines provided by infection control organisations. In Australia these guidelines give us the freedom to wear white coats, so why not?

Conclusion

White coats are a symbol of the medical profession and date back to the beginnings of evidence-based medicine. It is fitting, then, to let the evidence shape the policies regarding the wearing and laundering of white coats in hospitals and medical practice. There has been much debate over whether white coats increase the risk of nosocomial infection, [3,4,12,16,18] as many studies have shown that white coats carry infectious bacteria. [9-12] More notably, however, a study published in April 2011 showed that the bacterial loads on infrequently-washed white coats did not differ from those on newly-cleaned short-sleeved shirts. [12] Why Britain decided to ban white coats in 2007 remains a mystery. Australia has not banned white coats, and some practitioners still choose to wear them, though it is far from the norm. [8] According to the current evidence, a nationwide, formal re-introduction of white coats into Australian medical schools would face no opposition on infection control grounds. “…Might not the time be right to rediscover the white coats as a symbol of our purpose and pride as a profession?” [1]

Conflict of interest

None declared.

Acknowledgements

Thank you to Sonya Stopar for her assistance in editing this article.

Correspondence

S Fraser: sara.fraser@my.jcu.edu.au

References

[1] Van Der Weyden MB. White coats and the medical profession. Med J Aust. 2001;174.
[2] Blumhagen DW. The doctor’s white coat: The image of the physician in modern America. Ann Intern Med. 1979;91(1):111-6.
[3] Ellis O. The return of the white coat? BMJ Careers [serial on the Internet]. 2010 Sep 1; [cited 2012 October 10]. Available from: http://careers.bmj.com/careers/advice/view-article.html?id=20001364.
[4] Kerr C. Ditch that white coat. CMAJ. 2008;178(9):1127.
[5] Sundeep S, Allen KD. An audit of the dress code for hospital medical staff. J Hosp Infect. 2006;64(1):92-3.
[6] Loveday HP, Wilson JA, Hoffman PN, Pratt RJ. Public perception and the social and microbiological significance of uniforms in the prevention and control of healthcare-associated infections: An evidence review. British J Infect Control. 2007;8(4):10-21.
[7] Harnett PR. Should doctors wear white coats? Med J Aust. 2001;174:343-4.
[8] Watson DAR, Chapman KE. What do Australian junior doctors think of white coats? Med Ed. 2002;36(12):1209-13.
[9] Treakle AM, Thom KA, Furuno JP, Strauss SM, Harris AD, Perencevich EN. Bacterial contamination of health care workers’ white coats. Am J Infect Control. 2009;37(2):101-5.
[10] Wong D, Nye K, Hollis P. Microbial flora on doctors’ white coats. BMJ. 1991;303(6817):1602-4.
[11] Loh W, Ng VV, Holton J. Bacterial flora on the white coats of medical students. J Hosp Infect. 2000;45(1):65-8.
[12] Burden M, Cervantes L, Weed D, Keniston A, Price CS, Albert RK. Newly cleaned physician uniforms and infrequently washed white coats have similar rates of bacterial contamination after an 8-hour workday: A randomized controlled trial. J Hosp Med. 2011;6(4):177-82.
[13] Wilson JA, Loveday HP, Hoffman PN, Pratt RJ. Uniform: An evidence review of the microbiological significance of uniforms and uniform policy in the prevention and control of healthcare-associated infections. Report to the department of health (England). J Hosp Infect. 2007;66(4):301-7.
[14] Douse J, Derrett-Smith E, Dheda K, Dilworth JP. Should doctors wear white coats? Postgrad Med J. 2004;80(943):284-6.
[15] Jacob G. Uniforms and workwear: An evidence base for developing local policy [monograph on the Internet]. Leeds, England: Department of Health; 2007 [cited 2012 Oct 10]. Available from: http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_078435.pdf.
[16] Sweeney M. White coats may not carry an increased infection risk. [monograph on the Internet]. Cambridge, England: Cambridge Medicine Journal; 2011 [cited 2012 Oct 10]. Available from: http://www.cambridgemedicine.org/news/1298329618.
[17] Gherardi G, Cameron J, West A, Crossley M. Are we dressed to impress? A descriptive survey assessing patients’ preference of doctors’ attire in the hospital setting. Clin Med. 2009;9(6):519-24.
[18] Henderson J. The endangered white coat. Clin Infect Dis. 2010;50(7):1073-4.
[19] American Medical Association [homepage on the Internet]. Chicago: Board of Trustees. c2010. Reports of the Boards of Trustees. p31-3. Available from: http://www.ama-assn.org/resources/doc/hod/a-10-bot-reports.pdf.
[20] National Health and Medical Research Council. [homepage on the Internet]. Australia; Australian guidelines for the prevention and control of infection in healthcare. 2010 [cited 2012 Oct 10]. Available from: http://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/cd33_complete.pdf.
[21] Centre for healthcare related infection surveillance and prevention. [homepage on Internet]. Brisbane; [updated 2012 October; cited 2012 Oct 10]. Available from: http://www.health.qld.gov.au/chrisp/.
[22] James Cook University School of Medicine and Dentistry [homepage on the Internet]. Townsville. 2012 [cited 2012 Oct 10]. Clothing; [1 screen]. Available from: https://learnjcu.jcu.edu.au/webapps/portal/frameset.jsp?tab_tab_group_id=_253_1&url=/webapps/blackboard/execute/courseMain?course_id=_18740_1

Categories
Review Articles Articles

A biological explanation for depression: The role of interleukin-6 in the aetiology and pathogenesis of depression and its clinical implications

Depression is one of the most common health problems addressed by general practitioners in Australia. It is well known that biological, psychosocial and environmental factors play a role in the aetiology of depression. Research into the possible biological mechanisms of depression has identified interleukin-6 (IL-6) as a potential biological correlate of depressive behaviour, with proposed contributions to the aetiology and pathogenesis of depression. Interleukin-6 is a key proinflammatory cytokine involved in the acute phase of the immune response and a potent activator of the hypothalamic-pituitary-adrenal axis. Patients with depression have higher than average concentrations of IL-6 compared to non-depressed controls, and a dose-response correlation may exist between circulating IL-6 concentration and the degree of depressive symptoms. Based on these insights, the ‘cytokine theory of depression’ proposes that proinflammatory cytokines, such as IL-6, act as neuromodulators and may mediate some of the behavioural and neurochemical features of depression. Longitudinal and case-control studies across a wide variety of patient cohorts, disease states and clinical settings provide evidence for a bidirectional relationship between IL-6 and depression. Thus IL-6 represents a potential biological intermediary and therapeutic target for the treatment of depression. Recognition of the strong biological contribution to the aetiology and pathogenesis of depression may help doctors to identify individuals at risk and implement appropriate measures, which could improve patients’ quality of life and reduce disease burden.

Introduction

Our understanding of the immune system has grown exponentially within the last century, and more questions are raised with each new development. Over the past few decades, research has emerged to suggest that the immune system may be responsible for more than just fighting everyday pathogens. The term ‘psychoneuroimmunology’ was first coined by Dr Robert Ader and his colleagues in 1975 as a conceptual framework to encompass the emerging interactions between the immune system, the nervous system, and psychological functioning. Cytokines have since been found to be important mediators of this relationship. [1] There is considerable research supporting the hypothesis that proinflammatory cytokines, in particular interleukin-6 (IL-6), play a key role in the aetiology and pathophysiology of depression. [1-5] While both positive and negative results have been reported in individual studies, a recent meta-analysis supports the association between depression and circulating IL-6 concentration. [6] This review will explore the impact of depression in Australia, the role of IL-6, its proposed links to depression, and the clinical implications of these findings.

Depression in Australia and its diagnosis

Depression belongs to a group of affective disorders and is one of the most prevalent mental illnesses in Australia. [7] It contributes one of the highest disease burdens in Australia, closely following cancers and cardiovascular diseases. [7] Most of the burden of mental illness, measured in disability adjusted life years (DALYs), is due to years of life lost through disability (YLD) rather than years of life lost to death (YLL). This makes mental disorders the leading contributor (23%) to the non-fatal burden of disease in Australia. [7] Specific populations, including patients with chronic diseases such as diabetes, cancer, cardiovascular disease, and end-stage kidney disease, [1,3,4,10] are particularly vulnerable to this form of mental illness. [8,9] The accurate diagnosis of depression in these patients can be difficult because symptoms inherent to the disease or its treatment overlap with the diagnostic criteria for major depression. [10-12] Nevertheless, accurate diagnosis and treatment of depression is essential and can result in real gains in quality of life for patients with otherwise incurable and progressive disease. [7] Recognising the high prevalence and potential biological underpinnings of depression in patients with chronic disease is an important step in deciding upon appropriate diagnosis and treatment strategies.
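
For clarity, these burden measures combine in a simple way (the standard burden-of-disease definitions, not figures specific to the AIHW data cited above):

DALY = YLL + YLD

where YLL is the number of deaths multiplied by the standard life expectancy at the age of death, and YLD is the number of incident cases multiplied by a disability weight and the average duration of the condition. A disorder such as depression, with relatively low mortality but a substantial disability weight and long duration, therefore contributes to total burden almost entirely through the YLD term.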

Role of IL-6 in the body

Cytokines are intercellular signalling polypeptides produced by activated cells of the immune system. Their main function is to coordinate immune responses; however, they also play a key role in providing information regarding immune activity to the brain and neuroendocrine system. [13] Interleukin-6 is a proinflammatory cytokine primarily secreted by macrophages in response to pathogens. [14] Along with interleukin-1 (IL-1) and tumour necrosis factor-alpha (TNF-α), IL-6 plays a major role in fever induction and initiation of the acute-phase response. [14] The latter response involves a shi

Categories
Case Reports Articles

Use of olanzapine in the treatment of acute mania: Comparison of monotherapy and combination therapy with sodium valproate

Introduction: The aim of this article is to review the literature and outline the evidence, if any, for the effectiveness of olanzapine as a monotherapy for acute mania in comparison with its use as a combined therapy with sodium valproate. Case study: GR, a 55-year-old male with no previous psychiatric history, was assessed by the Consultation and Liaison team and diagnosed with an acute manic episode. He was placed under an involuntary treatment order and was prescribed olanzapine 10mg once daily (OD). After failing to respond adequately to this treatment, sodium valproate 500mg twice daily (BD) was added to the regimen. Methods: A literature search was conducted using the Medline Ovid and NCBI PubMed databases. The search terms ‘mania AND olanzapine AND valproate’; ‘acute mania AND pharmacotherapy’; and ‘olanzapine AND mania’ were used. Results: Two studies were identified that addressed the efficacy and safety of olanzapine for the treatment of acute mania. Both studies confirmed the superior efficacy of olanzapine in the treatment of acute mania in comparison to placebo. No studies were identified that directly addressed whether combination therapy with olanzapine and sodium valproate is more efficacious than olanzapine monotherapy. Conclusion: There is no evidence currently available to support the use of combination olanzapine/sodium valproate as a more efficacious treatment than olanzapine alone.

Case report

GR is a 55-year-old Vietnamese male with no previous psychiatric history who was seen by the Consultation and Liaison Psychiatry team at a Queensland hospital after referral from the Internal Medicine team. He was brought into the Emergency Department the previous day by his ex-wife, who had noticed increasingly bizarre behaviour and aggressiveness. He had been discharged from hospital one week earlier, following bilateral knee replacement surgery performed twenty days before his current admission. GR was assessed thoroughly for delirium caused by a general medical condition, with all investigations showing normal results.

GR previously worked as an electrician but is currently unemployed and receives a disability benefit due to a prior back injury. He acts as a carer for his ex-wife, who resides with him at the same address. He was reported to be irritable and excessively talkative with bizarre ideas, and to have slept for less than two hours each night for the past four nights. He has no other past medical history apart from hypertension, which is well controlled with candesartan 10mg OD. He is allergic to meloxicam, with an unspecified reaction.

On assessment, GR was dressed in his nightwear, sitting on the edge of his bed. He was restless and erratic in his behaviour, with little eye contact. Speech was loud, rapid and slightly pressured. Mood could not be established, as GR did not provide a response on direct questioning. Affect was expansive, elevated and irritable. Grandiose thought was displayed, with flight of ideas. There was no evidence of perceptual disturbances such as hallucinations, and no delusions were elicited. Insight and judgement were extremely poor. GR was assessed to have a moderate risk of violence, with no identified risk of suicide, self-harm, or vulnerability.

After a request and recommendation for assessment, GR was diagnosed with an acute manic episode in accordance with Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR) criteria and placed under an involuntary treatment order. He was prescribed olanzapine 10mg OD. After failing to respond adequately to this treatment, sodium valproate 500mg BD was added to the regimen, and improvement was seen within days of the addition.

Introduction

A manic episode, as defined by the DSM-IV-TR, is characterised by a distinct period of abnormally and persistently elevated, expansive or irritable mood lasting at least one week (or any duration if hospitalisation is required). It is associated with a number of other persistent symptoms, including grandiosity, decreased need for sleep, talkativeness, distractibility and psychomotor agitation, causing impaired functioning and not accounted for by another disorder. [1] Mania tends to have an acute onset, and it is these episodes that define the presence of bipolar disorder: Bipolar I Disorder is characterised by mania and major depression, or mania alone, whereas Bipolar II Disorder is defined by hypomania and major depression. [1] The pharmacological management of acute mania involves primary treatment of the pathologically elevated mood. A number of medications are recommended, including lithium, anti-epileptics (sodium valproate or carbamazepine) and second generation antipsychotics such as olanzapine, quetiapine, risperidone, or ziprasidone. [2] Suggested approaches for patients with mania who fail to respond to a single medication include optimising the current drug, switching to a different drug, or using drugs in combination. [2] GR was initially managed with olanzapine 10mg OD; after he failed to respond adequately, sodium valproate 500mg BD was added. This raises the following question: is the use of combination therapy of olanzapine and sodium valproate more efficacious than olanzapine monotherapy?

Objective

The objective of this article was to review the literature and outline the evidence available, if any, for the effectiveness of olanzapine as a monotherapy for acute mania in comparison with its use as a combined therapy with sodium valproate. The long-term outcome and efficacy of these two therapies is outside the scope of this report.

Data collection

In order to address the question identified in the objective, a literature search was conducted using the Medline Ovid and NCBI PubMed databases, with limits set to include only articles written in English and available as full text in journals subscribed to by James Cook University. The search terms ‘mania AND olanzapine AND valproate’; ‘acute mania AND pharmacotherapy’; and ‘olanzapine AND mania’ were used. A number of articles were also identified through the related articles link provided by the NCBI PubMed database. Articles including randomised controlled trials (Level II evidence) and meta-analyses (Level I evidence) were reviewed; however, no study was found that compared the use of olanzapine as a monotherapy with the use of combined therapy of olanzapine and sodium valproate.
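
As an illustration only, the same boolean searches could be reproduced programmatically against PubMed through the NCBI E-utilities ESearch endpoint. The sketch below uses the publicly documented API with the search terms taken from the methods above; the original search was run through the Medline Ovid and PubMed web interfaces, not through code.

import urllib.parse
import urllib.request

# Boolean search strings as reported in the data collection section
search_terms = [
    "mania AND olanzapine AND valproate",
    "acute mania AND pharmacotherapy",
    "olanzapine AND mania",
]

for term in search_terms:
    # NCBI E-utilities ESearch: returns matching PubMed IDs (PMIDs) as XML
    query = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmax": 20})
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{query}"
    with urllib.request.urlopen(url) as response:
        xml = response.read().decode()
    print(term, "->", xml.count("<Id>"), "PMIDs returned (capped at retmax)")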

Discussion

Efficacy of olanzapine as a monotherapy

Two studies were identified that addressed the efficacy and safety of olanzapine for the treatment of acute mania. The first, by Tohen et al. in 1999, [3] was a randomised, double-blind, placebo-controlled, parallel-group study involving 139 patients who met the DSM-IV-TR criteria for either a mixed or manic episode, with 70 assigned to olanzapine 10mg OD and 69 to placebo. Both treatment groups were similar in their baseline characteristics and severity of illness, and therapy lasted three weeks. After the first day of treatment, the daily dosage could be increased or decreased by 5mg each day within the allowed range of 5-20mg/day. The use of lorazepam as a concurrent medication was allowed up to 4mg/day. [3] Patients were assessed at baseline and at the end of the study. The Young Mania Rating Scale was used as the primary efficacy measure, with the change in total score from baseline to endpoint as the outcome of interest.

The study found that those treated with olanzapine showed a greater mean improvement in total scores on the Young Mania Rating Scale, with a difference of -5.38 points (95% CI -10.31 to -0.93). [3] Clinical response (a decrease of 50% or more from the baseline score) was also seen in 48.6% of patients receiving olanzapine compared to 24.2% of those assigned to placebo. [3] Improvement was also seen in other measures, such as the severity of mania rating on the Clinical Global Impression – Bipolar version and the total score on the Positive and Negative Symptom Scale. [3]
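
To make the response criterion concrete, the short sketch below applies the trial’s definition of clinical response, a decrease of 50% or more in total Young Mania Rating Scale score from baseline to endpoint (the scores in the example are hypothetical, not trial data):

def is_clinical_response(baseline: float, endpoint: float) -> bool:
    """Clinical response = decrease of >= 50% from the baseline YMRS total score."""
    if baseline <= 0:
        raise ValueError("baseline YMRS score must be positive")
    percent_decrease = (baseline - endpoint) / baseline * 100
    return percent_decrease >= 50.0

# Hypothetical patient: baseline 28, endpoint 12 is a 57% decrease
print(is_clinical_response(28, 12))  # True, classified as a responder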

A second randomised, double-blind, placebo-controlled study was conducted by Tohen et al. in 2000. [4] This four-week trial had a similar methodology, with identical inclusion criteria, primary efficacy measure and criteria for clinical response. It was, however, designed to also address some of the limitations of the first trial, particularly the short treatment period, and to further determine the efficacy and safety of olanzapine in the treatment of acute mania. [4] The study design, method and assessment were clearly outlined. The study involved 115 patients and found a mean improvement of -6.65 points in the Young Mania Rating Scale score relative to placebo, as well as a statistically significant greater clinical response in the olanzapine group. [4] Both studies confirmed the superior efficacy of olanzapine in the treatment of acute mania in comparison to placebo in a number of subgroups, including manic versus mixed episode and psychotic versus non-psychotic manic episode. [3,4]

The efficacy of olanzapine as monotherapy has also been compared with a number of other first line medications, including lithium, haloperidol and sodium valproate. Two studies were identified that evaluated the efficacy of olanzapine and sodium valproate for the treatment of acute/mixed mania; both demonstrated olanzapine to be an effective treatment. [5,6] Tohen et al. (2002) [5] showed olanzapine to produce superior improvement in mania rating scores and clinical response when compared to sodium valproate; however, this result may have been affected by differences between the dosage regimens used in the study and mean modal dosages. [7] Zajecka (2002) [6] described no significant differences between the two medications. In comparison to lithium, a small trial by Berk et al. in 1999 [8] described no statistically significant differences between the two medications. Similar rates of remission and response were shown in a twelve-week double-blind study comparing olanzapine and haloperidol for the treatment of acute mania. [9]

The evidence presented from these studies suggests olanzapine at a dosage range of 5-20mg/day is an efficacious therapy in the treatment of acute manic episodes when compared to placebo and a number of other medications.

Efficacy of combination therapy of olanzapine and sodium valproate

As mentioned previously, no studies were identified that directly addressed whether combination therapy with olanzapine and sodium valproate is more efficacious than olanzapine monotherapy. One study by Tohen et al. in 2002 [10] investigated the efficacy of olanzapine in combination with sodium valproate for the treatment of mania; however, the comparison was with sodium valproate monotherapy rather than olanzapine monotherapy.

This study was a six-week, double-blind, placebo-controlled trial that evaluated patients who had failed to respond to two weeks of monotherapy with sodium valproate or lithium. A total of 344 patients were randomised to receive either combination therapy with olanzapine or continued monotherapy with placebo. [10] Efficacy was measured using the Young Mania Rating Scale; combination therapy with olanzapine and sodium valproate produced greater improvement in total scores, as well as clinically significant improvements in clinical response rates, when compared to sodium valproate monotherapy. [10] This improvement was demonstrated by almost all measures used in the study. However, assignment to valproate or lithium therapy was not randomised, with a larger number of patients receiving valproate monotherapy; this was noted as a limitation of the study. [10] The lack of an olanzapine monotherapy group within this study also prevents exploration of a postulated synergistic effect between olanzapine and mood stabilisers such as sodium valproate. [10]

The study by Tohen et al. (2002) [10] does show that olanzapine combined with sodium valproate has superior efficacy for the treatment of manic episodes compared with sodium valproate alone, which may indicate that combination therapy is more effective than monotherapy. Whilst this suggests that a patient not responding to initial therapy may benefit from the addition of a second medication, these study results cannot be generalised to a comparison between olanzapine monotherapy and sodium valproate/olanzapine combination therapy.

Conclusion

When first line monotherapy for the treatment of acute manic episodes fails, the therapeutic guidelines recommend combination therapies as an option to improve response. [2] However, there is no evidence currently available to either support or refute the use of combination olanzapine/sodium valproate as a more efficacious treatment than olanzapine alone. As no studies have been conducted addressing this specific question, the ability to comment on the appropriateness of the management of GR’s acute manic episode is limited.

This review has revealed a need for further studies evaluating the effectiveness of combination therapy for the treatment of acute manic episodes. In order to answer the question raised, a placebo-controlled trial with a large sample size, comparing olanzapine monotherapy with combination therapy, is essential to ascertain which approach is most effective. Another potential area for future research is to assess the best approach for patients who fail to respond to initial monotherapy (increasing the current dose, changing drugs, or adding medications), and to identify whether patient characteristics, such as whether they are experiencing a manic or mixed episode, have any influence on the effectiveness of particular pharmacotherapies. This information would provide more evidence on which to base future recommendations.

There is clear evidence supporting the efficacy of olanzapine monotherapy in the treatment of acute mania, as well as evidence suggesting that combined therapy with sodium valproate is also effective; however, a comparison between the two approaches to management could not be made. When evidence is lacking, it becomes appropriate to consider the progress of the patient in order to assess the efficacy of the current management plan. As GR experienced considerable improvement, this may indicate that his current therapy is suitable for his condition.

Consent declaration

Informed consent was obtained from the patient for the original case report.

Conflicts of interest

None declared.

Correspondence

H Bennett: hannah.bennett@my.jcu.edu.au


Categories
Review Articles Articles

Is Chlamydia trachomatis a cofactor for cervical cancer?

Introduction

The most recent epidemiological publication on the worldwide burden of cervical cancer reported that cervical cancer (0.53 million cases) was the third most common female cancer in 2008, after breast (1.38 million cases) and colorectal cancer (0.57 million cases). [1] Cervical cancer is the leading cause of cancer-related death among women in Africa, Central America, South-Central Asia and Melanesia, indicating that it remains a major public health problem in spite of effective screening methods and vaccine availability. [1]

The age-standardised incidence of cervical cancer in Australian women (20-69 years) decreased by approximately 50% from 1991 (the year the National Cervical Screening Program was introduced) to 2006 (Figure 1). [2,3] Despite this drop, the Australian Institute of Health and Welfare estimated that cervical cancer incidence and mortality would increase by 1.5% and 9.6% respectively in 2010. [3]

Human papillomavirus (HPV) is required but not sufficient to cause invasive cervical cancer (ICC). [4-6] Not all women with an HPV infection progress to develop ICC. This implies the existence of cofactors in the pathogenesis of ICC, such as smoking, sexually transmitted infections, age at first intercourse and number of lifetime sexual partners. [7] Chlamydia trachomatis (CT) is the most common bacterial sexually transmitted infection (STI) and has been associated with the development of ICC in many case-control and population-based studies. [8-11] However, a clear cause-and-effect relationship between CT infection, HPV persistence and progression to ICC as an end stage has not been elucidated. This article aims to review the literature for evidence that CT acts as a cofactor in HPV establishment and the development of ICC. Understanding CT as a risk factor for ICC is crucial, as it is amenable to prevention.

Aim: To review the literature to determine whether infection with Chlamydia trachomatis (CT) acts as a cofactor in the pathogenesis of invasive cervical cancer (ICC) in women. Methods: Web-based Medline and Australian Institute of Health and Welfare (AIHW) searches for the key terms: cervical cancer (including neoplasia, malignancy and carcinoma), chlamydia, human papillomavirus (HPV) and immunology. The search was restricted to English language publications on ICC (both squamous and adenocarcinoma) and cervical intraepithelial neoplasia (CIN) between 1990-2010. Results: HPV is essential but not sufficient to cause ICC. Past and current infection with CT is associated with squamous cell carcinoma of the cervix in HPV-positive women. CT infection induces both protective and pathologic immune responses in the host, depending on the balance between Th1- and Th2-mediated immunity. CT most likely behaves as a cervical cancer cofactor by 1) evading the host immune system and 2) enhancing chronic inflammation. These factors increase susceptibility to subsequent HPV infection and promote HPV persistence in the host. Conclusion: Prophylaxis against CT is important in reducing the incidence of ICC in HPV-positive women. GPs should be raising awareness of the association between CT and ICC in their patients.

Evidence for the role of HPV in the aetiology and pathogenesis of cervical cancer

HPV is a species-specific, non-enveloped, double-stranded DNA virus that infects squamous epithelia and consists of the major capsid protein L1 and the minor capsid protein L2. More than 130 HPV types have been classified based on their genotype; HPV 16 (50-70% of cases) and HPV 18 (7-20% of cases) are the most important players in the aetiology of cervical cancer. [6,12] Genital HPV is usually transmitted via skin-to-skin contact during sexual intercourse but does not require vaginal or anal penetration, which implies that condoms offer only partial protection against CIN and ICC. [6] The risk factors for contracting HPV infection are early age at first sexual activity, multiple sexual partners, early age at first delivery, increased number of pregnancies, smoking, immunosuppression (for example, human immunodeficiency virus or medication), and long-term oral contraceptive use. Social customs in endemic regions, such as child marriages, polygamy and high parity, may also increase the likelihood of contracting HPV. [13] More than 80% of HPV infections are cleared by the host’s cellular immune response, which starts about three months after inoculation of the virus. HPV can be latent for 2-12 months post-infection. [14]

Molecular Pathogenesis

HPV particles enter basal keratinocytes of the mucosal epithelium via binding of virions to the basement membrane of disrupted epithelium. This is mediated via heparan sulphate proteoglycans (HSPGs) found in the extracellular matrix and on the cell surface of most cells. The virus is then internalised to establish an infection, mainly via a clathrin-dependent endocytic mechanism; however, some HPV types may use alternative uptake pathways to enter cells, such as a caveolae-dependent route or tetraspanin-enriched domains as a platform for viral uptake. [15] The virus replicates in non-dividing cells that lack the necessary cellular DNA polymerases and replication factors. Therefore, HPV encodes proteins that reactivate cellular DNA synthesis in non-cycling cells, inhibit apoptosis, and delay the differentiation of the infected keratinocyte to allow viral DNA replication. [6] Integration of the viral genome into the host DNA causes deregulation of the E6 and E7 oncogenes of high-risk HPV (HPV 16 and 18) but not of low-risk HPV (HPV 6 and 11). This results in expression of E6 and E7 throughout the epithelium, leading to the aneuploidy and karyotypic chromosomal abnormalities that accompany keratinocyte immortalisation. [5]

Natural History of HPV infection and cervical cancer

Low-risk HPV infections are usually cleared by cellular immunity coupled with seroconversion and antibodies against the major coat protein L1. [5,6,12] Infection with high-risk HPV is strongly associated with the development of squamous cell and adenocarcinoma of the cervix, and this risk is modified by cofactors such as smoking and STIs. [4,9,10] The progression of cervical cancer in response to HPV is schematically illustrated in Figure 2.

Chlamydia trachomatis and the immune response

CT is an obligate intracellular pathogen and the most common bacterial cause of STIs. It is associated with sexual risk-taking behaviour and, owing to its slow growth cycle, often causes asymptomatic and therefore undiagnosed genital infections. [16] A CT infection is targeted by innate immune cells, T cells and B cells. Protective immune responses control the infection, whereas pathological responses lead to chronic inflammation that causes tissue damage. [17]

Innate immunity

The mucosal epithelium of the genital tract provides the first line of host defence. If CT succeeds in entering the mucosal epithelium, the innate immune system is activated through the recognition of pathogen-associated molecular patterns (PAMPs) by pattern-recognition receptors such as the Toll-like receptors (TLRs). Although CT lipopolysaccharide can be recognised by TLR4, TLR2 is more crucial for signalling pro-inflammatory cytokine production. [18] This leads to the production of pro-inflammatory cytokines such as interleukin-1 (IL-1), IL-6, tumour necrosis factor-α (TNF-α) and granulocyte-macrophage colony-stimulating factor (GM-CSF). [17] In addition, chemokines such as IL-8 increase the recruitment of innate immune cells such as macrophages, natural killer (NK) cells, dendritic cells (DCs) and neutrophils, which in turn produce more pro-inflammatory cytokines to restrict CT growth. Infected epithelial cells release matrix metalloproteases (MMPs) that contribute to tissue proteolysis and remodelling; neutrophils also release MMPs and elastases that contribute to tissue damage. NK cells produce interferon-gamma (IFN-gamma), which drives CD4 T cells toward a Th1-mediated immune response. The infected tissue is infiltrated by a mixture of CD4 cells, CD8 cells, B cells, and plasma cells (PCs). [17,19,20] DCs are essential for processing and presenting CT antigens to T cells, thereby linking innate and adaptive immunity.

Adaptive Immunity

Both CD4 and CD8 cells contribute to the control of CT infection. In 2000, Morrison et al. showed that B cell-deficient mice depleted of CD4 cells are unable to clear CT infection. [21] However, another study in 2005 showed that passive transfer of chlamydia-specific monoclonal antibodies into B cell-deficient, CD4-depleted mice restored their ability to control a secondary CT infection. [22] This indicates a strong synergy between CD4 and B cells in the adaptive immune response to CT. B cells produce CT-specific antibodies to combat the pathogen. In contrast, CD8 cells produce IL-4, IL-5 and IL-13, which do not appear to protect against chlamydia infection and may even indirectly enhance chlamydia load by inhibiting the protective CD4 response. [23] A similar observation was made by Agrawal et al., who examined cervical lymphocyte cytokine responses of 255 CT antibody-positive women with or without fertility disorders (infertility and multiple spontaneous abortions) and of healthy control women negative for CT serum IgM or IgG. [20] The study revealed a significant increase in CD4 cells in the cervical mucosa of fertile women compared with women with fertility disorders and with negative control women. There was a very small increase in CD8 cells in the cervical mucosa of CT-infected women in both groups. The results showed that cervical cells from the women with fertility disorders secreted higher levels of IL-1β, IL-6, IL-8, and IL-10 in response to CT, whereas cervical cells from antibody-positive fertile women secreted significantly higher levels of IFN-gamma and IL-12. This suggests that a skewed immune response toward Th1 prevalence protects against chronic infection. [20]

The pathologic response to CT can result in inflammatory damage within the upper reproductive tract, due either to failed or weak Th1 action resulting in chronic infection, or to an exaggerated Th1 response. Alternatively, chronic infection can occur if the Th2 response dominates the Th1 immune response, resulting in autoimmunity and direct cell damage, which in turn enhances tissue inflammation. Inflammation also increases the expression of human heat shock protein (HSP), which induces production of IL-10 via autoantibodies, leading to CT-associated pathology such as tubal blockage and ectopic pregnancy. [24]

Evidence that Chlamydia trachomatis is a cofactor for cervical cancer

Whilst it has been established that HPV is a necessary factor in the development of cervical cancer, it is still unclear why the majority of women infected with HPV do not progress to ICC. Several studies in the last decade have focused on the role of STIs in the pathogenesis of ICC and found that CT infection is consistently associated with squamous cell ICC.

In 2000, Koskela et al. performed a large case-control study within a cohort of 530,000 Nordic women to evaluate the role of CT in the development of ICC. [10] One hundred and eighty-two women with ICC (diagnosed during a mean follow-up of five years after serum sampling) were identified by linking the data files of three Nordic serum banks with the cancer registries of Finland, Norway and Sweden. Microimmunofluorescence (MIF) was used to detect CT-specific IgG, and HPV16-, 18- and 33-specific IgG antibodies were determined by standard ELISAs. Serum antibodies to CT were associated with an increased risk of cervical squamous cell carcinoma (HPV- and smoking-adjusted odds ratio (OR), 2.2; 95% confidence interval (CI), 1.3-3.5). The association also remained after adjustment for smoking in both HPV16-seronegative and HPV16-seropositive cases (OR, 3.0; 95% CI, 1.8-5.1 and OR, 2.3; 95% CI, 0.8-7.0 respectively). This study provided sero-epidemiologic evidence that CT could cause squamous cell ICC; however, the authors were unable to explain the biological association between CT and squamous cell ICC.
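
For readers less familiar with the statistics quoted in this section, an odds ratio and its 95% confidence interval can be computed from a 2x2 exposure-outcome table as sketched below. The formulae are generic textbook ones and the counts are invented for illustration; they are not the actual data or the adjusted analysis from Koskela et al.

import math

# Hypothetical 2x2 table (invented counts):
# rows = CT-seropositive / CT-seronegative; columns = ICC cases / controls
a, b = 60, 40     # seropositive cases, seropositive controls
c, d = 122, 178   # seronegative cases, seronegative controls

odds_ratio = (a * d) / (b * c)

# 95% CI on the log-odds scale: ln(OR) +/- 1.96 * SE, SE = sqrt(1/a + 1/b + 1/c + 1/d)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")

An interval that excludes 1.0, as in the results quoted above, indicates a statistically significant association.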

Many more studies emerged in 2002 to investigate this association between CT and ICC further. Smith et al. performed a hospital-based case-control study of 499 women with ICC from Brazil and 539 from Manila, which revealed that CT-seropositive women had a twofold increase in the risk of squamous ICC (OR, 2.1; 95% CI, 1.1-4.0) but not of adenocarcinoma or adenosquamous ICC (OR, 0.8; 95% CI, 0.3-2.2). [8] Similarly, Wallin et al. conducted a population-based prospective study of 118 women who developed cervical cancer after a normal Pap smear (an average of 5.6 years later), with follow-up extending over 26 years. [25] PCR analysis for CT and HPV DNA showed that the relative risk for ICC associated with past CT infection, adjusted for concomitant HPV DNA positivity, was 17.1. They also concluded that the presence of CT and of HPV was not interrelated.
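
The relative risk quoted for the Wallin et al. cohort is a different measure from the odds ratios above: in a prospective design it is the ratio of outcome incidence between exposed and unexposed groups (a generic definition, not the adjusted calculation used in that study):

RR = [a / (a + b)] / [c / (c + d)]

where a and c are the numbers of CT-positive and CT-negative women who developed ICC, and b and d the corresponding numbers who did not. Odds ratios approximate relative risks only when the outcome is rare, which is why the two measures should not be compared directly.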

In contrast, another study examining the association between CT and HPV in women with cervical intraepithelial neoplasia (CIN) found an increased rate of CT infection in HPV-positive women (29/49) compared with HPV-negative women (10/80) (p<0.001). [26] However, no correlation between HPV and CT co-infection was found, and the authors suggested that the increased CT infection rate in HPV-positive women is presumably due to HPV-related factors, including modulation of the host’s immunity. In 2004, a case-control study of 1,238 women with ICC and 1,100 control women in seven countries, coordinated by the International Agency for Research on Cancer (IARC), France, also supported the findings of previous studies. [7]
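
The p-value reported for the CIN study can be checked directly from the proportions given (29 of 49 HPV-positive women versus 10 of 80 HPV-negative women testing CT-positive). A minimal sketch using SciPy’s Fisher exact test, a reasonable choice for a 2x2 table of this size, although the original authors’ exact method is not stated:

from scipy.stats import fisher_exact

# 2x2 table built from the reported proportions:
# rows = HPV-positive, HPV-negative; columns = CT-positive, CT-negative
table = [[29, 20],   # 29 of 49 HPV-positive women were CT-positive
         [10, 70]]   # 10 of 80 HPV-negative women were CT-positive

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2g}")  # p falls well below 0.001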

Strikingly, a recent study in 2010 reported no association between CT infection, as assessed by DNA or IgG, and the risk of cervical premalignancy after controlling for carcinogenic HPV-positive status. [11] The authors justified the difference from previous results by criticising the retrospective nature of the IARC study, which meant that HPV and CT status at the relevant times were not available. [7] However, other prospective studies have also identified an association between CT and ICC development. [9,25] Therefore, the results of this one study remain at odds with practically every other study that has found an association between CT and ICC in HPV-infected women.

Consequently, it is evident that CT infection plays a role as a cofactor in squamous cell ICC in HPV-infected women, but, contrary to the earlier suggestion of Koskela et al., [10] it is not an independent cause of ICC. Previously reported cause-and-effect associations between CT and HPV most likely reflect CT infection increasing susceptibility to HPV. [9,11,27] The mechanisms by which CT can act as a cofactor for ICC relate to CT-induced inflammation (associated with metaplasia) and evasion of the host immune response, which increase susceptibility to HPV infection and enhance HPV persistence in the host. CT can directly degrade the RFX-5 and USF-1 transcription factors that induce expression of MHC class I and MHC class II respectively. [17,28] This prevents recognition of both HPV and CT by CD4 and CD8 cells, thus preventing T-cell effector functions. CT can also suppress IFN-gamma-induced MHC class II expression by selective disruption of IFN-gamma signalling pathways, hence evading host immunity. [28] Additionally, as discussed above, CT induces inflammation and metaplasia of infected cells, which predisposes them as target cells for HPV. CT infection may also increase access of HPV to the basal epithelium and increase HPV viral load. [16]

Conclusion

There is sufficient evidence to suggest that CT infection can act as a cofactor in squamous cell ICC development, given the consistently positive correlations between CT infection and ICC in HPV-positive women. CT evades the host immune response and induces chronic inflammation, and it is presumed that this prevents the clearance of HPV from the body, thereby increasing the likelihood of developing ICC. More studies are needed to establish a clear biological pathway linking CT to ICC, to support the positive correlation found in epidemiological studies. An understanding of the significant role played by CT as a cofactor in ICC development should be applied to maximise efforts in CT prophylaxis, starting at the primary health care level. Novel public health strategies must be devised to reduce CT transmission and raise awareness among women.

Conflicts of interest

None declared.

Correspondence

S Khosla: surkhosla@hotmail.com