Book Reviews

Starlight stars bright

White T. Starlight: An Australian Army doctor in Vietnam. Brisbane: CopyRight Publishing; 2011.

RRP: $33.00

Not many of us dream of serving as a medical doctor on the front lines of war. War is, after all, the antithesis of everything the medical profession stands for. [1] In Starlight, Dr Tony White AM vividly recounts his tour of duty in South Vietnam between 1966 and 1967 through correspondence exchanged with his family. STARLIGHT was the radio call sign for the medical officer, and it bore the essence of what was expected of young White as a Regimental Medical Officer (RMO) in the 5th Battalion of the Royal Australian Regiment (5 RAR).

White was born in Perth, grew up in Kenya and read medicine at Clare College, Cambridge. After completing the first half of the six-year degree, he moved back with his family to Sydney, where the pivotal decision to join military service was made. White accepted a scholarship from the Australian government to continue at the University of Sydney in exchange for four years of service in the Australian Defence Force after a year of residency.

In May 1966, White’s wartime duties commenced with 5 RAR in Vung Tau, southeast of Saigon, dubbed “Sufferer’s Paradise”. After a brief settling-in period, the battalion moved to Nui Dat, its operational base for the year. The initial excitement of the 25-year-old’s first visit to Asia quickly faded as the realities of war – the mud, the sweat and the blood – set in. Footnotes and explanations of military jargon and organisation are immensely helpful in acquainting the reader with the battalion’s environment. As an RMO, White worked round the clock, performing general practice duties such as sick parades and preventive medicine, emergency duties attending to acute trauma, and public health duties monitoring for disease outbreaks and maintaining hygiene. The stark difference from civilian practice is candidly described: “You live, eat, dig, and [defecate] with your patients and, like them, get every bit as uncomfortable and frightened. There is no retreat or privacy.”

From the early “friendly fire” incidents and booby traps to the horror of landmines, White’s affecting letters offer a very raw view of war’s savagery. It was a war fought against guerrillas, much like the present war in Afghanistan, where the enemy is unknown and threat may erupt into danger at any time. During the numerous operations 5 RAR conducted, White attended to and comforted many wounded. With every digger killed in action, a palpable sense of loss accompanies the narration. White clearly laments the “senseless killing of war”: “You spend all that time – 20 years or so – making a man, preserving his health, educating and training him, to have him shot to death.” White himself had close brushes with death: he was pinned down by sniper fire on one occasion, and in the worst of the tragedies encountered he found himself in the middle of a minefield. The chapter “Going troppo” ruminates on the enduring psychological effects of these events as the year unfolded.

The insanity of war is balanced by many heartening acts. First and foremost is the remarkable resilience of the diggers, whose tireless disposition to work inspired White profoundly. White also voluntarily set up regular clinics in surrounding villages to provide care for civilians despite the threat of enemy contact. In an encouraging twist, friendly and enemy (Viet Cong) casualties alike received the same standard of care. Even more ironic were the congenial interactions between the two factions within the confines of the hospital. Perhaps most moving of all are White’s heartfelt words of appreciation to his family, who sustained his spirits by sending letters and homemade goodies like fruitcakes, biscuits and smoked oysters.

So why should you read this book? Textbooks do not teach us empathy. In these 184 pages White shares experiences that we all hope never to encounter ourselves. Yet countless veterans, refugees and abuse victims have faced such terror, and understanding their narratives is essential to providing care and comfort. In the final chapters White gives a rare physician’s perspective on post-traumatic stress disorder and on how he reconciled the profound impact of war with a successful later career in dermatology. These invaluable lessons shine through this book.

Conflicts of Interest

None declared.

References


[1] DeMaria AN. The physician and war. Journal of the American College of Cardiology. 2003;41(5):889-90.

Feature Articles

Burdens lifted, hopes restored

During the summer break of our third year of medicine at the University of Tasmania, we decided to embark on an elective at Padhar in India. India fascinated us as an opportunity to experience a very different health care system and to learn more about Indian culture.

Padhar is a small town in Madhya Pradesh, in the central highlands of India. It appealed to us because of its rural location. This tiny town boasts a 200-bed multispecialty missionary hospital, which started out in 1958 as a clinic. The hospital is often the first point of contact for many patients from surrounding states, including the Gond and Korku tribal communities, and some patients travel for days to seek medical help here.

After fifteen hours of flying and an eventful 26-hour train ride, we arrived at Itarsi Junction, a bumpy two-hour drive away from Padhar.

Padhar is a declared malaria-endemic area, so we came ‘armed’ with insect repellents and mosquito coils. Despite our best efforts, we were not spared the wrath of the mosquitoes, and we couldn’t help but feel paranoid when we got our first bites, even though we were taking our doxycycline regularly.

Tuberculosis (TB) is a serious and common health problem in Padhar. We had not expected such a high prevalence: for doctors in Padhar, the first differential diagnosis for a cough and a cold was often TB until proven otherwise. It was not uncommon to see the sorts of chest X-rays with cavitating lesions that we had previously only seen in textbooks.

Another difference we observed during our elective was the vastly different attitude to hygiene. In Australia we are well acquainted with the hand hygiene posters plastered all over hospital walls. In Padhar, in place of our ‘5 moments of hand hygiene’ signs are signs that read, ‘Gloves are useful but not necessary.’ Sanitation practices were also very rudimentary, as basins of water and lemon replaced the sinks and chlorhexidine we had previously taken for granted.

Textbook photographs of patients with late presentations of cancer came to life in Padhar. Geographical barriers, as well as the habit of betel nut and tobacco chewing, often result in patients presenting with large tumours of the oral cavity. One of the cases we saw was that of a 45 year old man who presented for surgical resection of a large squamous cell carcinoma on the left side of his tongue. The surgeons at Padhar performed a COMMANDO procedure (COMbined MANDibulectomy and Neck Dissection Operation), an operation at which they are particularly skilled because it is performed so often: late presentations of cancer are common here owing to the lack of preventative screening, as well as geographical barriers and poverty. It saddened us to see such a huge health disparity between a developed and a developing country.

However, despite disparities in health care systems, we found that generosity knew no boundaries, and we encountered many charming patients and helpful medical staff during our time in Padhar. In particular, we met a pair of omphalopagus conjoined twins, who were four months old at the time of our visit. Their parents were poor farmers who were devastated when the twins were born, as they did not have the means to care for them. They therefore did what they thought was best for the twins, returning home without them and leaving them in the hospital. Won over by the twins’ infectious smiles, the hospital staff decided to take them into their care. The current plan is to wait for the twins to reach ten kilograms before separating them. The cost of separation, however, is more than US$150,000, far more than the hospital can afford, and the hospital would also need to cover the cost of raising the twins. Nevertheless, the staff are determined to raise them and provide them with the best life they can have. The twins were constantly surrounded by nurses, doctors and other hospital staff, and the care and love shown by the team in Padhar certainly tugged at our heartstrings.

We also saw other cases that taught us some fundamental rules about diagnosis and history taking. One was a sixteen year old girl who presented to the emergency department complaining of a five day history of progressively worsening generalised abdominal pain. She had a background of trauma after a fall whilst collecting water from a well. Although injury to the jejunum is common after blunt force trauma, [1] the medical team had ruled it out, as it would be expected to cause very significant pain, usually leading to immediate hospital presentation. Thus it came as a surprise when a perforated jejunum was found on X-ray. This case reminded us that clinical presentations, though incredibly useful, can still be deceiving.

One of the highlights of the trip was being part of the team involved in the Mobile Clinic under the Rural Outreach Program, an initiative of Padhar Hospital. The Mobile Clinic services the surrounding villages that have limited access to healthcare due to geographical barriers. More often than not, it had been months or even years since the villagers had last engaged with the healthcare system.

The makeshift clinic attracted many people from the village and surrounding villages, as people of all ages with a myriad of diseases lined up patiently to seek medical help. The most common presentation was scabies, and we quickly ran out of permethrin cream. As Padhar Hospital has always been passionate about contributing to the fight against human immunodeficiency virus (HIV), we also took blood from patients to test for HIV and educated them about the disease and the importance of safe sexual practices.

On the last few days of our trip, we were very lucky to be part of Padhar’s celebration of World AIDS Day. In 2009, an estimated 2.4 million of India’s population of 1.2 billion were living with HIV/AIDS. [2] It was the aim of Padhar Hospital to raise awareness of HIV and AIDS in conjunction with this day. In the morning church service conducted in the hospital compound, testimonials were shared by HIV patients as well as by doctors who had clinical contact with them. During the lunch break, the hospital invited children from nearby primary schools as part of the awareness program. One of the interesting things in store for them was a skit parodying the stereotypes faced by HIV patients. It was good to see that, unlike many in the older generations, these young minds were receptive to the idea that HIV is not a deadly infectious disease that spreads through touch. The children were educated about safe sex practices as well as informed about the availability of free needles.

Whilst seeing plenty of patients and medical staff gave us opportunities and insights into medicine, our elective was also a culturally enriching experience. Generally, people were curious about our backgrounds, and it was good to be able to share our culture with them and learn about theirs. It gave us a glimpse into a very different way of life from our own. We experienced firsthand the gracious hospitality of the locals; we were invited to the wedding of one of the doctors’ daughters, despite never having met the bride before.

We also loved the sights and sounds of the town and its outskirts, from people bathing and doing their laundry in rivers to women in brightly coloured sarees carrying urns twice the size of their heads, and families of five piled onto motorcycles. We were touched by the hospitality shown by the villagers, despite the fact that we were foreigners who did not speak their language. Many villagers opened their homes to us and we had a chance to see how they live their lives, which contrasted immensely with what we were used to. They cooked with firewood and had to walk a fair distance to collect water from wells. What touched our hearts was that everyone seemed satisfied with what they had. Their voices and faces seemed to echo the old adage, “Happiness is not having what you want, but appreciating what you have.”

It was a humbling experience, and it reminded us to be grateful for everything around us. It is sad to think that in this day and age there are many people still living in poverty and unable to access healthcare. Hospitals like Padhar Hospital have certainly made a difference in rural healthcare provision. When it was time to go, we left with heavy hearts, but knowing that we will always do our best to uphold the hospital’s motto, ‘Burdens lifted, hopes restored.’

Acknowledgements

Sharene Chong and Niyanta D’souza for making the trip memorable. Dr Choudrie and the amazing team in Padhar for their hospitality. Pictures taken by Tiffany Foo.

Conflict of interest

None declared.

Correspondence

A Lim: jnalim@utas.edu.au

T Foo: sytfoo@utas.edu.au

References


[1] Langell J. Gastrointestinal perforation and the acute abdomen. Med Clin North Am. 2008;92(3):599-625.
[2] USAID. HIV/AIDS health profile [Internet]. 2010 [updated 2010 Dec; cited 2012 Apr 30]. Available from: http://www.usaid.gov/our_work/global_health/aids/Countries/asia/india.html

Feature Articles

Bring back the white coats?

Should we bring back the white coat? Is it time for this once-venerated symbol of medicine to re-establish itself amongst a new generation of fledgling practitioners? Or is this icon of medical apparel nothing more than a potentially dangerous relic of a bygone era?

Introduction

The white coat has long been a symbol of the medical profession, dating back to the late 1800s. [1] It was adopted as medical thought became more scientific. [2] Doctors wore coats to align themselves with the scientists of the day, who commonly wore beige; doctors instead chose white – the colour lacking both hue and shade – to represent purity and cleanliness. [3] Nowadays, the white coat is rarely seen in hospitals, possibly due to suspicions that it may act as a vector for the transmission of nosocomial infections. [4] This article addresses the validity of such concerns by reviewing the available literature.

The vanishing white coat

Twenty years ago in the United Kingdom (UK), white coats were commonly worn by junior doctors while consultants wore suits. [5] The choice not to wear a white coat was seen as a display of autonomous, high-ranking professionalism. [6] Many older Australian nurses recall when doctors commonly wore white coats in the hospital, yet over the last decade white coats have become a rarity in Australian hospitals. [7,8] There are many reasons for this change; Table 1 outlines some common views of doctors on the matter. Paediatricians and psychiatrists stopped using white coats because they thought the coats created communication barriers in the doctor-patient relationship. [3] Society viewed white coats as a status symbol, [7] evoking an omnipotent disposition, which was deemed inappropriate. [6,7] In addition, it was thought that white coats might be a vector for nosocomial infection. [6,9-13] With these pertinent issues, and no official policy requiring white coats, doctors gradually hung them up.

Table 1. Reasons why doctors choose to wear or not wear white coats

Reasons why doctors wear white coats:
- For identification purposes [8]
- To carry things [14]
- Hygiene [7,8]
- To protect clothes [8]
- To create a psychological barrier [3]
- Patients prefer doctors in white coats [14]
- Looks professional [8,14]

Reasons why doctors do not wear white coats:
- No one else does [8]
- Infection risk [5,8,14]
- Hot or uncomfortable [5,8,14]
- Interferes with the doctor-patient relationship [6,14]
- Lack of seniority [5]

Hospital policies and white coats

In 2007 the British Department of Health published guidelines for healthcare worker uniforms that banned the white coat from hospitals in England, [15] producing a passionate controversy. [4] The primary reason for the ban was to decrease healthcare-acquired infections, [9,12,16] supposedly supported by one of two Thames Valley University literature reviews. [6,13] Interestingly, these reviews stated there was no evidence to support the notion that clothing, or specific uniforms, could be a noteworthy medium for the spread of infection. [6,13] On closer inspection, the British policy itself states: “it seems unlikely that uniforms are a significant source of cross-infection.” [15] The text goes on to support the new uniform guidelines, including the abolition of the white coat, because “the general public’s perception is that uniforms pose an infection risk when worn inside and outside clinical settings.” [6] This statement lacks evidence, as many studies show patients prefer their doctors to wear white coats, [7,14,17] and concern among patients about infection risk is uncommon. [7] It would appear that the British Department of Health made this decision for reasons other than the compulsion of evidence.

Despite significant discussion and debate, the United States (US) has chosen not to follow England in banning the white coat. [3,12,18] The US has a strong tradition associated with the white coat, which may explain its reluctance to abandon it so quickly. In 1993, the ‘white coat ceremony’ was launched in the US, in which incoming medical students are robed in a white coat as senior doctors ‘demonstrate their belief in the student’s ability to carry on the noble tradition of doctoring.’ [1] Only five years later, 93 US medical schools had adopted this practice. [1] This indicates that the white coat is a real source of pride for doctors in the US; however, tradition alone cannot dictate hospital policy. In 2009, the American Medical Association (AMA) passed a resolution encouraging the “adoption of hospital guidelines for dress codes that minimise transmission of nosocomial infections.” [19] Rather than banning white coats, [16] the AMA proposed the need for more research, noting that there was insufficient evidence of an increased risk of nosocomial infection directly related to their use. [18]

The Australian Government National Health and Medical Research Council (NHMRC) published the Australian Guidelines for the Prevention and Control of Infection in Healthcare in 2010, outlining recommendations, based on current literature, for the implementation of infection control in Australian hospitals and other areas of healthcare. [20] It states that uniforms should be laundered daily, whether at home or at the hospital, and that the literature has not shown a need to ban white coats or other uniforms, as there is no evidence that they increase transmission of nosocomial infections. [20] These guidelines also cite the article that the British Department of Health used to support banning white coats. [6]

The evidence of white coats and nosocomial infection

Few studies have attempted to assess whether white coats are a potential source of infection. [9-12] Analysis of the limited data paints a fairly uniform picture of the minimal potential for white coats to spread infection.

In 1991, a study of the white coats of 100 UK doctors cultured no pathogenic organisms from the coats. [10] Notably, this study also found that the level of bacterial contamination did not vary with the length of time a coat had been worn, but did vary with the amount of use. [10] The definition of usage was not included in the article, although doctor-patient contact time is the most likely interpretation. Similarly, a study in 2000 isolated no methicillin-resistant Staphylococcus aureus (MRSA) or other infective organisms, but still concluded that the white coat was a possible cause of infection. [11] This study stated that white coats should not be used as a substitute for personal protective equipment (PPE) and recommended that they be removed before putting on plastic aprons. [11]

A more recent study isolated MRSA from 4% of the white coats of medical participants; although it was the biggest study of its kind, the population size was too small to demonstrate a statistically significant difference between colonised and uncolonised coats. [9] The study is also limited in that it did not compare white coat contamination with that of other clinical dress, a comparison which could potentially show no difference. There appeared to be an association between MRSA contamination and hospital laundering, with four of the six contaminated coats being hospital-laundered. [9] A major contributing factor to the contamination of white coats may be the infrequency with which they are washed: a survey within the 2009 study showed that 81% of participants had not washed their coats for more than seven days, and 17% for more than 28 days. [9] Even though the 1991 study showed that usage, not time, was the determinant of bacterial load, this does not negate the effect of a high amount of usage over a long period of time. [10]

In response to the British hospital uniform guidelines, a Colorado study published in April 2011 compared the degree and rate of bacterial contamination of a traditional, infrequently washed, long-sleeved white coat with that of a newly cleaned, short-sleeved uniform. [12] The conclusions were unexpected: after eight hours of wear, there was no difference in the degree of contamination between the two. Additionally, there was no difference in the extent of bacterial or MRSA contamination of the physicians’ cuffs. Consequently, the study does not discourage the wearing of long-sleeved white coats [12] and concludes that there is no evidence for their abolition on infection control grounds.

While all of these studies indicate that organisms responsible for nosocomial infections can be present on white coats, [10-12] the common conclusion is that daily-washed white coats pose no higher infection risk than any other clinical attire. [12] It must be recognised that there are many confounding factors in all of these studies comparing attire and nosocomial infection; hence, more studies are needed to establish clear guidelines for evidence-based practice on this issue. Understanding differences in transmission rates between specialties could assist in implementing specialty-specific infection control practices. Studies that clearly establish the transmission of organisms from uniform to patient, together with clinical data on the frequency of such transmission, would be valuable in developing policy. Additionally, nationwide hospital reviews of rates of nosocomial infection, compared against the dress of doctors and nurses, would contribute to a more complete understanding of the role uniforms play in the transmission of disease.

Australian hospitals and white coats

Surprisingly, no recommendations on doctors’ dress could be found in the Queensland state infection control guidelines published by the Centre for Healthcare Related Infection Surveillance and Prevention (CHRISP). [21] State guidelines such as these, in combination with federal guidelines, influence the policies that each individual hospital in Australia creates and implements.

A small sample of hospitals across the states and territories of Australia was canvassed to assess general attitudes towards the wearing of white coats during patient contact and whether these beliefs were evidence-based. I contacted the infection control officer of each hospital and obtained the specifics of their policies, along with an account of whether white coats were worn by students or staff. These data were collected verbally. There are obvious limitations to this crude method of data collection; it was an attempt to obtain information that is not formally recorded.

On the whole, individual hospital policies emulated the national guidelines almost exactly: rather than banning white coats, they encouraged daily washing, as for normal dress. Some hospitals had mandatory ‘bare-below-the-elbows’ and ‘no lanyard’ policies, while many did not. White coats were worn in a significant number of Australian hospitals, usually by senior consultants and medical students (see Table 2). The general response from infection control officers regarding the wearing of white coats was negative, presumably because of the long sleeves and the knowledge that the coats are probably not being washed daily. [10,12]

Table 2. Relevant policies in place regarding white coats, and whether white coats are worn, in hospitals in major Australian centres.

Hospital | Policy regarding white coats | White coat worn
Townsville Hospital | No policy | An Emergency Department doctor and surgeon
Mater Hospital – Townsville | No policy | Nil known
Royal Brisbane and Prince Charles – Metro North* | No policy | Medical students
Brisbane Princess Alexandra and Queen Elizabeth II – Metro South* | No policy | Medical students; one consultant who requires his medical students to wear white coats
Royal Darwin Hospital | Sleeves to be rolled up | Nil known
Royal Melbourne Hospital | No policy | Nil known
Royal Prince Alfred Hospital – Sydney | No policy | Senior doctors, occasionally
Royal Hobart Hospital | No policy | Nil known
Royal Adelaide Hospital | No policy | Orthopaedic surgeons, gynaecologists and medical students
Royal Perth Hospital | Sleeves to be rolled up | Only known to be worn by one doctor
Canberra Hospital | Sleeves to be rolled up | Not stated

*All the hospitals in the northern metropolitan region of Brisbane are governed by the same policy; likewise for Metro South.

This table shows that white coats are not extinct in Australian hospitals and that the policies pertaining to white coats reflect the federal guidelines. Policies regarding lanyards, ties and long sleeves differed between hospitals. It is encouraging that Australia has not followed in the footsteps of England in abolishing white coats, as there is limited scientific evidence to support such a decision. Australian policies require white coats to be laundered daily, although current literature even queries the necessity for this. [12] The negative image of white coats held by infection control officers is probably influenced by the literature showing that white coats become contaminated. [9] The real question, however, is whether the contamination of white coats differs from that of other clinical wear.

Meditations of a medical student

My own views…

I have worn a white coat on numerous occasions, during dissections and laboratory experiments, but never when in contact with patients. According to the James Cook University School of Medicine dress policy, all medical students are to wear ‘clean, tidy and appropriate’ clinical dress. [22] No detail is included regarding sleeve length, colour or style. Social norms, however, are a powerful force, and the main reason my colleagues and I do not wear white coats is simply that no one else is wearing them. This practice is consistent with a study of what Australian junior doctors think of white coats. [8]

Personally, I think a white coat would be quite useful. It may even decrease nosocomial infection: its big pockets could carry books and instruments, negating the need for a shoulder bag or for putting items down in patients’ rooms, where they become a potential cross-infection risk. As for the effect on patients, I think the psychological impact would vary from individual to individual. White coats are not the cause of the nosocomial infections that are rampant in our hospitals; the real issue is the compliance of health professionals in washing their hands and adhering to the evidence-based guidelines provided by infection control organisations. In Australia these guidelines give us the freedom to wear white coats, so why not?

Conclusion

White coats are a symbol of the medical profession and date back to the beginnings of evidence-based medicine. It is therefore fitting to let the evidence shape the policies regarding the wearing and laundering of white coats in hospitals and medical practice. There has been much debate about white coats posing an increased risk of nosocomial infection, [3,4,12,16,18] as many studies have shown that white coats carry infectious bacteria. [9-12] More notably, however, a study published in April 2011 showed that the bacterial loads on infrequently washed white coats did not differ from those on newly cleaned, short-sleeved uniforms. [12] Why Britain decided to ban white coats in 2007 remains a mystery. Australia has not banned white coats, and although some practitioners still choose to wear them, it is far from the norm. [8] According to the current evidence, there is no infection control reason to oppose a nationwide, formal re-introduction of white coats into Australian medical schools. “…Might not the time be right to rediscover the white coats as a symbol of our purpose and pride as a profession?” [1]

Conflict of interest

None declared.

Acknowledgements

Thank you to Sonya Stopar for her assistance in editing this article.

Correspondence

S Fraser: sara.fraser@my.jcu.edu.au

References

[1] Van Der Weyden MB. White coats and the medical profession. Med J Aust. 2001;174.
[2] Blumhagen DW. The doctor’s white coat: The image of the physician in modern America. Ann Intern Med. 1979;91(1):111-6.
[3] Ellis O. The return of the white coat? BMJ Careers [serial on the Internet]. 2010 Sep 1; [cited 2012 October 10]. Available from: http://careers.bmj.com/careers/advice/view-article.html?id=20001364.
[4] Kerr C. Ditch that white coat. CMAJ. 2008;178(9):1127.
[5] Sundeep S, Allen KD. An audit of the dress code for hospital medical staff. J Hosp Infect. 2006; 64(1):92-3.
[6] Loveday HP, Wilson JA, Hoffman PN, Pratt RJ. Public perception and the social and microbiological significance of uniforms in the prevention and control of healthcare-associated infections: An evidence review. British J Infect Control. 2007;8(4):10-21.
[7] Harnett PR. Should doctors wear white coats? Med J Aust. 2001;174:343-4.
[8] Watson DAR, Chapman KE. What do Australian junior doctors think of white coats? Med Ed. 2002; 36(12):1209-13.
[9] Treakle AM, Thom KA, Furuno JP, Strauss SM, Harris AD, Perencevich EN. Bacterial contamination of health care workers’ white coats. Am J Infect Control. 2009; 37(2):101-5.
[10] Wong D, Nye K, Hollis P. Microbial flora on doctors’ white coats. BMJ. 1991; 303(6817):1602-4.
[11] Loh W, Ng VV, Holton J. Bacterial flora on the white coats of medical students. J Hosp Infect. 2000; 45(1):65-8.
[12] Burden M, Cervantes L, Weed D, Keniston A, Price CS, Albert RK. Newly cleaned physician uniforms and infrequently washed white coats have similar rates of bacterial contamination after an 8-hour workday: A randomized controlled trial. J Hosp Med. 2011;6(4):177-82.
[13] Wilson JA, Loveday HP, Hoffman PN, Pratt RJ. Uniform: An evidence review of the microbiological significance of uniforms and uniform policy in the prevention and control of healthcare-associated infections. Report to the department of health (England). J Hosp Infect. 2007;66(4):301-7.
[14] Douse J, Derrett-Smith E, Dheda K, Dilworth JP. Should doctors wear white coats? Postgrad Med J. 2004;80(943):284-6.
[15] Jacob G. Uniforms and workwear: An evidence base for developing local policy [monograph on the Internet]. Leeds, England: Department of Health; 2007 [cited 2012 Oct 10]. Available from: http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/documents/digitalasset/dh_078435.pdf.
[16] Sweeney M. White coats may not carry an increased infection risk. [monograph on the Internet]. Cambridge, England: Cambridge Medicine Journal; 2011 [cited 2012 Oct 10]. Available from: http://www.cambridgemedicine.org/news/1298329618.
[17] Gherardi G, Cameron J, West A, Crossley M. Are we dressed to impress? A descriptive survey assessing patients’ preference of doctors’ attire in the hospital setting. Clin Med. 2009;9(6):519-24.
[18] Henderson J. The endangered white coat. Clin Infect Dis. 2010;50(7):1073-4.
[19] American Medical Association [homepage on the Internet]. Chicago: Board of Trustees. c2010. Reports of the Boards of Trustees. p31-3. Available from: http://www.ama-assn.org/resources/doc/hod/a-10-bot-reports.pdf.
[20] National Health and Medical Research Council. [homepage on the Internet]. Australia; Australian guidelines for the prevention and control of infection in healthcare. 2010 [cited 2012 Oct 10]. Available from: http://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/cd33_complete.pdf.
[21] Centre for healthcare related infection surveillance and prevention. [homepage on Internet]. Brisbane; [updated 2012 October; cited 2012 Oct 10]. Available from: http://www.health.qld.gov.au/chrisp/.
[22] James Cook University School of Medicine and Dentistry [homepage on the Internet]. Townsville. 2012 [cited 2012 Oct 10]. Clothing; [1 screen]. Available from: https://learnjcu.jcu.edu.au/webapps/portal/frameset.jsp?tab_tab_group_id=_253_1&url=/webapps/blackboard/execute/courseMain?course_id=_18740_1

Review Articles

A biological explanation for depression: The role of interleukin-6 in the aetiology and pathogenesis of depression and its clinical implications

Depression is one of the most common health problems addressed by general practitioners in Australia. It is well known that biological, psychosocial and environmental factors play a role in the aetiology of depression. Research into the possible biological mechanisms of depression has identified interleukin-6 (IL-6) as a potential biological correlate of depressive behaviour, with proposed contributions to the aetiology and pathogenesis of depression. Interleukin-6 is a key proinflammatory cytokine involved in the acute phase of the immune response and a potent activator of the hypothalamic-pituitary-adrenal axis. Patients with depression have higher than average concentrations of IL-6 compared to non-depressed controls, and a dose-response correlation may exist between circulating IL-6 concentration and the degree of depressive symptoms. Based on these insights, the ‘cytokine theory of depression’ proposes that proinflammatory cytokines, such as IL-6, act as neuromodulators and may mediate some of the behavioural and neurochemical features of depression. Longitudinal and case-control studies across a wide variety of patient cohorts, disease states and clinical settings provide evidence for a bidirectional relationship between IL-6 and depression. Thus IL-6 represents a potential biological intermediary and therapeutic target for the treatment of depression. Recognition of the strong biological contribution to the aetiology and pathogenesis of depression may help doctors to identify individuals at risk and implement appropriate measures, which could improve patients’ quality of life and reduce disease burden.

Introduction

Our understanding of the immune system has grown exponentially within the last century, and more questions are raised with each new development. Over the past few decades, research has emerged to suggest that the immune system may be responsible for more than just fighting everyday pathogens. The term ‘psychoneuroimmunology’ was first coined by Dr Robert Ader and his colleagues in 1975 as a conceptual framework to encompass the emerging interactions between the immune system, the nervous system and psychological functioning. Cytokines have since been found to be important mediators of this relationship. [1] There is considerable research supporting the hypothesis that proinflammatory cytokines, in particular interleukin-6 (IL-6), play a key role in the aetiology and pathophysiology of depression. [1-5] While both positive and negative results have been reported in individual studies, a recent meta-analysis supports the association between depression and circulating IL-6 concentration. [6] This review will explore the impact of depression in Australia, the role of IL-6, the proposed links to depression and the clinical implications of these findings.

Depression in Australia and its diagnosis

Depression belongs to the group of affective disorders and is one of the most prevalent mental illnesses in Australia. [7] It contributes one of the highest disease burdens in Australia, closely following cancers and cardiovascular diseases. [7] Most of the burden of mental illness, measured in disability adjusted life years (DALYs), is due to years of life lost through disability (YLD) rather than years of life lost to death (YLL). This makes mental disorders the leading contributor (23%) to the non-fatal burden of disease in Australia. [7] Specific populations, including patients with chronic diseases such as diabetes, cancer, cardiovascular disease and end-stage kidney disease, [1,3,4,10] are particularly vulnerable to this form of mental illness. [8,9] The accurate diagnosis of depression in these patients can be difficult because symptoms inherent to the disease or its treatment overlap with the diagnostic criteria for major depression. [10-12] Nevertheless, accurate diagnosis and treatment of depression is essential and can result in real gains in quality of life for patients with otherwise incurable and progressive disease. [7] Recognising the high prevalence and potential biological underpinnings of depression in patients with chronic disease is an important step in deciding upon appropriate diagnosis and treatment strategies.
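For readers less familiar with burden-of-disease terminology, the relationship between these measures follows the standard definition (a general formula, not a figure drawn from the cited Australian data):

```latex
% Total burden is the sum of the fatal and non-fatal components
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}
```

The statement above that YLD rather than YLL dominates therefore means that most of the DALYs attributed to mental illness arise from living with disability rather than from premature death.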

Role of IL-6 in the body

Cytokines are intercellular signalling polypeptides produced by activated cells of the immune system. Their main function is to coordinate immune responses; however, they also play a key role in providing information regarding immune activity to the brain and neuroendocrine system. [13] Interleukin-6 is a proinflammatory cytokine primarily secreted by macrophages in response to pathogens. [14] Along with interleukin-1 (IL-1) and tumour necrosis factor-alpha (TNF-α), IL-6 plays a major role in fever induction and initiation of the acute-phase response. [14] The latter response involves a shi

Review Articles

Where to from here for Australian childhood obesity?

Aim: At least one in twenty Australian school children is obese. [1] The causes and consequences of childhood obesity are well documented. This article examines the current literature on obesity management in school-aged Australian children. Methods: A systematic review was undertaken to examine the efficacy of weight management strategies for obese Australian school-aged children. Search strategies were implemented in the Medline and Pubmed databases. The inclusion criteria required original data of Australian origin, school-aged children (4 to 18 years), BMI-defined populations and publication within the period January 2005 to July 2011. Reviews, editorials and publications with an inappropriate focus were excluded. Thirteen publications were analysed. Results: Nine of the thirteen papers reviewed focused on general practitioner (GP) mediated interventions, with the remainder utilising community, school or tertiary hospital management. Limitations identified in GP-led interventions included difficulties recognising obese children, difficulty discussing obesity with families, poor financial reward, time constraints and a lack of proven management strategies. A school-based program was investigated but was found to be ineffective in reducing obesity. Successful community-based strategies focused on parent-centred dietary modifications or exercise alterations in children. Conclusion: Obesity-specific management programs for children are scarce in Australia. As obesity remains a significant problem in Australia, this topic warrants further focus and investigation.

Introduction

In many countries the level of childhood obesity is rising. [2] Whilst the popular press has painted Australia as being in a similar situation, research has failed to identify significant increases in the level of childhood obesity since 1997 and, in fact, recent data suggest a small decrease. [2,3] Nonetheless, an estimated four to nine percent of school-aged children are obese. [1,4] Consequently, the Australian government has pledged to reduce the prevalence of childhood obesity. [5]

In this review, articles defined Body Mass Index (BMI) as weight (in kilograms) divided by the square of height (in metres). [1] BMI was then compared to age- and gender-specific international cut-off points. [6] Obesity was defined as a BMI at or above the 95th percentile for children of the same age and gender. [6] The subjects of this review, Australian school-aged children, were defined as those aged 4 to 18 years in order to include most children from preschool to the completion of secondary school throughout Australia. As evidence suggests that obese individuals have significantly worse outcomes than overweight children, this review focused on obesity rather than on overweight and obese individuals combined. [1]
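As a concrete illustration of the definition used in the reviewed articles (the child’s measurements below are hypothetical and serve only to show the arithmetic):

```latex
% BMI is weight in kilograms divided by the square of height in metres
\mathrm{BMI} = \frac{\text{weight (kg)}}{\text{height (m)}^2},
\qquad \text{e.g.}\quad
\mathrm{BMI} = \frac{45\ \text{kg}}{(1.40\ \text{m})^2} \approx 23.0\ \text{kg/m}^2
```

Whether such a child is classified as obese then depends on comparing this value with the age- and gender-specific 95th percentile cut-off, not on the raw figure alone.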

The aim of this article was to examine the recent Australian literature on childhood obesity management strategies.

Background

Causes of obesity

A myriad of causes of childhood obesity are well established in the literature. Family and culture influence a child’s eating habits, their level of physical activity and, ultimately, their weight status. [4,7,8] Parental attributes, such as maternal obesity and dismissive or disengaged fathers, also play a role. [9] Notably, maternal depression and inappropriate parenting styles appear to have little effect on obesity. [10] Children from a lower socio-economic status (SES) background are at greater risk of being obese. [9,11-13]

Culture and genetic inheritance also influence a child’s chance of being obese. [8] Evidence suggests that culture influences an individual’s beliefs regarding body size, food and exercise. [7,14] O’Dea (2008) found that Australian children of European and Asian descent had higher rates of obesity when compared with those of Pacific Islander or Middle Eastern heritage. [8] Interestingly, there is conflicting evidence as to whether being an Indigenous Australian is an independent risk factor for childhood obesity. [7,9]

A child’s nutritional knowledge has little impact on their weight: several authors have shown that while obese and non-obese children have different eating styles, they possess a similar level of knowledge about food. [4,13] Children with a higher BMI had lower quality breakfasts and were more likely to omit meals in comparison to normal weight children. [4,7,13]

The environment in which a child lives may plausibly affect their weight status; however, the existing literature suggests that the built environment has little influence on dietary intake, physical activity or, ultimately, weight status, [15,16] and research in this area remains limited.

Consequences of obesity

Obesity significantly impacts a child’s health, resulting in poorer physical and social outcomes. [4,17] Obese children are at greater risk of becoming obese in adulthood; [4,18] Venn et al. (2008) estimated that obese children are at a four- to nine-fold greater risk of becoming obese adults. [18] Furthermore, obese children have an increased risk of acquiring type 2 diabetes, sleep apnoea, fatty liver disease, arthritis and cardiovascular disease. [4,19]

An individual’s social health is detrimentally affected by childhood obesity. Obese children have significantly lower self-worth, body image and perceived social acceptance amongst their peers, [7,20,21] and overall social functioning is reduced in obese children. [17] Interestingly, some studies identify no difference in rates of mental illness or emotional functioning between obese and non-obese children. [12,17,22,23]

Method

Using Medline and Pubmed, searches were undertaken with the following MeSH terms: child, obesity and Australia. Review and editorial publication types were excluded, as only original data were sought for analysis. Further limits to the search included literature available in English, a focus on school-aged children from 4 to 18 years, articles which defined obesity in their population using BMI, publications which addressed the research question (management of childhood obesity), and recent literature. Recent literature was defined as articles published from 1 January 2005 until 31 July 2011. This restriction was placed in part due to resource constraints, but January 2005 was specifically chosen as it marked the introduction of several Australian government strategies to reduce childhood obesity. [5] An illustrative version of such a search is sketched below.
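The sketch below shows how a search with these limits might be expressed programmatically; it is an illustrative reconstruction based on the stated MeSH terms, language and date limits, not the authors’ actual query, and the email address and result cap are assumptions.

```python
# Illustrative reconstruction of the review's PubMed search limits using Biopython's
# Entrez module; the query string mirrors the MeSH terms and limits described above.
from Bio import Entrez

Entrez.email = "reviewer@example.com"  # hypothetical address; NCBI requires one

query = (
    '"child"[MeSH Terms] AND "obesity"[MeSH Terms] AND "australia"[MeSH Terms] '
    "AND english[lang] "
    "NOT review[Publication Type] NOT editorial[Publication Type] "
    'AND ("2005/01/01"[PDAT] : "2011/07/31"[PDAT])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=300)  # cap chosen arbitrarily
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # number of hits, which were then screened manually
print(record["IdList"])  # PubMed IDs retained for abstract screening
```

Running the query through Entrez simply automates the same search a reviewer would otherwise type into the PubMed interface.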

In total, 280 publications were identified in the Pubmed and Medline searches. The abstracts of these articles were manually assessed by the investigator for relevance to the research question and against the inclusion and exclusion criteria described above. As a result of inappropriate topic focus, population, publication type, publication date or repetition, 265 articles were excluded. Ten articles were identified as pertinent via Pubmed; Medline searches revealed five relevant articles, all of which were duplicated in the Pubmed search. Hence, ten publications were examined. Additionally, a search of the relevant publications’ reference lists identified three further articles for analysis. Subsequently, this paper reviews thirteen articles.

Publications included in this study were either randomised controlled trials or cross-sectional analyses. The papers collected data from a variety of sources, including children, parents, clinicians and simulated patients. Consequently, population sizes varied greatly throughout the literature.

Results

Much of the Australian literature on childhood weight management does not specifically focus on the obese; instead, it combines the outcomes of obese and overweight children, sometimes including normal weight children.

Thirteen intervention articles were identified in the literature: nine employed GP-mediated interventions, with the remainder using community-based, school-based or tertiary hospital mediated obesity management.

General practitioner intervention

The National Health and Medical Research Council (NHMRC) guidelines recommend biannual anthropometric screening for children; however, many studies illustrate that few GPs regularly weigh and measure children. [24,25] Whilst Dettori et al. (2009) reported that 79% of GPs interviewed measure children’s weight and height, only half of their respondents regularly converted these figures to determine whether a child was obese. [26] A possible reason for the low rates of BMI calculation may be that many GPs find it difficult to initiate discussions about weight status in children. [24-27] A number of authors have identified that some GPs fear losing business, or alienating or offending their clients. [24,25,27]

There was wide variability in the tools GPs used to screen children, which may ultimately have led to incorrect weight classifications. [24] Spurrier et al. (2006) investigated this further, identifying that GPs may use visual cues to identify normal weight children; however, using visual cues alone, GPs are not always able to distinguish an obese from an overweight child, or an overweight from a normal weight child. [28] Hence, GPs may fail to identify obese children if appropriate anthropometric testing is not performed.

There is mixed evidence regarding the willingness of GPs to manage obese children. McMeniman et al. (2007) identified that GPs felt there was a lack of clear management guidelines, with the majority of participants feeling they would not be able to successfully treat an obese child. [27] Some studies identified that GPs see their role as gatekeeper for allied health intervention. [24,25] Another study showed that GPs preferred shared care, in which they provided the primary support for obese children, offering advice on nutrition, weight and exercise, whilst also referring on to other health professionals such as nutritionists, dieticians and physicians. [11]

Other factors impeding GP-managed programs are time and financial constraints. The treatment of childhood obesity in general practice is time consuming. [11,26,27] Similarly, McMeniman et al. [27] highlighted that the majority of respondents (75%) felt there was not adequate financial incentive to identify and manage obese children.

Evidence suggests that providing education to GPs on identifying and managing obesity could be useful in building their confidence. [26] One publication found that, after receiving education, over half of GPs were better able to identify obese children. [26] Similarly, Gerner et al. (2010) illustrated, using simulated patients, that GPs felt their competence in the management of obese children had improved. [29] In the Live, Eat and Play (LEAP) trial, patient outcomes at nine months were compared with GPs’ self-rated competence, simulated patient ratings and parent ratings of consultations. [29] Interestingly, simulated patient ratings were shown to be a good predictor of real patient outcomes, with higher simulated patient marks correlating with a larger drop in a child’s BMI. [29]

Unfortunately, no trials demonstrated an effective GP-led childhood obesity management strategy. The LEAP trial, a twelve week GP-mediated intervention focused on nutrition, physical exercise and the reduction of sedentary behaviour, failed to show any significant decrease in BMI in the intervention group compared with the control group. [30] Notably, the LEAP trial did not separate the data of obese and non-obese children. [30]

Further analysis of the LEAP trial illustrated that the program was expensive, with the cost to an intervention family being $4094 greater than that to a control family. [31] This is a significant burden on families, with an additional fiscal burden of $873 per family falling on the health sector. [31] Whilst these amounts are likely to be inflated by the small number of children involved, program delivery is costly for both families and the health care sector. [31]

Community-based programs

Literature describing community-based obesity reduction was sparse. Two publications were identified, both pertaining to the HICKUP trial. These articles illustrated that a parent-centred dietary program and a child-focused exercise approach can be efficacious in weight reduction in a population of children that includes the obese. [32,33] In this randomised controlled trial, children were divided into three groups: i) a parent-focused dietary program, ii) child-centred exercise, and iii) a combination of the two. [32,33] The dietary program focused on improving parenting skills to produce behavioural change in children, whilst the physical activity program involved improving children’s fundamental movement skills and competence. [32,33] A significant limitation of the study was that children were identified through responses to advertising in school newsletters and GP practices, exposing the investigation to volunteer bias. Additionally, the outcome data in these studies did not delineate obese children from overweight or normal weight children.

School-based programs

Evidence suggests that an education and exercise-based program can be implemented into a school system. [34] The Peralta et al. (2009) intervention involved a small sample of twelve to thirteen year old boys who were either normal weight, overweight or obese, and were randomised to a control or intervention group. [34] The program’s curriculum focused on education as well as increasing physical activity. Education sessions were based on dietary awareness, goal se

Feature Articles

Putting awareness to bed: improving depth of anaesthesia monitoring

Intraoperative awareness and subsequent explicit recall can lead to prolonged psychological damage in patients. There are many methods currently in place to prevent this potentially traumatic phenomenon from occurring, including identifying haemodynamic changes in the patient, monitoring volatile anaesthetic concentration, and using various electroencephalographic algorithms that correlate with a particular level of consciousness. Unfortunately, none of these methods is without limitations.

Introduction

Intraoperative awareness is defined by both consciousness and explicit memory of surgical events. [1] A number of risk factors, both surgical and patient-related, predispose patients to this phenomenon. Procedures in which the anaesthetic dose is low, such as caesarean sections, trauma and cardiac surgery, have been associated with a higher incidence, as have low cardiac reserve and resistance to some agents. [2] A small number of cases are also due to a lack of anaesthetist vigilance, with administration of incorrect drugs or failure to recognise equipment malfunction. [2] Ultimately it is largely an iatrogenic complication due to the administration of inadequate levels of anaesthetic drugs. Most cases of awareness are inconsequential, with patients not experiencing pain but rather having auditory recall of the experience, which is usually not distressing. [3] In some cases, however, patients experience and recall pain, which can have disastrous, long-term consequences. Awareness has a high association with post-operative psychosomatic dysfunction, including depression and post-traumatic stress disorder, [4] and is a major medico-legal liability. Though awareness is infrequent, estimated to occur in 1-2 cases per 1000 patients undergoing general anaesthesia in developed countries, [1] the sequelae of experiencing such an event necessitate the development and implementation of a highly sensitive monitoring system to prevent it from occurring.

Measuring depth of anaesthesia

1. Monitoring clinical signs

Adequate depth of anaesthesia occurs when the administration of anaesthetic agents is sufficient to allow the conduct of surgery whilst ensuring the patient is unconscious. There are both subjective and objective methods of monitoring this depth. [5] Subjective methods rely primarily on the patient’s autonomic response to a nociceptive stimulus. [5] Signs such as hypertension, tachycardia, sweating, lacrimation and mydriasis indicate a possible lightening of anaesthesia. [5] Such signs, however, are not specific, as they can result from other factors that cause haemodynamic changes, such as haemorrhage. Additionally, patient body habitus, autonomic tone and medications (in particular beta-adrenergic blockers and calcium channel antagonists) can also affect the patient haemodynamically. [5] Consequently, the patient’s autonomic response is a poor indicator of depth of anaesthesia; [6] the presence of haemodynamic change in response to a surgical incision does not indicate awareness, nor does the absence of autonomic response exclude it. [5]

Patient movement remains an important sign of inadequate depth of anaesthesia, but it is often suppressed by the administration of neuromuscular blocking drugs. [1] This consequent paralysis can be overcome with the ‘isolated forearm technique’, in which a tourniquet is placed on an arm of the patient prior to administration of a muscle relaxant and inflated above systolic pressure, excluding the relaxant from the limb and retaining neuromuscular function. The patient is then instructed to move that arm during the surgery if they begin to feel pain. [5] Though this technique is effective in monitoring depth of anaesthesia, it has not been adopted into clinical practice. [7] Furthermore, patient movement and autonomic signs may reflect the analgesic rather than the hypnotic component of anaesthesia and thus are not an accurate measure of consciousness. [8]

2. Minimum Alveolar Concentration (MAC)

The unreliable nature of subjective methods for assessing depth of anaesthesia has seen the development and implementation of various objective methods, which rely on the sensitivity of monitors. The measurement of end-tidal volatile anaesthetic agent concentration, expressed relative to the MAC, has become a standard component of modern anaesthetic regimens. MAC is defined as the concentration of inhaled anaesthetic required to prevent 50% of subjects from responding to a noxious stimulus. [9] It is recommended that administration of at least 0.5 MAC of volatile anaesthetic should reliably prevent intra-operative awareness. [10]
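To make the 0.5 MAC recommendation concrete, the delivered dose can be expressed as a fraction of the agent’s MAC; the agent value below is hypothetical and chosen only to keep the arithmetic simple.

```latex
% Delivered dose expressed as a fraction of the agent's MAC
\text{MAC fraction} = \frac{\text{measured end-tidal concentration}}{\text{MAC of the agent}},
\qquad \text{e.g.}\quad
\frac{1.0\ \text{vol\%}}{2.0\ \text{vol\%}} = 0.5\ \text{MAC}
```

The limitations discussed next concern how reliable that simple calculation is for an individual patient.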

Unfortunately the MAC is affected by a number of factors, making it difficult to determine a concentration that will reliably prevent awareness. Patient age is the major determinant of the amount of inhalational anaesthetic required, and altered physiological states such as pregnancy, anaemia, alcoholism, hypoxaemia and abnormal body temperature also change requirements. [11] Most importantly, the administration of opioids and ketamine, both commonly included in the anaesthetic regimen, severely limits the ability of the gas analyser-derived MAC to indicate anaesthetic adequacy. [12] Further, the MAC reflects inhalational anaesthetic concentration, not effect. The suppression of the response to noxious stimuli under volatile anaesthesia is mediated largely through the spinal cord, and thus does not accurately reflect cortical function or the penetration of the anaesthetic into the brain. [13] Another major limitation of gas analysers is their limited reliability when intravenous anaesthesia is used. Simultaneous administration of intravenous anaesthetic agents is extremely common, and in many cases total intravenous anaesthesia is used; in such cases the MAC is not applicable.

3. Electroencephalogram (EEG) and derived indices

Bispectral Index (BIS)

Advances in technology have led to the development of processed electroencephalographic modalities and their use as parameters for assessing depth of anaesthesia, the most widely used being the BIS monitor. The BIS monitor uses algorithmic analysis of a patient’s EEG to produce a single number from 0 to 100, which correlates with a particular level of consciousness. [5,14] For general anaesthesia, a value of 40-60 is recommended. [14] The establishment of this monitor at first seemed promising, with the publication of several studies advocating its use in preventing awareness. The first of these, conducted by Ekman et al, [15] found a substantial decrease in the incidence of awareness when the BIS monitor was used. In this study, however, patients were not randomly allocated to the control and BIS monitoring groups, so the results are subject to a high degree of bias and cannot be reliably interpreted. The second study, the B-Aware trial, [16] also found that BIS-guided anaesthesia reduced awareness in high-risk patients; however, despite its sound design, subsequent studies failed to reproduce this result. One prominent study, the B-Unaware trial, [17] compared BIS monitoring with the more traditional analysis of end-tidal anaesthetic gas concentrations for assessing depth of anaesthesia during surgery on high-risk patients. It failed to show a significant reduction in the incidence of awareness with BIS monitoring; however, a major criticism of this study is that the criteria used to classify patients as ‘high-risk’ were less stringent than those used in the B-Aware trial, which likely biased the results. Also, given the low incidence of awareness, a larger number of study subjects would be required to demonstrate any significant reduction.

The BIS monitor also has several practical issues that further question its efficacy in monitoring consciousness. It is subject to electrical interference from the theatre environment, particularly from electromyography, diathermy and direct vibration. [14] This is more likely when the surgical field is near the BIS electrode (such as surgery involving the facial muscles), which falsely elevates BIS values and may lead to excess administration of anaesthesia. [14] As with the MAC, standard BIS scores are not applicable to all patient populations, particularly patients with abnormal EEGs – those with dementia, head injuries, previous cardiac arrest, or hypo- or hyperthermia. [1] In such cases, the BIS value may underestimate the depth of anaesthesia, leading to the administration of excess anaesthetic and a deeper level of anaesthesia than required. Further, as the molecular actions of the various anaesthetic agents differ, the consequent EEG changes are not uniform. In particular, the BIS monitor cannot accurately assess changes in consciousness when the patient is administered ketamine [18] or nitrous oxide, [19] both commonly used agents.

Despite these practical shortcomings, the BIS monitor offers substantial benefits that should be incorporated into future depth of anaesthesia monitors. The BIS monitor helps anaesthetists to titrate the anaesthetic dose for the patient, [5] and to adjust it throughout the surgery to keep the patient within the recommended range for general anaesthesia without administering excess agent. This results in less haemodynamic disturbance, faster recovery times and fewer post-operative side effects. [20] A meta-analysis found that BIS monitoring significantly reduced anaesthetic consumption by 10%, reduced the incidence of nausea and vomiting by 23% and reduced time in the recovery room by four minutes. [21] This may also offer a cost benefit, as less anaesthetic is required during surgery.
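The titration logic described above reduces to a simple decision rule: readings above the recommended 40-60 band prompt consideration of more agent, and readings below it prompt consideration of less. The sketch below is a hypothetical illustration of that rule only; it is not a clinical algorithm and ignores the interference and patient-population caveats discussed above.

```python
# Minimal sketch of the BIS-guided titration logic described in the text:
# keep the index within the recommended 40-60 band for general anaesthesia.
# Purely illustrative; real titration decisions rest with the anaesthetist.

TARGET_LOW, TARGET_HIGH = 40, 60


def interpret_bis(bis_value: float) -> str:
    """Map a BIS reading onto the recommended band for general anaesthesia."""
    if bis_value > TARGET_HIGH:
        return "above target band - anaesthesia may be too light; consider increasing agent"
    if bis_value < TARGET_LOW:
        return "below target band - anaesthesia may be deeper than required; consider reducing agent"
    return "within recommended band (40-60) for general anaesthesia"


if __name__ == "__main__":
    for reading in (75, 52, 31):  # hypothetical readings
        print(reading, "->", interpret_bis(reading))
```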

Despite the aforementioned advantages of using the MAC and BIS monitor to assess consciousness during surgery, the major inadequacy of both methods is that they measure only the hypnotic element of anaesthesia. [8] Anaesthetic depth is in fact a complex construct of several components, including hypnosis, analgesia, amnesia and reflex suppression. [8] Different anaesthetic agents have varying effects across these areas; some can be administered alone, while others have properties in only one area and must be used in conjunction with other pharmacological agents to achieve anaesthesia. [8] If only the hypnotic component of anaesthesia is monitored, optimal drug delivery is difficult and insufficient analgesia may go unnoticed. Thus the MAC and BIS monitors can be used to monitor hypnosis and sedation, but have little role in predicting the quality of analgesia or patient movement mediated by spinal reflexes.

Entropy

Entropy monitoring is based on the acquisition and processing of EEG and electromyogram (EMG) signals using the entropy algorithm. [22] It relies on the observation that the irregularity within an EEG signal decreases as the anaesthetic concentration in the brain rises. As with the BIS, the signal is captured via a sensor mounted on the patient’s forehead, and the monitor produces two numbers between 0 and 100 – the response entropy (RE) and the state entropy (SE). The RE incorporates higher-frequency components (including EMG activity), allowing a faster response from the monitor to changes in clinical state. [22] Numbers close to 100 suggest consciousness, whereas numbers close to 0 indicate a very deep level of anaesthesia. The ideal values for general anaesthesia lie between 40 and 60. [22] Studies have shown that entropy monitoring measures the level of consciousness as reliably as the BIS, and is subject to less electrical interference during the intraoperative period. [23]
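Because RE incorporates the higher-frequency, EMG-containing part of the signal while SE does not, a widening gap between the two numbers is commonly read as frontal muscle activity and a possible early sign of lightening anaesthesia. The sketch below illustrates that interpretation under stated assumptions; the gap threshold of 10 is a hypothetical illustrative value, not a manufacturer specification.

```python
# Minimal sketch of entropy interpretation as described in the text:
# SE and RE both scaled 0-100, target band 40-60 for general anaesthesia,
# with an RE-SE gap taken as a sign of EMG activity. The gap threshold used
# here (10) is an illustrative assumption, not a device specification.

TARGET_LOW, TARGET_HIGH = 40, 60
GAP_THRESHOLD = 10  # hypothetical illustrative cut-off


def interpret_entropy(state_entropy: float, response_entropy: float) -> list:
    """Return simple textual interpretations of an SE/RE pair."""
    notes = []
    if TARGET_LOW <= state_entropy <= TARGET_HIGH:
        notes.append("SE within the 40-60 band recommended for general anaesthesia")
    elif state_entropy > TARGET_HIGH:
        notes.append("SE high - patient may be approaching consciousness")
    else:
        notes.append("SE low - very deep level of anaesthesia")

    if response_entropy - state_entropy > GAP_THRESHOLD:
        notes.append("RE-SE gap is wide - possible EMG activity / early arousal")
    return notes


if __name__ == "__main__":
    print(interpret_entropy(state_entropy=48, response_entropy=50))  # hypothetical values
    print(interpret_entropy(state_entropy=62, response_entropy=78))
```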

Evoked potentials

Alternative approaches such as evoked potentials, which monitor the electrical potential of nerves following a stimulus, have also demonstrated a clear dose-response relationship with increasing anaesthetic administration. [14,24] In particular, auditory evoked potentials (in which the response to auditory canal stimulation is recorded) have led to the development of the auditory evoked potential index. This index has been shown to have greater sensitivity than the BIS monitor in detecting unconsciousness. [24] Unfortunately, using evoked potentials to monitor depth of anaesthesia is a complex process, and, as with the BIS, many artefacts can interfere with the EEG reading. [14,24]

Brain Anaesthesia Response (BAR) Monitor

New electroencephalographically derived algorithms have been developed that define the patient’s hypnotic and analgesic states individually. [25,26] This is essential in cases where combinations of anaesthetic agents with separate sedative and analgesic properties are used. Dr David Liley, Associate Professor at the Brain Sciences Institute, Swinburne University of Technology, began a research project a decade ago with the aim of producing such a means of assessing consciousness, and subsequently pioneered the Brain Anaesthesia Response (BAR) monitor. [25] Liley initially analysed EEG data from 45 patients in Belgium who were administered both propofol (a hypnotic agent) and remifentanil (an analgesic agent) as part of their anaesthetic regimen. Two measures were derived from the EEG to gauge the brain’s response to the anaesthetic agents – cortical state (which measures brain responsiveness to stimuli) and cortical input (which quantifies the strength of each stimulus that reaches the brain). He was able to detect the effects of the drugs separately: cortical state reflected changes due to the hypnotic agent, and cortical input reflected changes in the level of analgesia; from this, the BAR algorithm was developed. [25] Its use will allow anaesthetists to determine which class of drug needs adjustment, and to titrate it accordingly. It is suggested that the BAR monitor will narrow the range of exclusion criteria that limit previously mentioned indices such as the BIS and entropy. [25,26] The monitor has an improved ability to detect the effects of a number of drugs that are not effectively measured using the BIS monitor, for example ketamine and nitrous oxide. [25] The capacity to titrate anaesthetics specifically and accurately would improve drug delivery, not only reducing the likelihood of intra-operative awareness but also avoiding over- or under-sedation. This in turn might reduce the side effects associated with excess anaesthetic administration and improve post-operative recovery. The BAR monitor is currently undergoing trials at the Royal Melbourne Hospital under Professor Kate Leslie, and at St Vincent’s Hospital in Melbourne under Dr Desmond McGlade. [25,26]

Though advances have undoubtedly been made in depth of anaesthesia monitoring, it cannot be emphasised enough that the most important monitor of all is the anaesthetist. A significant proportion of awareness cases are caused by drug error or equipment malfunction, [2,27] and these cases can be prevented by adhering to strict practice guidelines, such as those published by the Australian and New Zealand College of Anaesthetists. [28]

Conclusion

Measuring depth of anaesthesia to prevent intra-operative awareness remains a highly contentious aspect of modern anaesthesia. Current parameters for monitoring consciousness include the observation of clinical signs, the MAC and BIS indices, as well as less commonly used methods such as evoked potentials and entropy. These instruments allow clinicians to accurately titrate anaesthetic agents leading to a subsequent decrease in post-operative side effects and a reduction in awareness among patients at increased risk of this complication. Despite these benefits, all of the current monitors have limitations and there is still no completely reliable method of preventing this potentially traumatising event. What is required now is a parameter or measure that shows minimal inter-patient variability and the capacity to respond consistently to an array of anaesthetic drugs with different molecular formulations. It is important to remember, however, that no monitor can replace the role of the anaesthetist in preventing awareness.

Conflict of interest

None declared.

Correspondence

L Kostos: lkkos1@student.monash.edu

References

[1] Mashour GA, Orser BA, Avidan MS. Intraoperative awareness: From neurobiology to clinical practice. Anesthesiology 2011;114(5):1218-33.
[2] Ghoneim MM, Block RI, Haffarnan M, Mathews MJ. Awareness during anaesthesia: risk factors, causes and sequelae: a review of reported cases in the literature. Anesth Analg 2009; 108:527-35.
[3] Orser BA, Mazer CD, Baker AJ. Awareness during anaesthesia. CMAJ 2008; 178:185–8.
[4] Osterman JE, Hopper J, Heran WJ, Keane TM, Van der Kolk BA. Awareness under anaesthesia and the development of posttraumatic stress disorder. Gen Hosp Psychiatry 2001; 23:198-204.
[5] Kaul HL, Bharti N. Monitoring depth of anaesthesia. Indian J Anaesth 2002;46(4):323-32.
[6] Struys MM, Jensen EW, Smith W, Smith NT, Rampil I, Dumortier FJ et al. Performance of the ARX-derived auditory evoked potential index as an indicator of anesthetic depth: a comparison with bispectral index and hemodynamic measures using propofol administration. Anesthesiology 2002;96:803-16.
[7] Bruhn J, Myles P, Sneyd R, Struys M. Depth of anaesthesia monitoring: what’s available, what’s validated and what’s next? Br J Anaesth 2006; 97:85-94.
[8] Myles PS. Prevention of awareness during anaesthesia. Best Pract Res Clin Anesthesiol 2007; 21(3):345-55.
[9] Eger EI 2nd, Saidman IJ, Brandstater B. Minimum alveolar anaesthetic concentration: a standard of anaesthetic potency. Anesthesiology 1965; 26:756-63.
[10] Eger EI 2nd, Sonner JM. How likely is awareness during anaesthesia? Anaesth Analg 2005; 100:1544.
[11] Eger EI 2nd. Age, minimum alveolar anesthetic concentration and the minimum alveolar anesthetic concentration-awake. Anesth Analg 2001;93:947-53.
[12] Nost R, Thiel-Ritter A, Scholz S, Hempelmann G, Muller M. Balanced anesthesia with remifentanil and desflurane: clinical considerations for dose adjustment in adults. J Opioid Manag 2008;4:305-9.
[13] Rampil IJ, Mason P, Singh H. Anesthetic potency (MAC) is independent of forebrain structures in the rat. Anesthesiology 1993;78:707-12.
[14] Morimoto Y. Usefulness of electroencephalographic monitoring during general anaesthesia. J Anesth 2008;22:498-501.
[15] Ekman A, Lindholm ML, Lennmarken C, Sandin R. Reduction in the incidence of awareness using BIS monitoring. Acta Anaesthesiol Scand 2004;48:20-6.
[16] Myles PS, Leslie K, McNeil J, Forbes A, Chan MT. Bispectral index monitoring to prevent awareness during anaesthesia : the B-Aware randomised controlled trial. Lancet 2004;363:1757-63.
[17] Avidan M, Shang L, Burnside BA, Finkel KJ, Searleman AC, Selvidge JA et al. Anesthesia awareness and the bispectral index. N Engl J Med 2008;358:1097-1108.
[18] Morioka N, Ozaki M, Matsukawa T, Sessler D, Atarashi K, Suzuki H. Ketamine causes a paradoxical increase in the Bispectral index. Anesthesiology 1997;87:502.
[19] Puri GD. Paradoxical changes in bispectral index during nitrous oxide administration. Br J Anesth 2001;86:141-2.
[20] Sebel PS, Rampil I, Cork R, White P, Smith NT, Brull S, et al. Bispectral analysis for monitoring anaesthesia – a multicentre study. Anesthesiology 1993;79:178.
[21] Liu SS. Effects of bispectral index monitoring on ambulatory anaesthesia: a meta-analysis of randomized controlled trials and a cost analysis. Anesthesiology 2004;101:591-602.
[22] Bein B. Entropy. Best Pract Res Clin Anaesthesiol 2006;20:101-9.
[23] Baulig W, Seifert B, Schmid E, Schwarz U. Comparison of spectral entropy and bispectral index electroencephalography in coronary artery bypass graft surgery. J Cardiothorac Vasc Anesth 2010;24:544-9.
[24] Gajraj RJ, Doi M, Mantzaridis H, Kenny GNC. Analysis of the EEG bispectrum, auditory evoked potentials and EEG power spectrum during repeated transitions from consciousness to unconsciousness. Br J Anaesth 1998;80:46-52.
[25] Thoo M. Brain monitor puts patients at ease. Swinburne Magazine 2011 Mar 17;6-7.
[26] Breeze D. Ethics approval obtained for BAR monitor trial in Melbourne. Cortical Dynamics Ltd. 2011 Nov;1.
[27] Orser B, Mazer C, Baker A. Awareness during anaesthesia. CMAJ 2008;178(2):185–8.
[28] Australian and New Zealand College of Anaesthetists. Guidelines on checking anaesthesia delivery systems [document on the Internet]. Melbourne; 2012 [cited 2012 Sept 20]. Available from ANZCA: http://www.anzca.edu.au

Categories
Articles Feature Articles

Doctors’ health and wellbeing: Where do we stand?

Doctors continue to record significant rates of burnout, stress-related illness, substance abuse and suicide, despite greater awareness of these issues in the profession. [1,2] Whilst improved support services have been a positive move, there are underlying systemic issues that must be addressed within the profession.

Physician distress results from a complex interplay of several factors that include a challenging work environment, specific physician characteristics and other contextual factors such as stigma (Figure 1). [3] Specific physician characteristics that may make us prone to stress-related illness include the motivated and driven personality types that many of us possess; these are useful in meeting heavy workloads, but can be detrimental in times of distress. When combined with a great sense of professional obligation to patients, an “admirable but unhealthy tradition of self-sacrifice” can ensue. [4]

Stigma is also a contributing factor, with many doctors concerned about how they will be perceived by others. Common stigmatised attitudes include the fear of being considered weak, concern about registration status and career impact, and the need to appear healthy to patients. [1] Doctors who hold these attitudes are less likely to seek help for their illness or to take time off, a reluctance compounded by the pressure of ‘letting the team down’ when they do. Such attitudes develop early, as a medical student, and are often reinforced later in professional practice by colleagues and supervisors. [5,6]

These factors contribute to a culture within medicine of frequent neglect of preventive health issues. [7] Commonly, there is a reliance on informal care from colleagues (‘corridor consultations’), and many doctors may self-diagnose and self-treat. [8] While this might suffice for minor illnesses, during times of serious distress or mental illness this approach may lead to late or suboptimal treatment, a poor prognosis, or relapse. [6]

In the past, little effort has been made to promote prevention, wellbeing and appropriate self-care, particularly in the early stages of the profession such as during medical school. Current undergraduate medical curricula focus almost exclusively on the acquisition of clinical knowledge, with a clear deficit in the development of self-care skills and an understanding of the personal challenges of the profession. [5] This is increasingly evident in new graduates, with 38% of Australian junior doctors recently reporting that they were unprepared for life as a doctor and 17% reporting that they would not choose medicine as a career again, given the choice. [8]

With a suicide rate up to two and a half times greater than the general population, a culture of self-care and wellbeing in the profession needs to be nurtured to ensure a more resilient medical workforce. [1,5]

So where do we stand?

The doctors’ wellbeing movement has had strong leadership through individual doctors and small groups such as Doctors’ Health Services. [7] In South Australia, ‘Doctors’ Health SA’ has developed into a fully independent, profession-controlled organisation that acts as a focal point for doctors’ health and provides clinical services in the central business district for doctors and medical students. The program offers comprehensive after-hours check-ups and easier access to a state-wide network of general practitioners and health professionals associated with the program. Similar programs are in development in other states. [8]

Medical student groups have also played important roles in health promotion and advocacy for student welfare needs. The Australian Medical Students’ Association has focussed heavily on medical student health and wellbeing in recent years, developing policy and resources to support student wellbeing. [9] Student-run wellbeing events are also now commonplace at most medical schools around Australia. It is essential that medical educators also play a role in promoting student wellbeing; Monash University has been a leader in this area, incorporating a ‘Health Enhancement Program’ into its core medical curriculum, which aims to teach students about the relevance of mental and physical health in medicine. Further examples of initiatives aimed at students are listed in Table 1. [5]

Table 1. Summary of interventions to improve medical student wellbeing and health seeking behavior.

Health Enhancement Program (Monash University School of Medicine), Australia. Evidence level: III-1* [15]
Aims: Foster behaviours, skills, attitudes and knowledge of self-care strategies for managing stress and maintaining a healthy lifestyle, and understanding of the mind-body relationship.
Intervention: Eight lectures on mental and physical health, mind-body medicine, behaviour change strategies, mindfulness therapies, and the ESSENCE lifestyle program, supported by six two-hour tutorials.
Evaluation: Depression, anxiety and hostility scales of the Symptom Checklist-90-R, incorporating the Global Severity Index (GSI), and the WHO Quality of Life (WHOQOL) questionnaire to measure effects on wellbeing.
Results: Improved student wellbeing was noted for the depression and hostility subscales but not the anxiety subscale.

Mental Health in Medicine Seminar (Flinders University Medical Students Society), Australia. Evidence level: III-3*
Aims: Foster behaviours, skills, attitudes and knowledge of self-care strategies for managing stress and maintaining a healthy lifestyle, and understanding of the mind-body relationship.
Intervention: Half-day didactic seminar discussing epidemiology, stigmatising attitudes, causes, risk factors, signs and symptoms of depression, stress management, and support avenues as a student and physician.
Evaluation: Pre/post intervention survey to assess changes in mental health literacy (knowledge of and attitudes towards depression and help-seeking behaviour), based on the International Depression Literacy Survey.
Results: Results pending at time of publication.

Student Well-Being Program (SWBP) (West Virginia University School of Medicine), United States. Evidence level: III-3* [16]
Aims: Prevention and treatment of medical student impairment.
Intervention: Voluntary lunch-hour lectures (six lectures over a six-month period) for first and second year students addressing various aspects of wellbeing.
Evaluation: Post-intervention questionnaire distributed to 94 students assessing perceptions of depression, academic difficulties, substance abuse and health-seeking behaviour.
Results: Participants who had one or more symptoms of impairment were more likely to feel a need for counselling and to seek help.

Physician Life-style Management Elective (Wright State University School of Medicine), United States. Evidence level: III-3* [17]
Aims: Enhance the quality of medical student life-planning as a future physician and prevent physician disability.
Intervention: Voluntary two-week elective (lectures) for first year students focusing on physician health, practice management, relationships, and physician disability.
Evaluation: Ratings of each didactic session were collected from seventeen first year medical students.
Results: Students rated sessions on the residency experience highest, followed by assertiveness training, then emotional health management.

Wellness Elective (Case Western Reserve University School of Medicine), United States. Evidence level: III-3* [19]
Aims: Provide students with information on wellness, stress reduction, and coping strategies.
Intervention: Series of six weekly lectures from medical and allied health professionals on wellness, coping strategies and stress reduction.
Evaluation: Essay review and a questionnaire administered after the elective concluded.
Results: Participants reported that the elective helped them realise the importance of personal wellbeing and self-care, and provided a variety of coping strategies.

Self-care intervention (Indiana University School of Medicine), United States. Evidence level: III-3* [18]
Aims: Promote positive health habits and emotional adjustment during students’ first semester via self-awareness and self-care interventions.
Intervention: Lecture, written information, and group discussions on emotional adjustment, sleep hygiene, substance use and recognition/management of depression and anxiety.
Evaluation: Survey assessing patterns of sleep, alcohol consumption, depression, exercise, caffeine use, satisfaction with teaching, social life, physical health, emotional health, finances and time management.
Results: Promising effects on patterns of alcohol consumption, exercise and socialisation. Influenced some sleep and exercise behaviours, but not overall emotional or academic adjustment.

*National Health and Medical Research Council levels of evidence. I: Systematic review of randomised controlled trials. II: One properly designed randomised controlled trial. III-1: One well designed pseudo-randomised controlled trial. III-2: Non-randomised trials, case–control and cohort studies. III-3: Studies with historical controls, single-arm studies, or interrupted time series. IV: Case-series evidence

Sadly, the doctors’ health agenda is still lacking within our hospitals, particularly for junior medical staff. Hospitals remain challenging places to work for interns and residents, with variable levels of support from the institutions. Administrative or support staff such as medical education officers may be asked to consider doctors’ health issues, but usually as an add-on to their daily roles rather than as a core component of them. This has led to a sporadic approach towards junior doctor health, with the level of support dependent on individual clinical training staff. The Queen Elizabeth Hospital (SA) has a unique support program for interns, which incorporates five wellbeing sessions throughout the year as part of the weekly education schedule; however, this remains the exception rather than the rule.

For doctors’ health to move forward, it needs to become a mainstream workforce issue within medical education, training and practice. Leadership across each of these areas is important so that we can begin to implement systemic initiatives to facilitate resilience in doctors. One key area of focus should be greater mentoring and peer support, particularly within hospitals. [10,11] Whilst junior medical staff currently work fewer hours than in the past, this has also resulted in less ‘living in’ and reduced opportunities for peer support. Doctors’ common spaces, once typical places for medical staff to debrief with colleagues, are also among the first areas to be given up when hospitals look for more administrative space.

Health promotion also needs to occur across the learning and professional continuum of medical practice. It is essential that medical students and junior doctors are targeted, as this seems to be the time when an acceptance of self-treatment and stigmatised attitudes become entrenched. [6] With a greater awareness of these issues amongst the next generation of doctors, we can gradually shift the culture within the profession. Whilst this is difficult and many of us are set in our ways, it is incumbent upon all of us to have a vision of a medical profession that is strong, vibrant and resilient.

Tips for those who are struggling
Don’t be afraid to tell someone; struggling in medicine is more common than you think.
Don’t rely on alcohol or other drugs to cope. This can have a brief mood-lifting effect but can later cause feelings of depression or anxiety.
Try to eat a healthy diet and stay active.
Keep connected with other people, including a support network outside of medicine.
Seek help early from a friend, teacher, doctor, or counsellor. All states and territories now have specific health services for doctors and medical students.

Conflict of interest

None declared.

Correspondence

M Nguyen: minh.nguyen@flinders.edu.au

Categories
Original Research Articles Articles

Predicting falls in the elderly: do dual-task tests offer any added value? A systematic review

The issue of falls is a significant health concern in geriatric medicine and a major contributor to morbidity and mortality in those over 65 years of age. Gait and balance problems are responsible for up to a quarter of falls in the elderly. It is unclear whether dual-task assessments, which have become increasingly popular in recent years, have any added benefit over single-task assessments in predicting falls. A previous systematic review that included manuscripts published prior to 2006 could not reach a conclusion due to a lack of available data. Therefore, a systematic review was performed on all dual-task material published from 2006 to 2011 with a focus on fall prediction. The review included all studies published between 2006-2011 and available through PubMed, EMBASE, PsycINFO, CINAHL and the Cochrane Central Register of Controlled Trials databases that satisfied the inclusion and exclusion criteria utilised by the previous systematic review. A total of sixteen articles met the inclusion criteria and were analysed for qualitative and quantitative results. A majority of the studies demonstrated that poor performance during dual-task assessments was associated with a higher risk of falls in the elderly. Only three of the sixteen articles provided statistical data for comparison of single- and dual-task assessments. These studies provided insufficient data to demonstrate whether dual-task tests were superior to single-task tests in predicting falls in the elderly. Further head-to-head studies are required to determine whether dual-task assessments are superior to single-task assessments in their ability to predict future falls in the elderly.

Introduction

Many simple tasks of daily living such as standing, walking or rising from a chair can potentially lead to a fall. Each year one in three people over the age of 65 living at home will experience a fall, with five percent requiring hospitalisation. [1, 2] Gait and balance problems are responsible for 10-25% of falls in the elderly, only surpassed by ‘slips and trips,’ which account for 30-50%. [2] Appropriate clinical evaluation of identifiable gait and balance disturbances, such as lower limb weakness or gait disorders, has been proposed as an efficient and cost-effective practice which can prevent many of these falls. As such, fall prevention programs have placed a strong emphasis on determining a patient’s fall risk by assessing a variety of physiological characteristics. [2, 3]

Dual-task assessments have become increasingly popular in recent years, because they examine the relationship between cognitive function and attentional limitations, that is, a subject’s ability to divide their attention. [4] The accepted model for conducting such tests involves a primary gait or balance task (such as walking at normal pace) performed concurrently with a secondary cognitive or manual task (such as counting backwards). [4, 5] Divided attention whilst walking may manifest as subtle changes in posture, balance or gait. [5, 6] It is these changes that provide potentially clinically significant correlations, for example, detecting changes in balance and gait after an exercise intervention. [5, 6] However, it is unclear whether a patient’s performance during a dual-task assessment has any added benefit over a single-task assessment in predicting falls.

In 2008, Zijlstra et al. [7] produced a systematic review of the literature which attempted to evaluate whether dual-task balance assessments are more sensitive than single balance tasks in predicting falls. It included all studies published up to and including 2006, yet there was insufficient data available for a conclusion to be made. This was followed by a review article by Beauchet et al. [8] in 2009 that included additional studies published up to 2008. These authors concluded that changes in performance while dual-tasking were significantly associated with an increased risk of falling in older adults. The purpose of the present study was to determine, using recently published data, whether dual-task tests of balance and/or gait have any added benefit over single-task tests in predicting falls. A related outcome of the study was to gather data to either support or challenge the use of dual-task assessments in fall prevention programs.

A systematic review of all published material from 2006 to 2011 was performed, focusing on dual-task assessments in the elderly. Inclusion criteria were used to ensure only relevant articles reporting on fall predictions were selected. The method and results of included manuscripts were qualitatively and quantitatively analysed and compared.

Methods

Literature Search

A systematic literature search was performed to identify articles which investigated the relationship between falls in older people and balance/gait under single-task and dual-task conditions. The electronic databases searched were PubMed, EMBASE, PsycINFO, CINAHL and the Cochrane Central Register of Controlled Trials. The search strategy utilised by Zijlstra et al. [7] was followed. Individual search strategies were tailored to each database, adapted from the following strategy used in PubMed:

1. (gait OR walking OR locomotion OR musculoskeletal equilibrium OR posture)

2. (aged OR aged, 80 and over OR aging)

3. #1 AND #2

4. (cognition OR attention OR cognitive task(s) OR attention task(s) OR dual task(s) OR double task paradigm OR second task(s) OR secondary task(s))

5. #3 AND #4

6. #5 AND (humans)

Bold terms are MeSH (Medical Subject Headings) key terms.
The search was performed without language restrictions and results were filtered to produce all publications from 2006 to March 2011 (inclusive). To identify further studies, the author hand-searched reference lists of relevant articles, and searched the Scopus database to identify any newer articles which cited primary articles.

Selection of papers

The process of selecting manuscripts is illustrated in Figure 1. Only articles with publication dates from 2006 onwards were included, as all relevant articles prior to this were already contained in the review by Zijlstra et al. [7] Two independent reviewers then screened article titles for studies that employed a dual-task paradigm – specifically, a gait or balance task coupled with a cognitive or manual task – and included falls data as an outcome measure.

Article abstracts were then appraised to determine whether the dual-task assessment was used appropriately and within the scope of the present study; that is to: (1) predict future falls, or (2) differentiate between fallers and non-fallers based on retrospective data collection of falls. Studies were only considered if subjects’ fall status was determined by actual fall events – the fall definitions stated in individual articles were accepted. Studies were included if participants were aged 65 years and older. Articles which focused on adult participants with a specific medical condition were also included. Studies that reported no results for dual-task assessments were included for descriptive purposes only. Interventional studies which used the dual-task paradigm to detect changes after an intervention were excluded, as were case studies, review articles or studies that used subjective scoring systems to assess dual-task performance.

Analysis of relevant papers

Information on the following aspects was extracted from each article: study design (retrospective or prospective collection of falls), number of subjects (including gender proportion), number of falls required to be classified a ‘faller’, tasks performed and the corresponding measurements used to report outcome, task order and follow up period if appropriate.

Where applicable, each article was also assessed for values and results which allowed comparison between the single and dual-task assessments and their respective falls prediction. The appropriate statistical measures required for such a comparison include sensitivity, specificity, positive and negative predictive values, odds ratios or likelihood ratios. [9] The dual-task cost, or difference in performance between the single and dual-task, was also considered.
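To make the comparison concrete, the measures listed above can be derived from a 2x2 table of predicted versus observed fallers, and the dual-task cost from paired single- and dual-task measurements. The sketch below shows the arithmetic with made-up numbers; it is illustrative only and does not reproduce any included study's data.

```python
# Minimal sketch of the comparison measures used in this review:
# sensitivity, specificity, predictive values and odds ratio from a 2x2 table,
# plus the dual-task cost (relative change from single- to dual-task performance).
# All numbers below are made up for illustration.

def predictive_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2-table measures for a test that classifies fallers."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "odds_ratio": (tp * tn) / (fp * fn),
    }


def dual_task_cost(single_task: float, dual_task: float) -> float:
    """Relative change in performance between single- and dual-task conditions (%)."""
    return (dual_task - single_task) / single_task * 100


if __name__ == "__main__":
    # Hypothetical dual-task test results against observed falls over follow-up.
    print(predictive_measures(tp=28, fp=18, fn=12, tn=42))
    # Hypothetical walking times (s): longer under dual-task gives a positive cost.
    print(f"Dual-task cost: {dual_task_cost(single_task=10.0, dual_task=13.5):.1f}%")
```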

Results

The database search of PubMed, EMBASE, PsycINFO, CINAHL and Cochrane produced 1154, 101, 468, 502 and 84 references respectively. As illustrated in Figure 1, filtering results for publications between 2006-2011 almost halved this to a total of 1215 references. A further 1082 studies were omitted because they were duplicates, did not use a dual-task paradigm, or did not report falls as the outcome.

The 133 articles which remained reported on falls using a dual-task approach, that is, a primary gait or balance task paired with a secondary cognitive task. Final screening was performed to ensure that the mean age of subjects was at least 65, as well as to remove case studies, interventional studies and review articles. Sixteen studies met the inclusion criteria, nine retrospective and seven prospective fall studies, summarised by study design in Tables 1A and 1B respectively.

The number of subjects ranged from 24 to 1038, [10, 11] with half the studies having a sample size of 100 subjects or more. [11-18] Females were typically the dominant participants, comprising over 70% of the subject cohort on nine occasions. [10, 13, 14, 16-21] Eight studies investigated community-dwelling older adults, [10-12, 14, 15, 19, 20, 22] four examined older adults living in senior housing/residential facilities [13, 16-18] and one focused on elderly hospital inpatients. [21] A further three studies exclusively investigated subjects with defined pathologies, specifically progressive supranuclear palsy, [23] stroke [24] and acute brain injury. [25]

Among the nine retrospective studies, the fall rate ranged from 10.0% to 54.2%. [12, 25] Fall rates were determined by actual fall events; five studies required subjects to self-report the number of falls experienced over the preceding twelve months, [10, 12, 20, 23, 24] three studies asked subjects to self-report over the previous six months [13, 22, 25] and one study utilised a history-taking approach, with subjects interviewed independently by two separate clinicians. [19] Classification of subjects as a ‘faller’ varied slightly, with five studies reporting on all fallers (i.e. ≥ 1 fall), [10, 19, 20, 22, 25] three reporting only on recurrent fallers (i.e. ≥ 2 falls), [12, 13, 23] and one which did not specify. [24]

The fall rate for the seven prospective studies ranged from 21.3% to 50.0%. [15, 21] The number of falls per subject was collected during the follow-up period, which was quite uniform at twelve months, [11, 14, 16-18, 21] except for one study which continued data collection for 24 months. [15] The primary outcome measure during the follow-up period was fall rate, based on either the first fall [16-18, 21] or the incidence of falls. [11, 14, 15]

The nature of the primary balance/gait task varied between studies. Five studies investigated more than one type of balance/gait task. [10, 12, 19, 20, 24] Of the sixteen studies, ten required subjects to walk along a straight walkway, nine at normal pace [10, 11, 14, 16-19, 21, 24] and one at fast pace. [23] Three studies incorporated a turn along the walkway [15, 22, 25] and a further study comprised both a straight walk and a separate walk-and-turn. [12] The remaining two studies did not employ a walking task of any kind: one utilised a voluntary step execution test, [13] and the other a Timed Up & Go test and a one-leg balance test. [20]

The type of cognitive/secondary task also varied between studies. All but three studies employed a cognitive task; one used a manual task [19] and two used both a cognitive and a manual task. [11, 14] Cognitive tasks differed greatly and included serial subtractions, [14, 15, 20, 22, 23] backward counting aloud, [11, 16-18, 21] memory tasks, [24, 25] Stroop tasks [10, 13] and visuo-spatial tasks. [12] The single and dual-tasks were performed in a random order in six of the sixteen studies. [10, 12, 16-18, 20]

Thirteen studies recorded walking time or gait parameters as a major outcome. [10-12, 14-17, 19, 21-25] Of all studies, eleven reported that dual-task performance was associated with the occurrence of falls. A further two studies came to the same conclusion, but only in the elderly with high functional capacity [11] or during specific secondary tasks. [14] One prospective [17] and two retrospective studies [20, 25] found no significant association between dual-task performance and falls.

As described in Table 2, ten studies reported figures on the predictive ability of the single and/or dual-task tests; [11-18, 21, 23] some data were obtained from the systematic review by Beauchet et al. [8] The remaining six studies provided no fall prediction data. In predicting falls, dual-task tests had a sensitivity of 70% or greater, except in two studies which reported values of 64.8% [17] and 16.7%. [16] Specificity ranged from 57.1% to 94.3%. [16, 17] Positive predictive values ranged from 38.0% to 86.7%, [17, 23] and negative predictive values from 54.5% to 93.2%. [21, 23] Two studies derived predictive ability from the dual-task ‘cost’, [11, 14] defined as the difference in performance between the single and dual-task tests.

Only three studies provided statistical measures for the fall prediction of the single task and the dual-task individually. [16, 17, 21] Increased walking time during single and dual-task conditions was similarly associated with risk of falling, OR = 1.1 (95% CI, 1.0-1.2) and OR = 1.1 (95% CI, 0.9-1.1), respectively. [17] Variation in stride time also predicted falls, OR = 13.3 (95% CI, 1.6-113.6) and OR = 8.6 (95% CI, 1.9-39.6) in the single and dual-task conditions respectively. [21] Walking speed predicted recurrent falls during single and dual-tasks, OR = 0.96 (95% CI, 0.94-0.99) and OR = 0.60 (95% CI, 0.41-0.85), respectively. [16] The latter study reported that a decrease in walking speed increased the risk of recurrent falls by a factor of 1.67 in the dual-task test compared with 1.04 during the single-task test. All values given in these three studies, for both single and dual-task tests, were interpreted as significant in predicting falls by their respective authors.
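For readers unfamiliar with these measures, odds ratios and their confidence intervals of the kind quoted above are conventionally obtained from the estimate and its standard error on the log-odds scale, with the 95% CI given by exp(ln(OR) ± 1.96 × SE). The sketch below shows that calculation for a 2x2 table with made-up counts; it is not a re-analysis of any of the cited studies.

```python
# Minimal sketch: odds ratio and 95% confidence interval from a 2x2 table
# (fallers vs non-fallers by test result), using the standard log-odds
# standard error. Counts are made up for illustration only.
import math


def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR and 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)


if __name__ == "__main__":
    # Hypothetical counts: impaired vs normal dual-task performance by fall status.
    or_, (lo, hi) = odds_ratio_ci(a=30, b=20, c=15, d=45)
    print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```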

Discussion

Only three prospective studies directly compared the individual predictive values of the single and dual-task tests. The first such study concluded that the dual-task test was in fact equivalent to the single-task test in predicting falls. [17] This particular study also reported the lowest positive predictive value of all dual-task tests at 38%. The second study [21] also reported similar predictive values for the single and dual-task assessments, as well as a relatively low positive predictive value of 53.9%. Given that all other studies reported higher predictive values, it may be postulated that at the very least, dual-task tests are comparable to single-task tests in predicting falls. Furthermore, the two studies focused on subjects from senior housing facilities and hospital inpatients (187 and 57 participants respectively), and therefore results may not represent all elderly community-dwelling individuals. The third study [16] concluded that subjects who walked slower during the single-task assessment would be 1.04 times more likely to experience recurrent falls than subjects who walked faster. However, after a poor performance in the dual-task assessment, their risk may be increased to 1.67. This suggests that the dual-task assessment can offer a more accurate figure on risk of falling. Again, participants tested in this study were recruited from senior housing facilities, and thus results may not be directly applicable to the community-dwelling older adult.

Eight studies focused on community-dwelling participants, and all but one [20] suggested that dual-task performance was associated with falls. Evidence that dual-task assessments may be more suitable for fall prediction in the elderly who are healthier and/or living in the community, as opposed to those in poorer health, is provided by Yamada et al. [11] Participants were subdivided into groups by the results of a Timed Up & Go test, separating the ‘frail’ from the ‘robust’. It was found that the dual-task assessments were associated with falls only in groups with a higher functional capacity. This intra-cohort variability may account, at least in part, for why three studies included in this review concluded that there was no benefit in performing dual-task assessments. [17, 20, 25] These findings conflicted with the remaining thirteen studies and may be explained by one or more of several possible reasons: (1) the heterogeneity of the studies, (2) the non-standardised application of the dual-task paradigm, or (3) the hypothesis that dual-task assessments are more applicable to specific subpopulations within the generalised group of ‘older adults’, or further, that certain primary and secondary task combinations must be used to produce favourable results.

The heterogeneity among the identified studies played a major role in limiting the scope of analysis and the conclusions that could be drawn from this review. For example, the dichotomisation of the community-dwelling participants into frail versus robust [11] illustrates the variability within a supposedly homogenous patient population. Another contributor to the heterogeneity of the studies is the broad range of cognitive or secondary tasks used, which varied between manual tasks [19] and simple or complex cognitive tasks. [10-21, 23-25] The purpose of the secondary task is to reduce the attention allocated to the primary task. [5] Since the studies varied in the secondary task(s) used, each with a slightly different level of complexity, the attentional resources redirected away from the primary balance or gait task would also vary. Hence, the ability of each study to predict falls is expected to be unique, or poorer in studies employing a secondary task which is not sufficiently challenging. [26] One important outcome from this review has been to highlight the lack of a standardised protocol for performing dual-task assessments. There is currently no identified combination of a primary and secondary task with proven superiority in predicting falls. Variation in the task combinations, as well as varied participant instructions given prior to the completion of tasks, is a possible explanation for the disparity between results. To improve result consistency and comparability in this emerging area of research, [6] dual-task assessments should comprise a standardised primary and secondary task.

Sixteen studies were deemed appropriate for inclusion in this systematic review. Despite a thorough search strategy, it is possible that some relevant studies may have been overlooked. Based on the limited data from 2006 to 2011, the exact benefit of dual-task assessments in predicting falls compared to single-task assessments remains uncertain. For a more comprehensive verdict, further analysis is required that combines the previous systematic reviews, [7, 8] which incorporate data from before 2006. Future dual-task studies should focus on fall prediction and report predictive values for both the single-task and the dual-task individually in order to allow comparisons to be made. Such studies should also incorporate large sample sizes, and assess the living conditions and health status of participants. Emphasis on the predictive value of dual-task assessments requires these studies to be prospective in design, as prospective collection of fall data is considered the gold standard. [27]

Conclusion

Due to the heterogeneous nature of the study population, the limited statistical analysis and a lack of direct comparison between single-task and dual-task assessments, the question of whether dual-task assessments are superior to single-task assessments for fall prediction remains unanswered. This systematic review has highlighted significant variability in study population and design that should be taken into account when conducting further research. Standardisation of dual-task assessment protocols and further investigation and characterisation of sub-populations where dual-task assessments may offer particular benefit are suggested. Future research could focus on different task combinations in order to identify which permutations provide the greatest predictive power. Translation into routine clinical practice will require development of reproducible dual-task assessments that can be performed easily on older individuals and have validated accuracy in predicting future falls. Ultimately, incorporation of dual-task assessments into clinical fall prevention programs should aim to provide a sensitive and specific measure of effectiveness and to reduce the incidence, morbidity and mortality associated with falls.

Acknowledgements

The author would like to thank Professor Stephen Lord and Doctor Jasmine Menant from Neuroscience Research Australia for their expertise and assistance in producing this literature review.

Conflict of interest

None declared.

Contact

M Sarofim: mina@student.unsw.edu.au

Categories
Feature Articles Articles

Student-led malaria projects – can they be effective?

Introduction
In this article we give an account of establishing a sustainable project in Uganda. We describe our experiences, both positive and negative, and discuss how such endeavours benefit both students and universities. The substantial work contributed by a growing group of students at our university and around Australia demonstrates an increasing push towards a greater national contribution to global health. Undoubtedly, student bodies have the potential to become major players in global health initiatives, but first we must see increased financial and academic investment by universities in this particular area of medicine.

Background
There are an estimated three billion people at risk of infection from malaria, with an estimated one million deaths annually. The greatest burden of malaria exists in Sub-Saharan Africa. [1,2] Amongst the Ugandan population of 26.9 million, malaria is the leading cause of morbidity and mortality, with 8 to 13 million episodes reported. [3] The World Malaria Report estimated that there were 43 490 malaria-related deaths in Uganda in 2008, ranking it third in the world behind Nigeria and the Democratic Republic of Congo. [4] In 2011, the situation remained alarming, with 90% of the population living in areas of high malaria transmission. [5]

The focus of this report is the Biharwe region of south-west Uganda. Due to a lack of reliable epidemiological data regarding the south-west of Uganda, it is difficult to evaluate the effectiveness of current malaria intervention strategies. However, Uganda is a country with relatively stable political and economic factors, [6] making it a strong candidate for the creation of sustainable intervention programs.

Insecticide Treated Nets (ITN)
Insecticide treated nets are a core method of malaria prevention and reduce disease-related mortality. [5] The World Health Organisation (WHO) Global Malaria Programme report states that an insecticide-treated net is a mosquito net that repels, disables and/or kills mosquitoes that come into contact with the insecticide. There are two categories of ITNs: conventionally treated nets, and long-lasting insecticidal nets (LLINs). The WHO recommends the distribution of LLINs rather than conventionally treated nets as LLINs are designed to maintain their biological efficacy against vector mosquitoes for at least three years in the field under recommended conditions of use, removing the need for regular insecticide treatment. [7]

Long-lasting insecticide nets have been reported to reduce all-cause child mortality by an average of eighteen percent in Sub-Saharan Africa (with a range of 14-29%). This implies that 5.5 lives could be saved per 1000 children under five years of age per year. [8] Use of LLINs in Africa increased mean birth weight by 55 g, reduced low birth weight by 23%, and reduced miscarriages/stillbirths by 33% in the first few pregnancies when compared with a control arm in which there were no mosquito nets. [9]

Use of LLINs is one of the most cost-effective interventions against malaria. In high-transmission areas where most of the malaria burden occurs in children under the age of five years, the use of LLINs is four to five times cheaper than the alternate strategy of indoor residual spraying. [10] Systematic delivery of LLINs through distribution projects can be a cost-effective way to make a significant impact on a local community. This makes the distribution of LLINs an ideal project for student-led groups with limited budgets.
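The figures above also allow a very rough back-of-envelope estimate of the impact a small distribution project might have, by applying the reported Sub-Saharan average of 5.5 lives saved per 1000 protected children under five per year to a target population. The sketch below shows only that arithmetic; the population and coverage numbers are invented for illustration and are not a projection for Biharwe, where real effect sizes would depend heavily on local transmission and net use.

```python
# Back-of-envelope sketch using the figure cited above (about 5.5 lives saved
# per 1000 children under five protected per year by LLINs). The population and
# coverage numbers are hypothetical; this is not a projection for Biharwe.

LIVES_SAVED_PER_1000_CHILDREN_PER_YEAR = 5.5  # reported Sub-Saharan average


def estimated_lives_saved(children_under_five: int, coverage: float, years: float = 1.0) -> float:
    """Rough estimate of under-five lives saved for a given coverage level."""
    protected = children_under_five * coverage
    return protected / 1000 * LIVES_SAVED_PER_1000_CHILDREN_PER_YEAR * years


if __name__ == "__main__":
    # Hypothetical sub-county: 4000 children under five, 60% reached by the project.
    print(f"Estimated lives saved per year: {estimated_lives_saved(4000, 0.60):.1f}")
```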

Our experience implementing a sustainable intervention project in Uganda
This article comments on student-led research performed in Biharwe, which aimed to evaluate the Biharwe community’s current knowledge of malaria prevention techniques, to assess how people used their ITNs, and to investigate where they sourced their ITNs from. We also aimed to alleviate the high malaria burden in Biharwe through the distribution of ITNs. We fundraised in Tasmania, with financial support garnered from local Rotary clubs and student societies. Approximately five thousand dollars was raised, which we used to purchase ITNs. Simultaneously, we began contacting a local non-governmental organisation (NGO) and a student body from Mbarara University, the largest university in south-west Uganda. We felt we had laid the foundation for a successful overseas trip.

Our endeavours suffered initial setbacks when we observed a local organisation we were working with misusing the funds of other projects. To avoid a similar fate we decided to cut ties and seek out other local groups. We made contact with the Mbarara University students, who pointed us towards the Biharwe sub-county as a region of particular neglect with regard to previous government and NGO ITN distribution programs. At their recommendation we travelled to villages in the area. Access to these villages was obtained by respectfully approaching the village representatives and their councils, and asking their permission to engage with the local community.

Despite all our preparations before heading to Uganda, we were still not fully prepared for the stark realities of everyday life in East Africa. One problem we encountered was the misuse and misunderstanding of the ITN distribution program by locals. We also encountered local ‘gangs’ who would collect free ITNs from our distribution programs and then sell them at the market place for a profit; people who used their ITNs as materials to build their chicken coops; and widespread myths about the effects of ITNs. To combat this we sought the advice of a local priest, who requested that the village heads put together a list of households as a means of minimising the fraudulent distribution of our nets. While not ideal, this approach gave us greater confidence when distributing the ITNs. As Uganda is a religious nation, the support of a well-respected local priest made local leaders more receptive to our program.

It became apparent that we had to strengthen our understanding of local attitudes towards and usage of ITNs if we were to create a long-term, meaningful relationship with people in the area. At the suggestion of Mbarara University students, we commissioned DEKA Consult Limited, a local research group, to conduct qualitative epidemiological research in villages in these communities. Data collected was useful in identifying the scope of the problem. It identified that community members already had a significant amount of knowledge on the use of ITNs and that those who owned mosquito nets had purchased them from local suppliers. Local ethics approval and permission for access to local community members was gained by DEKA Consult Limited.

Evaluating local knowledge on malaria prevention
The commissioned study addressed community attitudes towards malaria prevention by surveying two distinct groups living in the Biharwe sub-county of south-west Uganda. Through questionnaires and focus group discussions, local researchers gathered information concerning attitudes towards and usage of mosquito nets in the area. One of the key findings was that ITNs were nominated as the main preventative technique by respondents (33.3%). This is congruent with previous data indicating an increase in awareness of ITNs in Uganda following the Roll Back Malaria Abuja Summit. [11] A majority of respondents indicated some knowledge of the appropriate use of these mosquito nets (83.3%), although this means that one in six Biharwe community members was unsure of how to correctly use ITNs. The research also explored common reasons why people neglected to sleep under ITNs in the Biharwe sub-county. Common misperceptions, such as ITNs causing impotence and leading to burns, were identified as barriers to people using their mosquito nets, and are issues that would need to be addressed in future education seminars. The findings indicate that assessment of a community’s existing knowledge and perceptions is crucial in identifying obstacles that must be overcome during the implementation of an effective intervention project. Educational activities can then be moulded around the particular culture and social dynamic of a community, which will lead to maximal project impact. [12, 13] We believe this data indicates that the distribution of ITNs would be improved if it were accompanied by robust educational initiatives tailored to local community needs.

Our way forward
In the summer of 2011-2012 another group of students from UTAS implemented an LLIN distribution project in the south-west of Uganda, furthering the work outlined in this report. Our experiences and connections provided an excellent foundation for them to implement expanded projects. A further group of UTAS students has been assembled and is planning to travel to Uganda this coming summer, once again with the aim of building on the previous two visits. With the generous assistance of the Menzies Institute and the UTAS School of Medicine, plans for a more robust epidemiology project have been formulated in order to measure the efficacy of future projects in Uganda. We believe the sustainability and effectiveness of these programs rely on both the development of a long-term relationship between our student organisation and the local community, and appropriate evaluation of all our projects.

Free distribution or subsidised LLINs
The majority of the malaria burden exists in the poorest, most rural communities, yet it is these regions that are often neglected in widespread ITN distribution programs. [14]

Our data indicate that only a minority of households in the rural Biharwe sub-county own ITNs (11.1%), and that all of these ITNs were purchased through the commercial sector. Again, methodological disparities would need to be addressed to confirm the validity of these results. However, they do raise the important question of whether the commercial sector, rather than the public/non-governmental organisation (NGO) sector, is better placed to serve these local communities.

Our dilemma serves as a microcosm of a much larger debate over the last decade regarding the most effective means of delivering ITNs in order to achieve the greatest national coverage. [15] Free distribution of ITNs is far more equitable and effective at reaching the poor. [16] However, utilisation of the commercial sector through subsidies, vouchers or a stratification model [17] is more sustainable, because a portion of the costs may be recovered. Populations that have been neglected by ITN schemes such as Roll Back Malaria, [5] including those in the rural Biharwe sub-county, may stand to benefit from free, targeted distribution of nets. Collaborations between local and international students are well placed to combine local knowledge with financial support to implement such initiatives.

The role of students in malaria prevention and international development projects
Organisations such as the World Health Organisation, when involved in widespread ITN distribution, [5] have far greater capabilities than any student-led project. However, due to shortfalls in funding and co-ordination, these schemes will not reach all at-risk populations, particularly the poorest rural areas. [5] Small-scale, independently funded student-led projects can help fill this void. To achieve maximal impact with a malaria intervention project, students should identify areas with a low rate of household ITN ownership, as well as areas where few of the ITNs that are owned were donated. It is these areas that ultimately stand to make the greatest progress in ITN coverage amongst vulnerable individuals, resulting in a decrease in morbidity and mortality from malaria. [18] With locally-specific research, strong relationships with the community and its leaders, and appropriate evaluation processes in place, students can have a substantial impact on reducing morbidity and mortality from malaria with limited funds. [19]

The aim should always be a long-term partnership between the community [19] and student-led organisations that are willing to promote sustainability. This offers the greatest opportunity for long-term benefits for both parties. Our experience is that medical students provide a continuous stream of like-minded youth who are able to rise to the challenge and continue the work of previous students. Through bilateral exchanges between students and overseas partners, trust and friendship can be fostered, which further encourages participation in the project after returning home. Important information about the local social hierarchy is also gained, which greatly helps with access to local decision makers. In turn, this creates greater understanding of the health problems, the culture and the reasons why particular communities have been left behind. Student-led organisations are well placed to deliver these educational programs, as they constitute a long-term pool of motivated, altruistic, skilled workers who are able to learn from their predecessors. Individual students also stand to benefit through increased cultural understanding, application of learned skill sets and an opportunity that can enhance their career paths. [19] Through long-term trial and error and proper evaluation, systems of program implementation can be formulated which may then be applied to similar communities elsewhere.

The role of universities
Preparing students for a leadership role in global health and its related fields is critical. University curricula should reflect today’s problems and those likely to arise in the coming decades. [20] In our opinion, students are increasingly aware of, and more willing to be involved in providing solutions to, current international issues, no matter how small those solutions may be, thanks largely to greater exposure through social media. When universities do not explore such issues deeply in their curricula, and do not support active student involvement, students may come to perceive that universities are concerned with something other than the realities of the world. [21] Participation in international health projects has been reported to encourage students to examine cross-cultural issues, to improve their problem-solving skills and to help improve the delivery of healthcare for under-privileged people. [22] These are transferable skills that are vital in the Australian healthcare system.

North American and European universities continue to lead the way; however, Australian universities are becoming more involved with global health issues. The Australian Medical Students Association’s Global Health Committee aims to link and empower groups of students from each Australian medical school. [23] The Melbourne University Health Initiative, which oversees the Victorian Student’s Aid Program, aims to help students make a difference in local and international health issues by running on-campus events and organising public health lectures to promote awareness in the community. [24] The Training for Health Equity Network (THEnet) is a group of ten schools from around the world, including James Cook and Flinders Universities, that have committed to ensuring that their teaching, research and service activities address priority health needs, with a focus on underserved communities. [25] THEnet also focuses on social accountability, with a framework to assess whether its schools are contributing to the improvement of health conditions within their local communities. [26]

In our view, such initiatives need to penetrate further into each university’s curriculum. Should this occur, Australia may be able to produce a generation of graduates well placed to address the numerous complex global health issues we face today and will inevitably face in the future.

Conflict of interest
None declared.

Correspondence
B Wood: benjaminmwood88@gmail.com

References
[1] Greenwood BM, Bojang K, Whitty CJM, Targett GAT. Malaria. Lancet. 2005 Apr 23-29; 365(9469): 1487-98.
[2] Snow RW, Guerra CA, Noor AM, Myint HY, Hay SI. The global distribution of clinical episodes of Plasmodium falciparum malaria. Nature. 2005 Mar 10; 434(7030): 214-7.
[3] Uganda. Uganda Ministry of Health. Uganda Malaria Control Strategic Plan 2005/06 – 2009/10: Roll Back Malaria; 2003.
[4] Aregawi M, Cibulskis R, Williams R. World Malaria Report 2008. Switzerland: World Health Organisation; 2008.
[5] Aregawi M, Cibulskis R, Lynch M, Williams R. World Malaria Report 2011. Switzerland: World Health Organisation; 2011.
[6] Yeka A, Gasasira A, Mpimbaza A, Achan J, Nankabirwa J, Nsobya S, et al. Malaria in Uganda: Challenges to control on the long road to elimination: I. Epidemiology and current control efforts. Acta Tropica. 2012 Mar; 121(3): 184-95.
[7] Fifty-eighth World Health Assembly: Resolution WHA58.2 Malaria Control [Internet]. Geneva: World Health Organisation; May 2005 [cited 2012 April 12]. Available from: http://apps.who.int/gb/ebwha/pdf_files/WHA58-REC1/english/A58_2005_REC1-en.pdf
[8] Lengeler C. Insecticide-treated bed nets and curtains for preventing malaria. Cochrane Database of Systematic Reviews (Online). 2004; 2: CD000363.
[9] Gamble C, Ekwaru JP, Ter Kuile FO. Insecticide-treated nets for preventing malaria in pregnancy. Cochrane Database of Systematic Reviews (Online). 2006 April 19; 2: CD003755.
[10] Yukich J, Tediosi F, Lengeler C. Comparative cost-effectiveness of ITNs or IRS in Sub-Saharan Africa. Malaria Matters (Issue 18). 2007 July 12: pg. 2-4.
[11] Baume CA, Marin MC. Gains in awareness, ownership and use of insecticide-treated nets in Nigeria, Senegal, Uganda and Zambia. Malaria J. 2008 Aug 7; 7: 153.
[12] Williams PCM, Martina A, Cumming RG, Hall J. Malaria prevention in Sub-Saharan Africa: A field study in rural Uganda. J Community Health. 2009 April; 34:288-94.
[13] Marsh VM, Mutemi W, Some ES, Haaland A, Snow RW. Evaluating the community education programme of an insecticide-treated bed net trial on the Kenyan coast. Health Policy Plan. 1996 Sep; 11(3): 280-91.
[14] Webster J, Lines J, Bruce J, Armstrong Schellenberg JR, Hanson K. Which delivery systems reach the poor? A review of equity of coverage of ever-treated nets, never-treated nets, and immunisation to reduce child mortality in Africa. Lancet Infect Dis. 2005 Nov; 5(11): 709-11.
[15] Sexton A. Best practices for an insecticide-treated bed net distribution programme in sub-Saharan eastern Africa. Malaria J. 2011. Jun 8; 10:157.
[16] Noor AM, Mutheu JJ, Tatem AJ, Hay SI, Snow RW. Insecticide-treated net coverage in Africa: mapping progress in 2000-07. Lancet. 2009 Nov 18; 373 (9657): 58-67.
[17] Noor AM, Amin AA, Akhwale WS, Snow RW. Increasing coverage and decreasing inequity in insecticide-treated bed net use among rural Kenyan children. PLoS Medicine. 2007 Aug 21; 4(8): e255.
[18] Cohen J, Dupas P. Free distribution or cost-sharing? Evidence from a randomized malaria prevention experiment. The Quarterly Journal of Economics. 2010; 125 (1): 1-45.
[19] Glew RH. Promoting collaborations between biomedical scholars in the U.S. and Sub-Saharan Africa. Experimental Biology and Medicine. 2008 Mar; 233(3): 277-85.
[20] Bryant JH, Velji A. Global health and the role of universities in the twenty-first century. Infect Dis Clin North Am. 2011 Jun; 25(2): 311-21.
[21] Crabtree RD. Mutual empowerment in cross-cultural participatory development and service learning: Lessons in communication and social justice from projects. J Appl Commun Res. 1998; 26 (2): 182-209.
[22] Harth SC, Leonard NA, Fitzgerald SM, Thong YH. The educational value of clinical electives. Medical Education. 1990 Jul; 24(4): 344-53.
[23] Murphy A. AMSA Global Health Committee [Internet]. 2012 [cited 2012 April 10]. Available from: http://ghn.amsa.org.au/
[24] Melbourne University Health Initiative [Internet]. 2012 [cited 2012 April 3]. Available from: http://muhi-gh.org/about-muhi
[25] THEnet. Training for Health Equity Network [Internet]. 2012 [cited 2012 March 30]. Available from: http://www.thenetcommunity.org/
[26] The Training for Health Equity Network. THEnet’s Social Accountability Evaluation Framework Version 1. Monograph I (1 ed.). The Training for Health Equity Network, 2011.

Categories
Letters Articles

The hidden value of the Prevocational General Practice Placements Program

Medical students, prevocational doctors and general practitioners (GPs) may have little knowledge of the Prevocational General Practice Placements Program (PGPPP).

This article seeks to explore the value of such placements and provide an aspiring surgeon’s reflection on a PGPPP internship placement in rural New South Wales (NSW).

General practice placements for interns have been available for the past three decades in the United Kingdom, with the literature unanimously promoting their educational benefits. [1] The Australian PGPPP experiences in Western Australia and South Australia reinforce the feasibility of such placements, and propose cooperation between universities, postgraduate councils, training networks and specialist training colleges. [2] Semi-structured interviews with interns who had completed the PGPPP indicated that they gained experience in communication, counselling and procedural skills across a range of patient presentations. [3] Uptake of the PGPPP has varied between states, with NSW, until recently, having substantially fewer placements, particularly at intern level. [4]

Prevocational GP placements have the potential to alleviate some of the pressure of sourcing additional postgraduate placements for junior doctors. With the dramatic increase in Australian medical school graduates (81% over seven years) overwhelming traditional postgraduate training placements, [5] the PGPPP will continue to grow. Despite the available qualitative data, there is currently no published information that adequately describes the range and volume of patient and procedural experiences in PGPPP placements. In response, a prospective study of caseload data is currently underway to better inform medical students, prospective PGPPP doctors, GP supervisors and specialist training colleges of the potential value of such placements.

In April 2012, I undertook an eleven-week placement at Your Health Griffith, a medical centre in Griffith, NSW. The practice was staffed by seven GPs and two practice nurses. Two GPs alternated as clinical supervisors, and a third GP, separate from the practice, conducted informal weekly tutorials and reviewed patient encounter and procedure logs. Both clinical supervision and teaching exceeded Royal Australian College of General Practitioners (RACGP) standards. [6]

Presentations during a single day included complex medical or surgical issues, paediatrics, obstetrics, gynaecology, dermatology, mental health, occupational health, preventative health and more. The workload comprised booked appointments as well as consenting patients drawn from the GP supervisors’ bookings, ensuring a reasonable patient caseload. Patients often attended follow-up appointments during the term. The continuity of patient care in the PGPPP was in stark contrast to acute medicine and surgery in tertiary hospitals, and allowed greater rapport to be established, with patients openly discussing intimate social or mental health issues during subsequent consultations.

The majority of tertiary hospitals have an established hierarchy of fellows, accredited and unaccredited registrars, residents and enthusiastic medical students vying for procedures. With the possible exception of emergency rotations, most interns complete only a few procedures in hospital terms. Hence, in the PGPPP, junior doctors value the opportunity to practise procedural skills, including administration of local anaesthesia, skin lesion excision and suturing.

The main source of frustration within the placement was administrative red tape. The restrictions placed upon interns with provisional medical registration meant that all scripts and referrals had to be counter-signed and issued under the GP supervisors’ provider numbers and prescription authority. Interns routinely prescribe medications and make referrals in the hospital system; that this authority has not been extended to the similarly supervised PGPPP is bewildering. The need to obtain formal consent prior to consultations, in contrast to the implied consent of hospital treatment, was reminiscent of medical school.

One of the main purposes of the PGPPP was to promote general practice to prevocational junior medical officers. These placements provide junior doctors with valuable exposure to community medicine and a range of patient presentations, develop their confidence in dealing with diagnostic uncertainty, and improve their counselling and procedural skills. These skills and experiences are likely to be retained regardless of future specialisation. Perhaps it is just as important for GPs to play a role in educating future tertiary care specialists, so that all may better understand both the capabilities and the limitations of community medicine. While I still wish to pursue a career in surgery, this placement has provided insight into the world of community medicine. The value of the PGPPP extends well beyond attracting prevocational doctors to general practice careers.

Conflict of interest

None declared.

Acknowledgements

Dr Marion Reeves, Dr Jekhsi Musa Othman and Dr Damien Limberger for their supervision and guidance through this clinical rotation.