Categories
Case Reports Articles

Dengue fever in a rural hospital: Issues concerning transmission

Introduction: Dengue is either endemic or epidemic in almost every country located in the tropics. Within northern Australia, dengue occurs in epidemics; however, the Aedes aegypti vector is widespread in the area and thus there is a threat that dengue may become endemic in future years. Case presentation: An 18 year old male was admitted to a rural north Queensland hospital with a provisional diagnosis of dengue fever. No specific consideration was given to the risk that this patient posed to other patients, including a 56 year old male with chronic myeloid leukaemia and prior exposure to dengue. Discussion: Much media and public attention has been given to dengue transmission in the context of vector control in the community. Hospital-based patient-to-patient transmission of dengue requires consideration so as to minimise unnecessary morbidity and mortality. Vector control within the hospital setting appears to be an appropriate preventative measure in the context of the presented case. Transfusion- and transplantation-related transmission of dengue between patients are important considerations. Vertical dengue transmission is also noted to be possible. Conclusion: Numerous changes in the management of dengue-infected patients can be made that are economically feasible. Education of healthcare workers is essential to ensure the safety of all patients admitted to hospitals in dengue-affected areas. Bed management in particular is one area that may benefit from increased attention.

Introduction

Dengue is diagnosed annually in more than 50 million people worldwide and represents one of the most important arthropod-borne viral infections. [1-4] Estimates suggest that the potentially lethal complication of dengue haemorrhagic fever occurs in 500 000 people and an alarming 24 000 deaths result from infection annually. [1,2,4] Coupled with the increasing frequency and severity of outbreaks in recent years, dengue has been identified as a major and escalating public health concern. [2,4,5]

Whilst most of the burden of dengue occurs in developing countries, northern Australia is known to have epidemics. Suggestions have been made that dengue may become endemic in this region in future years based on increasing migration, international travel, population growth, climate change and the widespread presence of vectors. [6-12] The vast majority of studies have focused on vector control in the community setting. [2,4,5,9] The purpose of this report is to discuss the risks of transmission of dengue in a hospital setting, in particular patient-to-patient transmission. Transmission of dengue in a hospital is important to consider as the immunological responses and health status of hospitalised patients can be poor. Inadequate management of dengue-infected patients may ultimately threaten the lives and complicate the treatment of other patients, creating unnecessary economic costs and demands on healthcare. [12-14]

This case report highlights the difficulties of handling a suspected dengue-infected patient from the perspective of an Australian rural hospital. Recommendations are made to improve management of such patients, in particular, embracing technological advancements including digital medical records that are likely to become available in future years.

Case report

An 18 year old male, patient 1, presented to a rural north Queensland hospital emergency department with a four day history of fever, generalised myalgia and headache. He resided in an area that was known to be in the midst of a dengue outbreak. He had no past medical or surgical history and had never travelled. On examination, the patient’s tympanic temperature was 38.9°C and he had dry mucous membranes. No rash was observed and no other abnormal findings were noted. Laboratory investigations, including dengue PCR and dengue serology, were performed. He was admitted for observation and given intravenous fluids. A provisional diagnosis of dengue fever was made.

The patient was subsequently placed in a room with four beds. Whilst two of the beds in the room were unoccupied, the remaining bed was occupied by patient 2, a 56 year old male with chronic myeloid leukaemia (CML), who had been hospitalised the previous day with a lower respiratory tract infection. Patient 2’s medical history was notable for an episode of dengue fever five years previously, following an overseas holiday.

The patient with presumed dengue fever remained febrile for two days. He walked around the ward and went outside for cigarettes. He also opened the room window, which was unscreened. Tests subsequently confirmed that he had a dengue viral infection.

Whilst no dengue transmission occurred, the incident raised a number of issues for consideration, as no concerns regarding transmission were raised by staff or by either patient.

Discussion

The dengue viruses are single positive-stranded RNA viruses belonging to the Flaviviridae family, with four distinct serotypes described. [4,12] Infection ranges from asymptomatic, through a mild viral syndrome associated with fever, malaise, headache, myalgia and rash, to a severe presentation characterised by haemorrhage and shock. [3,9] Currently the immunopathogenesis of severe dengue infection, which occurs in less than 5 percent of infections and includes the dengue haemorrhagic fever and shock syndromes, is poorly defined. [2,3]

Whilst primary infection in the young and well nourished has been associated with the development of severe infection, the major aetiology of severe infection is thought to be secondary infection with a different serotype. [3,9] This has been hypothesised to result from an antibody-mediated enhancement reaction, although authors also suggest that other factors are likely to contribute. [3,4,9] Untreated dengue haemorrhagic fever is characterised by increased capillary permeability and haemostatic changes and has a mortality rate of 10-20 percent. [2,3,5] This complication can further deteriorate into dengue shock syndrome. [3] Whilst research shows that the serious complications of dengue infection occur mainly in children, adults with asthma, diabetes and other chronic diseases may be at increased risk, and secondary dengue infections could be life threatening in these groups. [4,5,15]

The most commonly reported route of infection is via the bite of an infected Aedes mosquito, primarily Aedes aegypti. [2-14] This vector feeds during the day, prefers human blood and breeds in close proximity to humans. [5,12,13] The transmission of dengue has been widely reported in the urban setting and has a geographical distribution spanning more than 100 countries. [3,13] However, only one study has reported dengue vector transmission from within a hospital. [16] Kularatne et al. (2007) recently described a dengue outbreak that started within a hospital in Sri Lanka and was unique in that a building site next to the hospital provided breeding sites for mosquitoes. [16] Dengue infection was noted to cause significant cardiac dysfunction, and of particular note was that medical students, nurses, doctors and other hospital employees were the main groups affected. [16] The authors report that in the initial outbreak one medical student died of shock and severe pulmonary oedema as a result of acute viral myocarditis. [16] This outbreak highlights the risk of dengue transmission within a hospital setting.

In addition to vector-borne transmission, dengue can also be transmitted by other routes, including transfusion. [17,18] The incidence of blood transfusion-associated dengue infection has been one area of investigation, primarily reported in endemic countries. In one study conducted in Hong Kong by Chuang et al. (2008), the prevalence of this mode of transmission was 1 in 126. [17] Whilst rare in Australia, an investigation undertaken during the 2004 outbreak in Cairns, Queensland calculated the risk of transfusion-related dengue infection by mathematical modelling and reported the risk of collecting a viraemic donation as 1 in 1028 persons during the course of the epidemic. [18] Donations from the affected areas were not used for transfusion. [18]

Case reports have also been published demonstrating that transplantation can represent a route of dengue infection between hospitalised patients. [19,20] Rigau-Pérez and Laufer (2006) described a six year old child who developed fever four days after bone marrow transplantation and subsequently died. [19] Dengue virus was isolated from the blood and tissues of the child, and the donor was subsequently found to have become febrile, with tests for dengue proving positive. [19] Dengue infection resulting from solid organ transplantation has also been described in a 23 year old male with end-stage renal failure. [20] The donor of the transplanted kidney had had dengue fever six months prior to the transplant, and the recipient developed dengue fever five days postoperatively. [20] The recipient had a complicated recovery and required an emergency laparotomy and blood products to ensure survival. [20] The authors of this case report note that the patient in question resided in a dengue-endemic region, and therefore the usual mode of infection could not be excluded. [20]

Whilst not applicable to the presented case, vertical transmission of dengue is also an important consideration in hospitalised patients. Reports from endemic countries have suggested that transmission can occur if infection of the mother occurs within eight days of delivery. [9,21] One neonatal death has been reported as a result of dengue infection, and a number of studies have reported peripartum complications requiring medical treatment in other neonates. [21,22] These results should be interpreted with caution given the cited difficulties in the clinical diagnosis of dengue in neonates, as it is possible that vertical transmission is underreported. [22]

Taking into account the reported case and the presented evidence, it is clear that patient 1 presented a risk to patient 2. It is essential to acknowledge that dengue transmission can occur within a hospital setting. Whilst only one study has reported vector transmission of dengue within a hospital, it demonstrates the real possibility of transmission given close contact and a competent vector. [16] It should also be emphasised that patient 1 walked outside the hospital on numerous occasions and that unscreened windows were open within the hospital ward room. Consequently, patient-to-patient dengue infection would have been possible not only for patient 2, but also for other admitted patients. Additionally, healthcare workers and community members who lived in the area surrounding the hospital were also at risk.

In acknowledging that vector transmission within a hospital is the most important hazard with regard to patient-to-patient transmission of dengue, numerous control measures can be implemented to decrease the risk of transmission. Infrastructure plans within hospitals are important, as screened windows would decrease the ability of mosquitoes to enter hospitals. In those hospitals where such changes may not be economically feasible, studies have reported that having patients spend as much time as possible under insecticide-treated mosquito nets, limiting outdoor time for infected patients, wearing protective clothing and applying insecticide numerous times throughout the day may decrease the possibility of dengue infection within hospitals. [23-25]

Educational programs for healthcare professionals and patients also warrant consideration. Numerous programs have been established, primarily in the developing world, and have proven to be beneficial. [26,27] It is important to create innovative education programs aimed at those healthcare workers who care for suspected dengue-infected patients, as well as members of the public. This is one area that needs to be explored in future years.

Additionally, this case demonstrates that current protocols in bed management do not consider a past medical history of dengue infection when assigning patients to beds. This report draws attention to the importance of identifying those patients at risk of secondary dengue infection. As electronic patient records are implemented in many countries throughout the world, a past history of confirmed dengue infection should be taken into account. This may mean, when resources are available, that such patients are not placed in the same room, thereby avoiding unnecessarily placing patients at risk. Whilst this would not completely exclude the possibility of dengue transmission in a hospital, it may set the trend for improved infection control protocols, particularly as secondary infection is associated with poorer outcomes. [2-5,9]

Conclusion

Infection control is often targeted at tertiary referral centres. This report clearly highlights the importance of appreciating infection control within a rural setting. Dengue infection between patients is a possibility, with available evidence suggesting that this is most likely to result from exposure of an infected individual to a competent vector. Numerous changes have the potential to decrease the likelihood of dengue infection. Healthcare worker education is a critical component of these changes, so that suspected dengue-infected patients may also be educated regarding the risk that they represent to members of the public. Screened windows, insecticide-treated mosquito nets, and patient measures such as wearing protective clothing and applying insect repellents are all preventative measures that need to be considered. Future research is likely to develop technological aids for appropriate bed assignment. This will help ensure that unnecessary morbidity and mortality associated with dengue infection are avoided.

Consent declaration

Informed consent was obtained from the patients for publication of this report.

Conflict of interest

None declared.

Correspondence

R Smith: ross.smith@my.jcu.edu.au

 

Categories
Review Articles

Treatment of persistent diabetic macular oedema – intravitreal bevacizumab versus laser photocoagulation: A critical appraisal of BOLT Study for an evidence based medicine clinical practice guideline

Laser photocoagulation has remained the standard treatment for diabetic macular oedema (DME) for the past three decades. However, it has been shown to be of limited benefit in chronic diffuse DME. Intravitreal bevacizumab (ivB) has been proposed as an alternative and effective treatment for DME. This review evaluates the evidence comparing bevacizumab with laser photocoagulation in treating persistent DME. A structured systematic search of the literature, with critical appraisal of retrieved trials, was performed. Four randomised controlled trials (RCTs) supported beneficial effects of ivB over laser photocoagulation. Only one RCT, the BOLT study, compared laser with ivB in persistent DME. The results showed significant improvement in mean best corrected visual acuity (BCVA) and greater reduction in mean central macular thickness (CMT) in the ivB group, with no significant difference in safety outcome measures.

Introduction

Diabetic macular oedema is a frequent manifestation of diabetic retinopathy and one of the leading causes of blindness and visual acuity loss worldwide. [1] The prevalence of DME increases with the duration and stage of diabetic retinopathy: three percent in mild non-proliferative retinopathy, 38% in moderate-to-severe non-proliferative retinopathy and 71% in proliferative retinopathy. [2]

Diabetic macular oedema is a consequence of micro-vascular changes in the retina that lead to the accumulation of fluid/plasma constituents in the intra-retinal layers of the macula, thereby increasing macular thickness. Clinically significant macular oedema (CSME) is present when there is thickening within or close to the central macula, with hard exudates within 500μm of the centre of the macula and retinal thickening of at least one disc area in size. [3,4] As measured by optical coherence tomography, central macular thickness (CMT) corresponds approximately to retinal thickness at the foveal region and can quantitatively reflect the amount of CSME a patient has. [5] Two different types of DME exist: focal DME (due to fluid accumulation from leaking micro-aneurysms) and diffuse DME (due to capillary incompetence and inner-retinal barrier breakdown).

Diabetic macular oedema pathogenesis is multi-factorial, influenced by diabetes duration, insulin dependence, HbA1c levels and hypertension. [6] Macular laser photocoagulation has remained the standard treatment for both focal and diffuse DME since 1985, based on the recommendations of the Early Treatment Diabetic Retinopathy Study (ETDRS). This study showed that the risk of CSME decreases by approximately 50% (from 24% to 12%) at three years with the use of macular laser photocoagulation. However, the improvement in visual acuity is modest, observed in less than three percent of patients. [3]

Recent research indicates that macular laser therapy is not always beneficial and has limited efficacy, especially in chronic diffuse DME, [3,7] with visual acuity improving in only 14.5% of patients. [8] Following laser treatment, scars may develop and reduce the likelihood of vision improvement, [3] hence alternative treatments for DME, such as intravitreal triamcinolone (ivT), have been investigated. Intravitreal triamcinolone works via a number of mechanisms, including reducing vascular permeability and down-regulating VEGF (vascular endothelial growth factor). Anti-VEGF therapies have been the focus of recent research and have been shown to potently suppress angiogenesis and decrease vascular permeability in ocular diseases such as DME, leading to improvement in visual acuity. [9] The results of treating DME with anti-VEGF agents remain controversial and larger prospective RCTs are needed. [10]

Currently used anti-VEGF agents include bevacizumab, ranibizumab and pegaptanib. Ranibizumab has been shown to be superior to laser therapy in treating DME, in both safety and efficacy, in several studies including the RESTORE, RESOLVE, RISE and RIDE studies. [11-13] It has recently been approved by the Food and Drug Administration (FDA) for treating DME in the United States of America. [14] Bevacizumab (Avastin®) is a full length monoclonal antibody against VEGF, binding to all subtypes of VEGF. [10] In addition to treating metastatic colon cancer, bevacizumab is also used extensively off-label for many ocular conditions, including age related macular degeneration (AMD), DME, retinopathy of prematurity and macular oedema secondary to retinal vein occlusion. [15] Documented adverse effects of ivB include transiently elevated intraocular pressure (IOP) and endophthalmitis. [16] Systemic effects associated with ivB injection include a rise in blood pressure, thrombo-embolic events, myocardial infarction (MI), transient ischaemic attack and stroke. [16,17] Other significant adverse events of bevacizumab when given systemically include delayed wound healing, impaired female fertility, gastrointestinal perforation, haemorrhage, proteinuria, congestive heart failure and hypersensitivity reactions. [17] Although not currently approved for this indication, a 1.25-2.5mg dose of ivB is used for treating DME without significant ocular/systemic toxicity. [15]

The DRCR.net study (2007) showed that ivB can reduce DME. [18] In addition, several studies carried out on diabetic retinopathy patients with CSME, evaluating the efficacy of ivB ± ivT versus laser, demonstrated better visual outcomes as measured by BCVA. [6,19-21] Meta-analysis of those studies indicated ivB to be an effective short-term treatment for DME, with efficacy waning after six weeks. [6] This review evaluates the evidence on the effect of ivB, compared to laser, in treating DME that persists despite standard treatment.

Clinical question

Our clinical question for this focused evidence based medicine article has been constructed to address the four elements of the problem, the intervention, the comparison and the outcomes as recommended by Strauss et al. (2005) [22]. “In diabetic patients with persistent clinically significant macular oedema (CSME) is intravitreal Bevacizumab (Avastin®) injection better than focal/grid laser photocoagulation in preserving the best-corrected visual acuity (BCVA)?”

Methodology

Comprehensive electronic searches of the British Medical Journal, Medical Journal of Australia, Cochrane Central Register of Controlled Trials, MEDLINE and PubMed were performed for relevant literature, using the search terms diabetic retinopathy, CSME, CMT, bevacizumab and laser photocoagulation. Additional information from the online search engine Google was also incorporated. Reference lists of retrieved studies were then hand-searched for relevant studies/trials.

Selection

Results were restricted to systematic reviews, meta-analyses and randomised clinical trials (RCTs). Overall, six RCTs were identified that evaluated the efficacy of ivB compared to laser in treating DME. [18-21,23,24] There was also one meta-analysis comparing ivB to non-drug control treatment (laser or sham) in DME. [7] One study published pilot results of the main trial, so the final version was selected to avoid duplication of results. [20,23] One study was excluded because it excluded focal DME patients. [19] The DRCR study (2007) was excluded because it was not designed to evaluate whether treatment with ivB was beneficial in DME patients. [18] A meta-analysis by Goyal et al. was also excluded because it compared bevacizumab with sham treatment rather than laser therapy. [7]

Thus, three relevant RCTs were narrowed down for analysis (Table 1) in this evidence based medicine review. [20,21,24] However, only the BOLT study (2012) evaluated the above treatment modalities in persistent CSME. The other two RCTs evaluated the treatment efficacies in patients with no prior laser therapies for CSME/diabetic retinopathy. Hence, only the BOLT study (2012) has been critically appraised in this report. The study characteristics of the other relevant RCTs evaluating ivB versus lasers are represented in Table 1, and where possible will be included in the discussion.

Outcomes

The primary outcomes of interest are changes in BCVA and CMT, when treated with ivB or lasers for DME, whilst the secondary outcomes are any associated adverse events. All three studies were prospective RCTs with NHMRC level-II evidence. Table 1 summarises the overall characteristics of the studies.

Critical appraisal

The BOLT Study (2010) is a twelve month report of a two year, single centre, two arm, randomised, controlled, masked clinical trial from the United Kingdom (UK). As such, it qualifies as NHMRC [25] level-II evidence. It is the only RCT that compared the efficacy of ivB with laser in patients with persistent CSME (both diffuse and focal DME) who had previously undergone at least one laser therapy for CSME. A comparison of the study characteristics of the three chosen RCTs is presented in Table 2.

Major strengths of the BOLT Study compared to Soheilian et al. and Faghihi et al. studies include the duration of study and increased frequency of review of patients in ivB groups. The BOLT Study was a two year study, whereas the other two studies’ duration was limited to less than a year (Table 1). Because of its lengthy duration, it was possible to evaluate the safety outcome profile of ivB in the BOLT Study, unlike in the other two studies.

Research has indicated that the effects of ivB may last between two and six weeks, [6] while the effects of laser may last three to six months. [3] In BOLT, the ivB group was assessed every six weeks, with re-treatment with ivB provided as required, while the laser group was followed up every four months, ensuring that the efficacy profile was preserved and reflected in the results. In Soheilian et al., [20] by contrast, follow up visits were scheduled every twelve weeks after the first visit, and in Faghihi et al., [21] follow up was at six and sixteen weeks. There may therefore have been a bias against the efficacy profile of ivB in those studies, given the insufficient follow up and re-treatment. Apart from the follow up and therapy modalities, the groups were treated equally in BOLT, protecting the analysis against treatment bias.

Weaknesses of the BOLT Study [24] include the limited number of patients: 80 eyes in total, with 42 patients allocated to ivB and 38 to laser therapy. In the ivB group, six patients discontinued the intervention; only 37 patients were included in the analysis at 24 months, with five excluded because data were not available. Similarly, of the 38 patients allocated to the laser group, 13 discontinued the intervention; 28 patients were analysed overall and ten were excluded from the analysis. However, the BOLT Study performed an intention to treat analysis, minimising the effect of dropouts. On balance, we feel the BOLT Study fulfils the criteria for a valid RCT with significant strengths.

Magnitude and precision of treatment effect from BOLT Study

Best corrected visual acuity outcomes

A significant difference existed between the mean ETDRS BCVA at 24 months in the ivB group (64.4±13.3) and the laser group (54.8±12.6), with p=0.005 (p-values <0.05 were considered to indicate statistical significance between the groups). Furthermore, the study reports that the ivB group gained a median of 9 ETDRS letters, whereas the laser group gained a median of 2.5 letters (p=0.005). Since there was a significant difference in the duration of CSME between the two groups, the authors performed the analysis after adjusting for this variable. They also adjusted for baseline BCVA and for patients who had cataract surgery during the study. The mean BCVA remained significantly higher in the ivB group than in the laser group.

A marked difference has also been shown in the proportion of patients who gained or lost vision between the two treatment groups. Approximately 49% of patients in the ivB group gained ten or more ETDRS letters, compared to seven percent of patients in the laser group (p=0.01). Similarly, whereas 86% of patients in the laser group lost fewer than 15 ETDRS letters (p=0.002), no patient in the ivB group lost 15 or more. In addition, the study also implied that BCVA and CMT can be maintained long term with reduced injection frequency of six to twelve months. However, the authors also suggest that increasing the frequency of injections to every four weeks (rather than the six week frequency opted for in the study) may provide better visual acuity gains, as reported in the RISE and RIDE studies. [13]

Central macular thickness outcomes

The mean change in CMT over the 24 month period was -146±171μm in the ivB group compared to -118±112μm in the laser group (p=0.62), showing no statistically significant difference between ivB and laser in reducing CMT. This differed from the twelve month report of the same study, which indicated greater improvement in CMT in the ivB group than in the laser group.

Retinopathy

Results of the BOLT Study indicated a trend of reducing retinopathy severity level in the ivB group, while the laser group showed stabilised grading. However, the Mann-Whitney test indicated no significant difference between the groups (p=0.13). [24]

We summarised the results of the authors’ analysis of the step-wise changes in retinopathy grading levels into three categories for further analysis: deteriorating, stable and improving (Table 3). As shown in the table, we calculated p-values between the two groups for each category using the chi-square test.
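The chi-square comparison described above can be reproduced for any 2x2 table. Below is a minimal Python sketch; the counts are hypothetical (Table 3 itself is not reproduced here), and for a 2x2 table with one degree of freedom the p-value follows directly from the complementary error function.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value for a 2x2 table
    [[a, b], [c, d]] with one degree of freedom."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With df = 1, the chi-square variate is the square of a standard
    # normal, so P(X > stat) = erfc(sqrt(stat / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: 10/20 ivB vs 5/20 laser patients in one category.
stat, p = chi2_2x2(10, 10, 5, 15)  # stat ≈ 2.67, p ≈ 0.10
```

The same function applies to each of the three severity categories in turn, comparing ivB against laser counts.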

We attempted to further quantify the magnitude of the effect of ivB treatment, compared to laser, on retinopathy severity level by calculating the number needed to treat (NNT) using the data in Table 3. The results showed an absolute risk reduction of nine percent with an NNT of 10.9 (95% CI ranging from benefit in 3.6 to harm in 21.6 patients treated). Since the confidence interval spans both benefit and harm, this trial does not give sufficient information to inform clinical decision making regarding change in retinopathy severity levels with ivB treatment.
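As an illustration of this calculation, the absolute risk reduction (ARR), NNT and a Wald-type 95% confidence interval can be computed as follows. The event counts below are hypothetical stand-ins chosen to give an ARR near nine percent, since Table 3 is not reproduced here.

```python
import math

def arr_and_nnt(events_tx, n_tx, events_ctrl, n_ctrl):
    """Absolute risk reduction (control risk minus treatment risk),
    NNT and a Wald 95% confidence interval for the ARR."""
    p_t = events_tx / n_tx
    p_c = events_ctrl / n_ctrl
    arr = p_c - p_t
    se = math.sqrt(p_t * (1 - p_t) / n_tx + p_c * (1 - p_c) / n_ctrl)
    ci = (arr - 1.96 * se, arr + 1.96 * se)
    nnt = 1 / arr if arr != 0 else float("inf")
    return arr, nnt, ci

# Hypothetical: 3/37 ivB vs 5/28 laser patients deteriorating.
arr, nnt, (lo, hi) = arr_and_nnt(3, 37, 5, 28)
# arr ≈ 0.097, nnt ≈ 10.3; the CI crosses zero, so the NNT interval
# spans from benefit to harm, mirroring the uncertainty noted above.
```

When the ARR confidence interval crosses zero, inverting its endpoints yields an NNT range that runs from benefit through infinity to harm, which is why such a result cannot inform clinical decision making.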

Safety outcome measures

As mentioned, one of the strengths of the BOLT Study is evaluating the safety profile of ivB given its two year duration. The study analysed the safety outcomes of macular perfusion and retinal nerve fibre layer (RNFL) thickness in detail. The results indicated no significant difference in the mean greatest linear diameter of foveal avascular zone between the laser and the ivB group, from baseline or in the worsening of severity grades. Similarly, no significant changes in median RNFL thickness have been reported between ivB and laser groups.

At 24 months, the number of observed adverse events, ocular and systemic, in the study was low. We have analysed the odds ratios (Table 4) as per the published results of the study. Statistically significantly higher odds of eye pain and irritation (eighteen times greater odds) during or after intervention, of sustaining sub-conjunctival haemorrhage, and of having a red eye (eighteen times greater odds) were found in the ivB group compared to the laser group. As can be further inferred from the table, no significant differences between the groups were found in other non-ocular adverse events, ocular serious adverse events or non-ocular serious adverse events, including stroke/MI/other thrombo-embolic events.
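The odds ratios in Table 4 follow the standard 2x2 construction; a brief sketch with hypothetical counts (the table's actual event counts are not reproduced here), using the usual log-scale 95% confidence interval:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a events / b non-events in the
    treatment arm versus c events / d non-events in the control arm,
    with a 95% CI computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical: 10/30 ivB vs 2/38 laser patients with an ocular event.
or_, (lo, hi) = odds_ratio(10, 30, 2, 38)
# or_ ≈ 6.3; the lower CI bound exceeds 1, i.e. a significant excess.
```

An odds ratio is read as significant when its 95% confidence interval excludes 1, which is the criterion applied to the Table 4 comparisons discussed above.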

Clinical applicability of results

The BOLT Study participants were from Moorfields Eye Hospital (UK) and had demographics and healthcare standards comparable to Australia. The study considered both patient-oriented outcomes (BCVA, retinopathy severity level changes, adverse events) and disease-oriented outcomes (CMT), making it both theoretically and practically relevant, informing clinicians and researchers alike. Given this, the clinical applicability of the results to the Australian population appears reasonable. The personnel involved in the study (such as outcome assessors) and the imaging technology are also available here, making the treatment feasible in our setting.

In Australia, the overall prevalence of diabetic retinopathy is 24.5%, [6] and this figure rises every year with the progressing obesity/diabetes epidemic. Bevacizumab is currently approved under the Pharmaceutical Benefits Scheme for metastatic colon cancer.

It is being used successfully ‘off-label’ for the treatment of ocular conditions including age related macular degeneration and diabetic macular oedema. It costs about 1/40th as much as ranibizumab, another anti-VEGF drug that is currently approved for AMD treatment in Australia and FDA-approved for DME treatment in America. [26] Recent studies indicate no superior effect of ranibizumab over bevacizumab in safety or efficacy in preserving visual acuity, [27,28] and recent NICE guidelines also recommend against ranibizumab for diabetic macular oedema due to the high costs involved in administering that drug. [29] Bevacizumab should therefore be further considered and evaluated for cost effectiveness in routine clinical practice.

Given the benefits of ivB, that is, improved BCVA, no significant adverse events and no risk of permanent laser scarring of the retina, and the discussion above, the use of ivB in the treatment of persistent DME appears to be evidence based and relatively safe practice.

Conclusion

The BOLT Study assessed the safety and efficacy of ivB in DME persisting despite previous laser therapy. The study had a power of 0.8 to detect BCVA differences between the two groups. In line with many previous studies evaluating the efficacy of ivB, the results indicate a significant improvement in mean ETDRS BCVA, with no significant difference in severe systemic or ocular adverse events compared with the laser group. This study supports, with adequate precision, the use of ivB in patients with CSME. However, the magnitude of the effect on changes in diabetic retinopathy severity, on CMT changes and on other adverse events needs to be evaluated further through large prospective RCTs.

Conflict of interest

None declared.

Correspondence

pavani.kurra@gmail.com


Metastatic melanoma: a series of novel therapeutic approaches

The following report documents the case of a 63 year old male with metastatic melanoma following a primary cutaneous lesion. Investigation into the molecular basis of melanoma has identified crucial regulators in melanoma cell proliferation and survival, leading to the inception of targeted treatment and a shift toward personalised cancer therapy. Recently, the human monoclonal antibody ipilimumab and the targeted BRAF inhibitor vemurafenib have demonstrated promising results in improving both progression-free and overall survival.

Introduction

A diagnosis of metastatic melanoma confers a poor prognosis, with a median overall survival of six to ten months. [1-3] This aggressive disease process is of particular relevance in Australia, owing to a range of adverse risk factors including a predominantly fair-skinned Caucasian population and high levels of ultraviolet radiation. [4-6] While improved awareness and detection have helped to stabilise melanoma incidence rates, Australia and New Zealand continue to display the highest incidence of melanoma worldwide. [4-7] Clinical trials have led to two breakthroughs in the treatment of melanoma: ipilimumab, a fully human monoclonal antibody, and vemurafenib, a targeted inhibitor of BRAF V600E.

Case Presentation

The patient, a 63 year old male, initially presented to his general practitioner ten years ago with an enlarging pigmented lesion in the centre of his back. Subsequent biopsy revealed a grade IV cutaneous melanoma with a Breslow thickness of 5mm. A wide local excision was performed, with primary closure of the wound site. Sentinel node biopsy was not carried out, and a follow-up scan six months later found no evidence of melanoma metastasis.

In mid-2010, the patient noticed a large swelling in his left axilla. A CT/PET scan demonstrated increased fluorodeoxyglucose avidity in this area, and an axillary dissection was performed to remove a tennis ball-sized mass that was histopathologically identified wholly as melanoma. A four week course of radiotherapy was commenced, followed by six weeks of interferon therapy. However, treatment was discontinued when he developed acute abdominal pain caused by pancreatitis.

CT/PET scans were implemented every three months; in early 2011 pancreatic metastases were detected.

The tumour was tested for a mutation in BRAF, a protein in the mitogen-activated protein kinase (MAPK) signaling pathway. BRAF mutations are found in approximately half of all cutaneous melanomas and are the target of a recently developed inhibitor, vemurafenib. [8-11] The patient’s test was negative, and he was commenced on a clinical trial of nanoparticle albumin bound (nab) paclitaxel. He completed a nine month course of nab-paclitaxel, experiencing many adverse effects including extreme fatigue, nausea, and arthralgia. A CT/PET scan demonstrated almost complete remission of his pancreatic lesions. Despite this progress, three months after completing treatment a follow-up CT/PET scan revealed liver metastases that were confirmed by biopsy.

In 2012 he was commenced on the novel immunotherapy agent ipilimumab, given as a series of four infusions of 10mg/kg three weeks apart. One week after his second dose, he was admitted to hospital with a two day history of sustained high fevers above 40°C, rigors, sweats, and diffuse abdominal pain. These symptoms were preceded by a week-long mild coryzal illness. On investigation he had liver enzymes elevated to more than double the reference range, and his blood cultures were negative. His symptoms settled within eight days, and he was discharged after an admission of two weeks in total.

The patient remains hopeful about his future, and is optimistic about the ‘fighting chance’ that this novel therapy has presented.

Discussion

The complexity of the melanoma pathogenome poses a major obstacle in developing efficacious treatments; however, the identification of novel signaling pathways and oncogenic mutations is challenging this paradigm. [12,13] The resultant development of targeted treatment strategies has clinical importance, with a number of new molecules targeting melanoma mutations and anomalies specifically. The promise of targeted treatments is evident for a number of other cancers, with agents such as trastuzumab in HER-2 positive breast cancer and imatinib in chronic myelogenous leukaemia now successfully employed as first-line options. [14,15]

This patient’s initial treatment with interferon-alpha aimed to eradicate remaining micro-metastatic disease following tumour resection. While interferon-alpha has shown a disease-free survival benefit, studies have failed to consistently demonstrate a significant improvement in overall survival. [16-18]

Favourable outcomes in progression-free and median survival have been indicated for the taxane-based chemotherapy nab-paclitaxel that he next received; however, it has also been associated with concerning toxicity and side effect profiles. [19]

Ipilimumab is a promising development in immunotherapy for metastatic melanoma, with significant improvements in overall survival reported in two recent phase III randomised clinical trials. [20,21] This novel monoclonal antibody modulates the immune response by blocking cytotoxic T lymphocyte-associated antigen 4 (CTLA-4), which competitively binds B7 on antigen presenting cells to prevent secondary signaling. When ipilimumab occupies CTLA-4, the immune response is upregulated and host versus tumour activity is improved. This modification of native and tumour-specific immune responses gives ipilimumab a profile of adverse events different from that seen with conventional chemotherapy. Immune-related dermatologic, gastrointestinal, and endocrine side effects have been observed, with the most common immune-specific adverse events being diarrhoea, rash, and pruritus (see Table 1). [20,21] The resulting patterns of clinical response to ipilimumab also differ from those of conventional therapy. Clinical improvement and favourable outcomes may manifest as disease progression prior to response, durable stable disease, development of new lesions while the original tumours abate, or a reduction of baseline tumour burden without new lesions. [22]

Recently discovered clinical markers may offer predictive insight into ipilimumab benefit and toxicity, and their identification is a key goal in the development of personalised medicine. Pharmacodynamic effects on gene expression have been demonstrated, with baseline and post-treatment alterations in CD4+ and CD8+ T cells implicated in both the likelihood of relapse and the occurrence of adverse events. [23] Novel biomarkers that may be associated with a positive clinical response include immune-related tumour biomarkers at baseline and a post-therapy increase in tumour-infiltrating lymphocytes. [24]

Overall survival was reported as 10 and 11.2 months in the two phase III studies, compared with 6.4 and 9.1 months in the control arms. [20,21] Furthermore, recently published data on the durability of response to ipilimumab have indicated five year survival rates of 13%, 23%, and 25% in three separate earlier trials. [25]

Somatic genetic alterations in the MAPK signaling cascade have been identified as key oncogenic mutations in melanoma, and research into independent BRAF driver mutations has resulted in the development of highly selective molecules such as vemurafenib. Vemurafenib inhibits the constitutive activation of mutated BRAF V600E, thereby preventing the upregulated downstream effects that drive melanoma proliferation and survival. [26,27] A multicentre phase II trial demonstrated a median overall survival of 15.9 months, and a subsequent phase III randomised clinical trial was ended prematurely after pre-specified statistical significance criteria were attained at interim analysis. [8,9] Crossover from the control arm to vemurafenib was recommended by an independent board for all surviving patients. [8] Conversely, in patients with mutated upstream RAS and wild-type BRAF, the use of vemurafenib is unadvisable on the basis of preclinical models: for these mutations, BRAF inhibition may lead to paradoxical gain-of-function activation of the MAPK pathway, driving tumourigenesis rather than promoting downregulation. [13] The complexity of BRAF signaling and reactivation of the MAPK pathway is highly relevant to the development of intrinsic and acquired drug resistance to vemurafenib. Although the presence of the V600E mutation generally predicts response, the acquisition of secondary mutations has resulted in short-lived treatment durations. [28]

Ipilimumab and vemurafenib, when used individually, clearly demonstrate improvements in overall survival. Following the success of these two agents, a study examining combination therapy in patients testing positive to the BRAF V600E mutation is currently underway. [29]

With the availability of new treatments for melanoma, the health care economics of niche market therapies must be acknowledged. The cost of these drugs is likely to be high, making them difficult to subsidise even in countries such as Australia where public pharmaceutical subsidies exist. Decisions about public subsidy of drugs are often made on cost-benefit analyses, which may be inadequate in expressing the real-life benefit of prolonging a patient’s lifespan in the face of a disease with a dismal prognosis. Non-subsidy may leave these medicines available only to those who can afford them, and it is concerning when treatment becomes a commodity stratified by individual wealth rather than need. This problem surrounding novel treatments is only expected to grow across many fields of medicine with the torrent of medical advances to come.

Conclusion

This case illustrates the shift in cancer therapy for melanoma towards a model of personalised medicine, where results of genomic investigations influence treatment choices by potentially targeting specific oncogenes driving the cancer.

Conflict of interest

None declared.

Consent declaration

Informed consent was obtained from the patient for publication of this case report.

Correspondence

J Read: jazlyn.read@griffithuni.edu.au

 


An unusual case of bowel perforation in a 9 month old infant

In Australia, between 2009 and 2010, almost 290 000 cases of suspected child abuse and neglect were reported to state and territory authorities. Child maltreatment may present insidiously, with signs not elicited until after a culmination of events. Ms. LW, a 9 month old Indigenous female, presented to the Alice Springs Hospital emergency department (ED) with bloody diarrhoea. A provisional diagnosis of viral gastroenteritis was made and she was managed with fluids, to which her vital signs responded positively. She was discharged six hours after presentation but returned three days later in a worsened condition with a grossly distended abdomen. Exploratory laparotomy found a perforated jejunum, which was deemed a non-accidental injury. This case outlines pitfalls in collateral communication, in particular the failure to use an interpreter or Aboriginal health worker. We also emphasise the onus on junior doctors to practise reflectively despite the pressures of the ED, so that key diagnostic clues are not missed. Early detection of chronic maltreatment is important in preventing toxic stress to the child, which has been shown to contribute to a greater burden on society in the form of chronic manifestations later in life.

Introduction

Maltreatment, especially of children, can be insidious in nature, and its signs may not become evident until after a culmination of unfortunate events. In Australia, during 2010-11, there were 286,437 reports [1] of suspected child abuse and neglect made to state and territory authorities, with a total of 40,466 substantiations (Figure 1). These notifications cover four maltreatment types: physical abuse, sexual abuse, emotional abuse and neglect (Figure 2). As of 30 June 2010, there were 11,468 Aboriginal and Torres Strait Islander children in out-of-home care as a result. The national rate of Indigenous children in out-of-home care was almost ten times that of non-Indigenous children. [1]

The child protection statistics shown above tell us how many children have come into contact with child protection services; they do not, however, capture the silent statistics of those who suffer without seeking aid. In all jurisdictions in 2010-11, girls were much more likely than boys to be the subject of a substantiation of sexual abuse. In contrast, boys were more likely than girls to be the subject of physical abuse in all jurisdictions except Tasmania and the Northern Territory. [1]

Unfortunately, it is difficult to obtain accurate statistics on the number of children who die from abuse or neglect in Australia, as comprehensive information is not currently collected in every jurisdiction. Nevertheless, the latest recorded data indicate that in 2006 assault was the third most common type of fatal injury among Australian children aged 0-14 years, [2] accounting for 27 child deaths in 2006-07. Medical practitioners must be aware of the signs of child maltreatment and their long-term consequences, as they have the opportunity to intervene and change the course of this terrible burden on afflicted children.

Case Presentation

Ms. LW, a nine month old Indigenous female, presented with her mother to the Alice Springs ED at 2100 with bloody diarrhoea. Emergency department staff noted that on presentation the infant was notably uncomfortable and tearful. She was afebrile with mild tachypnoea (50 respirations per minute); all other vital signs were normal. Examination of the infant revealed discomfort in the epigastric region with no other significant findings; in particular, there was no organomegaly or distention, and no guarding or rigidity was noted on admission. Systemic review showed no significant findings. Past medical history included recurrent chest infections, with the last episode two months prior. No immunisation history was available. The staff had difficulty examining the child because she was highly irritable. It was also difficult to elicit a comprehensive history from the mother, who spoke minimal English and was relatively dismissive of questions. No interpreter was used in this setting.

The patient was diagnosed with viral gastroenteritis and treated conservatively with intravenous fluids to maintain hydration. After six hours of observation and a slight improvement in Ms. LW’s vital signs, she was sent home in the early hours of the morning under intense pressure from the family. No other treatments or investigations were undertaken, and she was discharged with the recommendation to return if symptoms worsened over the following day.

The patient returned to the ED three days later with symptoms clearly different in nature from those of the earlier diagnosis of gastroenteritis. On general observation the patient appeared unwell, irritable and was crying weakly. On examination she was febrile (40°C) and toxic, with tachycardia (168 bpm), tachypnoea (60 respirations per minute) and gross distention of her abdomen (Figure 3).

The case was referred to the on-call surgeon, who made a provisional diagnosis of perforated bowel and decided to perform a laparotomy. Before surgery she was immediately started on intravenous broad-spectrum antibiotics: ampicillin (200 mg six-hourly), metronidazole (30 mg twelve-hourly) and gentamicin (20 mg daily).

Emergency laparotomy was performed, and on initial exploration it was found that the peritoneum contained foul smelling serous fluid with a mixture of blood and faecal matter. Further exploration found perforation of the jejunum with the mesentery torn from the fixed end of the jejunum (Figure 5). The surgeons resected the gangrenous portion of the jejunum and performed an end-to-end anastomosis of small bowel.

The abdomen was lavaged with copious amounts of warm saline and the abdominal wall was closed in interrupted layers. Post surgery, the child remained intubated and ventilated and was admitted to the ICU. She was extubated successfully 24 hours after surgery, and oral feeding was commenced 48 hours after surgery. The patient made an uneventful recovery and was later transferred to the paediatric ward.

The surgeons commented that the initial perforation of the jejunum at its fixation to the mesentery caused devascularisation of this portion, leading to further degradation and a gangrenous state of the intestine, thus worsening the child’s condition.

As the surgeons had indicated that this injury was of a non-accidental nature, the parents of the infant were brought in to be interviewed by the consultant, with the aid of an interpreter. The parents denied any falls or injuries sustained in the events leading to the presentation, which the surgical team had already excluded due to the absence of associated injuries and symptoms. The consultant noted that both parents were not forthcoming with information even with the aid of an interpreter. Further questioning by the allied health team finally led to an answer. The father admitted that on the morning of the initial presentation, while he was sitting on the ground, his daughter pulled his hair from behind him, to which he responded by elbowing her in the mid-region of her abdomen. Upon obtaining this information a skeletal survey was undertaken, which found a hairline fracture of the shaft of the left humerus and minor bruising in this region.

Case resolution

The infant was taken into care on the basis of neglect, and the case was mandatorily reported to Child Protective Services. The parents were reported to the police for further questioning and probable court hearings. Once stable, the patient was discharged into the care of her grandmother, with a further review to be made by Child Protective Services at a later date.

Discussion

Child abuse remains a cause for concern in Australia, although there has been a decrease in substantiations since 2007. [3] Despite the overall decrease, at a state level Victoria, South Australia, Western Australia, Tasmania and the Northern Territory have all recorded an increase in the number of abuse substantiations. The most common abuse type reported in 2010-2011 was emotional abuse (36%), followed by neglect (29%), physical abuse (22%) and sexual abuse (13%).

Children who suffer maltreatment carry not only physical burdens but often many associated long-term problems. [4] The recently coined term for this is ‘toxic stress’, which results from sustained neglect or abuse. Affected children are unable to cope and persistently activate the body’s stress response (elevated cortisol levels). Over a prolonged period this can lead to permanent changes in the development of the immune and central nervous systems (e.g. the hippocampus). [5] The resulting cognitive deficits can manifest in adult life as poor academic performance, substance abuse, smoking, depression, eating disorders, risky sexual behaviours, adult criminality and suicide. [6] These health issues contribute a significant proportion of society’s health burden.

Medical practitioners, especially those working in the ED, are in an advantageous position to intervene in child toxic stress. It is important to be aware of signs or ‘red flags’ that may point to maltreatment, including: failure to thrive, burn marks (e.g. from cigarettes), unusual bruising and injuries, symptoms that do not match the history, recurrent presentations to health services, recurrent vague symptoms, a cold and withdrawn child, lethargic appearance, immunodeficiency without specific pathology and, less commonly, Munchausen syndrome by proxy. [7]

As noted above, child protection data reflect only those cases reported to child protective services. Economically disadvantaged families are more likely to come into contact with, and be under the scrutiny of, public authorities, meaning that abuse and neglect are more likely to be identified in this group; [4] however, child abuse occurs across all socioeconomic demographics.

This case illustrates common pitfalls in the clinical setting, one being the lack of a clear history at initial presentation. Although poor communication prevented the attending doctor from gaining meaningful information from the patient’s mother, no interpreting service or Aboriginal health worker was used. Aboriginal health workers have usually lived in the community in which they work and have often developed lasting relationships with the community and with the various government agencies, [8] making them experts at bridging the communication gap between patient and doctor.

Another clinical pitfall demonstrated by this case was the inadequate examination of the infant, and the failure to recognise important signs such as guarding and rigidity, which are highly suggestive of insidious pathology. These findings would have led a clinician to perform further investigations, such as a CXR or CT scan, which would have identified the underlying pathology. Additionally, no systemic examination was conducted in the haste to discharge the patient from the ED; this meant that another important sign of abuse, the bruising on the infant’s left arm, was missed. Furthermore, no investigations were performed at the initial presentation, so the diagnosis of viral gastroenteritis was never confirmed, and bacterial gastroenteritis was not properly excluded despite being highly likely in the context of bloody diarrhoea.

Emergency department physicians face many stressors and constant interruptions during their shifts, a combination known to cause breaks in routine tasks. [9] In 2008, the Australian Medical Association conducted a survey of 914 junior doctors and found that the majority met well established criteria for low job satisfaction (71%), burnout (69%) and compassion fatigue (54%). [10] These factors indirectly affect patient outcomes and, in particular, can lead to overlooking key diagnostic clues. Since the recent introduction of the National Emergency Access Target (NEAT), also known as the ‘4 hour rule’, statistics have shown no change in mortality. [11,12] However, this is a recent implementation, and with junior doctors and nursing staff pushed for a high turnover of patients there is a possibility that child maltreatment may be missed.

Recommendations

1. Early recognition of child abuse requires a high index of suspicion.

2. Be familiar with mandatory reporting legislation, as it varies between states and territories.

3. As junior doctors, it is imperative that we use all hospital services, such as interpreting services and Aboriginal health workers, to ensure optimal history taking.

4. It is important to practise in a reflective manner to prevent inexperience, external pressures and job dissatisfaction from affecting the quality of patient care.

5. Services should be encouraged to have Indigenous social/case workers available for consultation.

Conclusion

Paediatric presentations within a hospital can be very challenging, and as junior doctors have the most contact with these patients, they must be aware of the important signs of abuse and neglect. We have outlined the importance of communicating well with Indigenous patients and the pitfalls that arise when this is done poorly. Doctors are in a position to detect child abuse and to intervene before the long-term consequences manifest.

Conflict of interest

None declared.

Consent declaration

Informed consent was obtained from the next-of-kin for publication of this case report and all accompanying figures.

Correspondence

M Jacob: matt.o.jacob@gmail.com

 


Health care to meet the future needs of New South Wales

When thinking about innovation and ways to transform how health care is delivered to the patients of today and tomorrow, the importance and growing potential of e-Health springs to mind.

In Opposition, and now as Minister for Health and Minister for Medical Research, I am absolutely convinced of the enormous gains to be made using e-Health technology, whether through electronic patient records, telehealth connecting clinicians in remote settings, or better management of assets, clinical information and governance arrangements.

My years in Opposition did much to foster my knowledge, understanding and passion for the health system and I had the rewarding opportunity to meet with leaders in health care to discuss both challenges and future possibilities.

In 2007, as NSW Shadow Minister for Health, I was humbled to be invited to speak at Hewlett Packard’s Health and Life Sciences Symposium. Hosted in San Diego, the conference connected specialists in the e-Health field from across the globe to discuss its impacts and contribution to the wider health agenda.

The conference was incredibly inspiring for an aspiring health minister. While many there were caught up in the gadgetry, I found myself thinking a lot about the experiences of patients back home and how they could be improved through advancements in technology.

Now, the NSW health system boasts one of the largest information and communication technology (ICT) portfolios of any government agency or corporate organisation in this country.

ICT has an important role to play in the delivery of health services, whether in acute hospital care, preventative health, patient self-care or treatments provided in a range of health care settings: in a patient’s home, in the community, in a private or not-for-profit facility, or through the public health system.

And these services will be delivered by a range of health professionals, including those of you reading who hope to enter these fields.

For many years, I have been committed to enhancing e-Health services in this state as it is these very services that put the patient at the forefront while boosting contemporary methods of care.

NSW Health will spend more than $1.5 billion over the next 10 years on ICT to improve both care and patient outcomes across the state.

We’ve achieved a lot in this space in the past 12-18 months and are setting the foundations to do great things for the benefit of patients in the future.

We have developed numerous innovative e-Health programs across the health system including:

  • We are using telehealth to link patients in rural and regional NSW with face-to-face specialist care in tertiary hospitals, making services available anywhere, anytime.
  • We are collaborating with clinicians by introducing voice recognition software in emergency departments to free up precious time for patient care.
  • We have established real-time emergency department waiting time data for the community, published online and updated every fifteen minutes.
  • We have technology that now provides instant digital images, which can be reviewed and reported by specialist doctors even before the patient is back in the ward. This slashes waiting times for results and is seeing treatments delivered earlier than ever before.
  • We are developing and introducing apps and tablet technology to provide instant access to clinical research and digital medical libraries for better information sharing between clinicians and their colleagues.
  • We are supporting trials where electronic health records have revolutionised the speed and accuracy of medical information sharing between hospital wards and between patients and their general practitioners.
  • We are using technology to better track financial and performance management, not only in clinical incident monitoring, but in preparations for an Activity Based Funding model and to ensure value for money for every tax-payer dollar spent.

These are not future ambitions. These are services being utilised today to ensure our patients are not just well-cared for but well-informed and connected to health services.

As the Minister for Health, I want to see these initiatives driving better performance in our state’s hospitals – leading to better outcomes for patients and their families.

Telehealth remains a particular passion of mine. From what I am seeing utilised by clinicians on the ground, it is an impressive tool and one that sees patients receiving the best possible care with treatment transcending geographical barriers.

Recently, I was in Canberra to launch the Improving Critical Care Outreach and Training in the ACT and South East NSW project.

This pilot telehealth system will connect Canberra Hospital emergency department and helicopter base with hospitals at Queanbeyan, Moruya, Batemans Bay and Cooma.

It utilises overbed cameras, microphones and speakers and has viewing monitors positioned in the resuscitation area of the Emergency Department in the NSW spoke sites. The system uses the ACT and NSW videoconferencing networks to transmit images and vital signs to the referral or hub site in the ACT.

Telehealth initiatives have become a key component of clinical care and improving access to services in NSW.

We currently have more than 600 videoconferencing locations across the state which are used for a range of services in the areas of mental health; critical and emergency care; oncology; radiology; diabetic foot care; genetic services; and, chronic disease management.

The NSW Government is committed to supporting innovative projects such as this for the benefit of patients across NSW. We currently oversee the provision of telehealth technology in a variety of health facilities across regional NSW including Goulburn, Queanbeyan, Yass, Braidwood, Crookwell, Moruya, Bega, Batemans Bay, Cooma, Pambula and Bombala.

Telehealth affords local patients the opportunity to be treated locally with the support, guidance and expertise of clinicians at tertiary teaching hospitals.

Do medical students have a role to play in the state’s e-Health agenda? Absolutely.

When I started out in politics almost two decades ago, e-Health was considered the stuff of science fiction. Now we are seeing its use move from the bench to the bedside for the benefit of patients.

As our tech-savvy workforce increases, we will get smarter and more innovative.

I want a resilient system but one that is flexible and able to innovate to achieve greater efficiencies.

Above all, I want a health system that can deliver the highest quality care to patients.

Health often gets lost in statistics, but technology does not substitute for the high quality care provided by clinicians; rather, it enhances it.

By providing both current and future clinicians with the modern tools and information they need, we are going a long way to empowering them to achieve much more for their patients.

Categories
Case Reports Articles

Blood culture negative endocarditis – a suggested diagnostic approach

This case report describes a previously healthy male patient with a subacute presentation of severe constitutional symptoms, progressing to acute pulmonary oedema, and a subsequent diagnosis of blood culture negative endocarditis with severe aortic regurgitation. Blood culture negative endocarditis represents an epidemiologically varying subset of endocarditis patients, as well as a unique diagnostic dilemma. The cornerstones of diagnosis lie in careful clinical assessment and exposure history, as well as knowledge of common aetiologies and appropriate investigations. The issues of clinically informed judgement and of having a systematic approach to the diagnosis of these patients, especially within an Australian context, are discussed. Aetiological diagnosis in these patients modifies and directs treatment, which is fundamental in minimising the high morbidity and mortality associated with endocarditis.

Case

Mr NP was a previously healthy, 47 year old Caucasian male who presented to a small metropolitan emergency department with two days of severe, progressive dyspnoea which was subsequently diagnosed as acute pulmonary oedema (APO). This occurred on a three month background of dry cough, malaise, lethargy and an unintentional weight loss of 10 kilograms.

History

Apart from the aforementioned, the history of Mr NP’s presenting complaint was unremarkable. In the preceding three months he had been treated in the community for pertussis and atypical pneumonia, with no significant improvement. Notably, this therapy included two courses of antibiotics (the patient could not recall the specifics), with the latest course completed the week prior to admission. He had no relevant past medical or family history, specifically denying a history of tuberculosis, malignancy, and heart and lung disease. There were no current medications or known allergies; he denied intravenous or other recreational drug use, reported minimal alcohol use, and had never smoked.

Mr NP lived in suburban Melbourne with his wife and children. He kept two healthy dogs at home. There had been no sick contacts and no obvious animal or occupational exposures, although he noted that he occasionally stopped cattle trucks on the highway as part of his occupation, with no direct contact with the cattle. Two months prior to admission he had travelled to Auckland, New Zealand for two weeks, with no stopovers, notable exposures or travel elsewhere in the country.

During the initial assessment of Mr NP’s acute pulmonary oedema, blood cultures were drawn, with a note made of oral antibiotic use during the preceding week. A transthoracic echocardiogram (TTE) found moderate aortic regurgitation with left ventricular dilatation. A subsequent transoesophageal echocardiogram (TOE) noted severe aortic regurgitation, a one centimetre vegetation on the aortic valve with destruction of the coronary leaflet, and left ventricular dilatation with preserved ejection fraction (greater than 50%). Blood cultures, held for 21 days, revealed no growth.

Empirical antibiotics were started and Mr NP was transferred to a large quaternary hospital for further assessment and aortic valve replacement surgery.

Table 1. A suggested schema for assessing exposures to infectious diseases during the clinical history, illustrated using the commonly used CHOCOLATES mnemonic.

Exposure Assessment Schemata: CHOCOLATES mnemonic
Country of origin

Household environment

Occupation

Contacts

Other: immunisations, intravenous drug use, immunosuppression, splenectomy, etc.

Leisure activities/hobbies

Animal exposures

Travel and prophylaxis prior

Eating and drinking

Sexual contact

AVR – Aortic valve replacement; ANA – Antinuclear antibodies; ENA – Extractable nuclear antigens

Examination

Examination of Mr NP, after transfer and admission, showed an alert man, pale but with warm extremities, with no signs of shock or sepsis. Vital signs revealed a temperature of 36.2°C, heart rate of 88 beats per minute, blood pressure of 152/50 mmHg (wide pulse pressure of 102 mmHg) and respiratory rate of 18 breaths per minute, saturating at 99% on room air.

No peripheral stigmata of endocarditis were noted, and there was no lymphadenopathy. Examination of the heart and lungs noted a loud diastolic murmur through the entire precordium, which increased with full expiration, but was otherwise normal with no signs of pulmonary oedema. His abdomen was soft and non-tender with no organomegaly noted.

Workup and Progress

Table 2 shows relevant investigations and results from Mr NP.

Table 2. Table outlining the relevant investigation results for Mr NP performed for further assessment of blood culture negative endocarditis.



Investigation | Result
Blood cultures: repeat blood cultures x 3 (on antibiotics) | No growth to date; held for 21 days
Autoimmune: rheumatoid factor | Weak positive – 16 [N <11]
Autoimmune: ANA | Negative
Autoimmune: ENA | Negative
Serology: Q fever | Phase I negative; Phase II negative
Serology: Bartonella | Negative
Serology: atypical organisms (Legionella, Mycoplasma) | Negative
Valve tissue (post AVR): histopathology | Non-specific chronic inflammation and fibrosis
Valve tissue (post AVR): microscopy and culture | Gram positive cocci seen; no growth to date
Valve tissue (post AVR): 16S rRNA | Streptococcus mitis
Valve tissue (post AVR): 18S rRNA | Negative

Empirical antibiotics for culture negative endocarditis were initiated during the initial presentation and were continued after transfer and admission:

Benzylpenicillin for streptococci and enterococci

Doxycycline for atypical organisms and zoonoses

Ceftriaxone for HACEK organisms

Vancomycin for staphylococci and resistant Gram positive bacteria.

During his admission, doxycycline was ceased after negative serology and microscopy identifying Gram positive cocci. Benzylpenicillin was changed to ampicillin after a possible allergic rash. Ceftriaxone, ampicillin and vancomycin were continued until the final 16S rRNA result from valvular tissue identified Streptococcus mitis, a viridans group streptococcus.

The patient underwent a successful aortic valve replacement (AVR) and was routinely admitted to the intensive care unit (ICU) post cardiac surgery. He developed acute renal failure, most likely due to acute tubular necrosis from a combination of bacteraemia, angiogram contrast, vancomycin, and the stresses of surgery and bypass. Renal function gradually returned as the contributing factors resolved, without the need to withdraw vancomycin, and Mr NP was discharged to the ward on ICU day six.

Mr NP improved clinically, reflected in a declining white cell count and a return to normal renal function. He was successfully discharged with Hospital in the Home for continued outpatient IV vancomycin, to a combined total duration of four weeks, and for follow-up review in clinic.

Discussion

There is an old medical adage that “persistent bacteraemia is the sine qua non of endovascular infection.” The corollary is that persistently positive blood cultures are a sign of an infection within the vascular system. In most clinical situations this is either primary bacteraemia or infective endocarditis, although other interesting, but less common, differentials exist (e.g. septic thrombophlebitis/Lemierre’s syndrome, septic aneurysms, aortitis). Consequently, blood culture negative endocarditis (BCNE) is both an oxymoron and a unique clinical scenario.

BCNE can be strictly defined as endocarditis (as per the Duke criteria) without known aetiology after three separate blood cultures with no growth after at least seven days, [1] although less rigid definitions have been used throughout the literature. The incidence is approximately 2-7% of endocarditis cases, although it can be as high as 31%, owing to factors such as regional epidemiology, prior antibiotic administration and the definition of BCNE used. [1-3] Importantly, the morbidity and mortality associated with endocarditis remain high despite multiple advances, and early diagnosis and treatment remain fundamental. [1,4,5]
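For illustration only, the strict definition above can be encoded as a trivial check. This sketch is not from the source: the function name and inputs are invented, and the Duke criteria assessment is of course a clinical judgement rather than a boolean input.

```python
# Toy encoding of the strict BCNE definition quoted in the text:
# endocarditis (per the Duke criteria) with no known aetiology after
# three separate blood cultures, each with no growth after at least
# seven days' incubation. Names and inputs are illustrative only.

def meets_strict_bcne_definition(duke_positive: bool,
                                 negative_culture_sets: int,
                                 min_incubation_days: int) -> bool:
    """True if the strict BCNE definition is satisfied."""
    return (duke_positive
            and negative_culture_sets >= 3
            and min_incubation_days >= 7)

# Mr NP: Duke-positive endocarditis with multiple negative culture
# sets, each held for 21 days.
print(meets_strict_bcne_definition(True, 3, 21))  # True
```

As the text notes, looser definitions appear in the literature; relaxing the culture-count or incubation thresholds in this check would reproduce those broader case definitions.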

The most common reason for BCNE is prior antibiotic treatment before blood culture collection, [1-3] as was the case with Mr NP. Additional associated factors for BCNE include exposure to zoonotic agents, underlying valvular disease, right-sided endocarditis and presence of a pacemaker. [1,3]

Figure 1 shows the aetiology of BCNE; Table 3 lists clinical associations and epidemiology of common organisms which may be identified during assessment. Notably, there is a high prevalence of zoonotic infections, as well as a large proportion remaining unidentified. [2] Additionally, the incidence of usual endocarditis organisms is comparatively high; in most cases these have been suppressed by prior antibiotic use. [2]

Table 3. Common aetiologies in BCNE and associated clinical features and epidemiology. [1,2,5-9]

Aetiology Clinical Associations and Epidemiology
Q Fever (Coxiella burnetii) Zoonosis: contact with farm animals (commonly cattle, sheep, and goats). Farmers, abattoir workers, veterinarians, etc. Check for vaccination in aforementioned high risk groups.
Bartonella spp. Zoonosis: contact with cats (B henselae); transmitted by lice, poor hygiene, homelessness (B quintana).
Mycoplasma spp. Ubiquitous. Droplet spread from person to person, increased with crowding. Usually causes asymptomatic or respiratory illness, rarely endocarditis.
Legionella spp. Usually L pneumophila; L longbeachae common in Australia. Environmental exposures through drinking/inhalation. Colonises warm water, and soil sediments. Cooling towers, air conditioners, etc. help aerosolise bacteria. Urinary antigen only for L pneumophila serogroup 1. Usually respiratory illness, rarely endocarditis.
Tropheryma whipplei Associations with soil, animal and sewerage exposures. Wide spectrum of clinical manifestations. Causative organism of Whipple’s Disease (malabsorptive diarrhoeal illness).
Fungi Usually with Candida spp. Normal GIT flora. Associated with candidaemia, HIV/immunosuppression, intravascular device infections, IVDU, prosthetic valves, ICU admission, parenteral feeding, broad spectrum antibiotic use. Associated with larger valvular vegetations.
HACEK organisms* Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella spp. Fastidious Gram negative rods. Normal flora of mouth and upper GI. Associated with poor dentition and dental work. Associated with larger valvular vegetations.
Streptococcus viridans group* Umbrella term for alpha haemolytic streptococci commonly found as mouth flora. Associated with poor dentition and dental work.
Streptococcus bovis* Associated with breaches of colonic mucosa: colorectal carcinoma, inflammatory bowel disease and colonoscopies.
Staphylococcus aureus* Normal skin flora. IVDU, intravascular device infections, post-operative valve infections.

IVDU – Intravenous drug user; GIT – Gastrointestinal tract.

* Traditional IE organisms. Most BCNE cases in which usual IE bacteria are isolated occur where antibiotics were given before culture. [1-3]



The HACEK organisms (Haemophilus, Actinobacillus, Cardiobacterium, Eikenella, and Kingella) are fastidious (i.e. difficult to grow) Gram negative oral flora. Consequently (and as a general principle for other fastidious organisms), these slow growing organisms tend to produce both more subacute presentations and larger vegetations at presentation. They have traditionally been associated with culture negative endocarditis, but advances in microbiological techniques mean that the majority can now be cultured within five days, and they have a low incidence in true BCNE. [1]

Q Fever is of particular importance as it is both the most common identified aetiology of BCNE and an important offender in Australia, given the large presence of primary industry and the consequent potential for exposure. [1-3,6] Q Fever is caused by the Gram negative obligate intracellular bacterium Coxiella burnetii (named after Australian Nobel laureate Sir Frank Macfarlane Burnet), and is associated in particular with various farm animal exposures (see Table 3). The manifestations of this condition are variable and nonspecific, and the key to diagnosis often lies in an appropriate index of suspicion and a thorough exposure history. [6] By contrast, Q fever is a very uncommon cause of BCNE in Northern Europe and the UK, and exposures in patients from this region may be less significant. [1,2,6]

The clinical syndrome is separated into acute and chronic Q Fever. This differentiation is important for two reasons: firstly, Q fever endocarditis is a manifestation of chronic, not acute, Q fever; and secondly, because of its implications for serological testing. [6] Q fever serology is the most common diagnostic method used, and is separated into Phase II (acute Q fever) and Phase I (chronic Q fever) serologies. Accordingly, to investigate Q fever endocarditis, Phase I serology must be performed. [6]

Given the high incidence of zoonotic aetiologies, the modified Duke criteria suggest that a positive blood culture or serology for Q fever be classed as a major criterion for the diagnosis of endocarditis. [10] However, Lamas and Eykyn [3] found that even with these modifications the Duke criteria remain a poor predictor for BCNE, identifying only 32% of their pathologically proven endocarditis patients. Consequently, they suggested the addition of minor criteria to improve sensitivity, noting in particular rapid-onset splenomegaly or clubbing, which can occur especially in patients with zoonotic BCNE. [3]

Figure 2 outlines the suggested diagnostic approach, modified from the original detailed by Fournier et al. [2] The initial steps are aimed at high incidence aetiologies and to rule out non-infectious causes, with stepwise progression to less common causes. Additionally, testing of valvular tissue plays a valuable role in aiding diagnosis in situations where this is available. [1,2,11,12]

16S ribosomal RNA (rRNA) and 18S rRNA gene sequence analyses are broad-range PCR tests that can amplify genetic material present in a sample. Primers target sections of the rRNA gene that are highly conserved across species, amplifying intervening sequences that are specific to individual organisms. The resulting sequence is then compared against a library of known genetic codes to identify the organism, if listed. 16S rRNA analysis identifies bacteria (prokaryotes); 18S rRNA is the eukaryotic equivalent used for fungi. These tests can play a fundamental role in identifying the aetiology where cultures are unsuccessful, although they must be interpreted with caution and clinical judgement, as their high sensitivity makes them susceptible to contamination and false positives. [11-13] Importantly, antibiotic sensitivity testing cannot be performed on these results, as no living microorganism is isolated. This may necessitate broader spectrum antibiotics to allow for potential unknown resistance, as demonstrated by the choice of vancomycin in the case of Mr NP.
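The library-comparison step can be sketched with a toy example. This is purely illustrative and not from the source: the “sequences” below are invented placeholders rather than real 16S data, and real-world identification uses curated databases and alignment tools, not naive per-base scoring of short strings.

```python
# Toy illustration of the library-comparison step in 16S rRNA identification.
# The "library" entries are invented placeholders, NOT real 16S sequences.

def percent_identity(a: str, b: str) -> float:
    """Fraction of matching bases over the shorter sequence length."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / n

def identify(read: str, library: dict, threshold: float = 0.97) -> str:
    """Return the best-matching organism, or 'unidentified' if the best
    score falls below the threshold (~97% identity is a commonly quoted
    species-level heuristic for 16S data)."""
    best_name, best_score = "unidentified", 0.0
    for name, ref in library.items():
        score = percent_identity(read, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else "unidentified"

# Invented mini-library keyed by organism name.
LIBRARY = {
    "Streptococcus mitis":   "AGCTTGCTACGGGAGGCAGCAG",
    "Staphylococcus aureus": "AGCTTGCTTCGGGAAGCTGCAG",
}

read = "AGCTTGCTACGGGAGGCAGCAG"   # amplified sequence from valve tissue
print(identify(read, LIBRARY))    # exact match to the S. mitis entry
```

The threshold is what makes the test conservative: a sequence that matches nothing in the library closely enough is reported as unidentified rather than forced onto the nearest organism, which mirrors the caution the text urges when interpreting these highly sensitive assays.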

The best use of 16S and 18S rRNA testing in the diagnosis of BCNE is upon valvular tissue; testing of blood is not very effective and not widely performed. [2,11,13] Notwithstanding, 18S rRNA testing on blood may be appropriate in certain situations where first line BCNE investigations are negative and fungal aetiologies become much more likely. [2] This can be prudent given that most empirical treatment regimens do not include fungal cover.

Fournier et al. [2] suggested the use of a Septifast© multiplex PCR (F Hoffmann-La Roche Ltd, Switzerland) – a PCR kit designed to identify 25 common bacteria often implicated in sepsis – in patients who have had prior antibiotic administration. Although studies have shown its usefulness in this context, it has been excluded from Figure 2 because, to the best of the author’s knowledge, this is not a commonly used test in Australia. The original diagnostic approach from Fournier et al. [2] identified aetiology in 64.6% of cases, with the remainder being of unknown aetiology.

Conclusion

BCNE represents a unique and interesting, although uncommon, clinical scenario. Knowledge of the common aetiologies and appropriate testing underpins the timely and effective diagnosis of this condition, which in turn modifies and directs treatment. This is especially important due to the high morbidity and mortality rate of endocarditis and the unique spectrum of aetiological organisms which may not be covered by empirical treatment.

Acknowledgements

The author would like to thank Dr Adam Jenney and Dr Iain Abbott for their advice regarding this case.

Consent declaration

Informed consent was obtained from the patient for publication of this case report and accompanying figures.

Conflict of interest

None declared.

Correspondence

S Khan: sadid.khan@gmail.com

References

[1] Raoult D, Sexton DJ. Culture negative endocarditis. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[2] Fournier PE, Thuny F, Richet H, Lepidi H, Casalta JP, Arzouni JP, Maurin M, Célard M, Mainardi JL, Caus T, Collart F, Habib G, Raoult D. Comprehensive diagnostic strategy for blood culture negative endocarditis: a prospective study of 819 new cases. CID. 2010; 51(2):131-40.
[3] Lamas CC, Eykyn SJ. Blood culture negative endocarditis: analysis of 63 cases presenting over 25 years. Heart. 2003;89:258-62.
[4] Wallace SM, Walton BI, Kharbanda RK, Hardy R, Wilson AP, Swanton RH. Mortality from infective endocarditis: clinical predictors of outcome.
[5] Sexton DJ. Epidemiology, risk factors & microbiology of infective endocarditis. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[6] Fournier PE, Marrie TJ, Raoult D. Diagnosis of Q Fever. J. Clin. Microbiol. 1998, 36(7):1823.
[7] Apstein MD, Schneider T. Whipple’s Disease. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[8] Baum SG. Mycoplasma pneumonia infection in adults. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[9] Pedro-Botet ML, Stout JE, Yu VL. Epidemiology and pathogenesis of Legionella infection. In: UpToDate, Basow, DS (Ed), UpToDate, Waltham, MA, 2012.
[10] Li JS, Sexton DJ, Mick N, Nettles R, Fowler VG Jr, Ryan T, Bashore T, Corey GR. Proposed modifications to the Duke criteria for the diagnosis of infective endocarditis. CID. 2000; 30:633-38.
[11] Vondracek M, Sartipy U, Aufwerber E, Julander I, Lindblom D, Westling K. 16S rDNA sequencing of valve tissue improves microbiological diagnosis in surgically treated patients with infective endocarditis. J Infect. 2011; 62(6):472-78
[12] Houpikian P & Raoult D. Diagnostic methods: Current best practices and guidelines for identification of difficult-to-culture pathogens in infective endocarditis. Infect Dis Clin N Am. 2002; 16:377–92
[13] Muñoz P, Bouza E, Marín M, Alcalá L, Créixems MR, Valerio M, Pinto A. Heart Valves Should Not Be Routinely Cultured. J Clin Microbiol. 2008; 46(9):2897.

Categories
Feature Articles Articles

The history of abdominal aortic repair: from Egypt to EVAR

Introduction

An arterial aneurysm is defined as a localised dilation of an artery to greater than 50% of its normal diameter. [1] Abdominal aortic aneurysm (AAA) is common with an incidence five times greater in men than women. [2] In Australia the prevalence of AAAs is 4.8% in men aged 65-69 years rising to 10.8% in those aged 80 years and over. [3] The mortality from ruptured AAA is very high, approximately 80%, [4] whilst the aneurysm-related mortality of surgically treated, asymptomatic AAA is around five percent. [5] In Australia AAAs make up 2.4% of the burden of cardiovascular disease, contributing 14,375 disability adjusted life years (DALYs), ahead of hypertension (14,324) and valvular heart disease (13,995). [6] Risk factors for AAA of greater than four centimetres include smoking (RR=3-5), family history (OR=1.94), coronary artery disease (OR= 1.52), hypercholesterolaemia (OR= 1.44) and cerebrovascular disease (OR= 1.28). [7] Currently, the approach to AAA management involves active surveillance, risk factor reduction and surgical intervention. [8]

The surgical management of AAAs dates back over 3000 years and has evolved greatly since its conception. Over the course of surgical history arose three landmark developments in aortic surgery: crude ligation, open repair and endovascular AAA repair (EVAR). This paper aims to examine the development of surgical interventions for AAA, from its experimental beginnings in ancient Egypt to current evidence based practice defining EVAR therapy, and to pay homage to the surgical and anatomical masters who made significant advances in this field.

Early definition

The word aneurysm is derived from the Greek aneurysma, meaning ‘widening’. The first written evidence of AAA is recorded in the ‘Book of Hearts’ from the Ebers Papyrus of ancient Egypt, dating back to 1550 BC. [9] It stated that “only magic can cure tumours of the arteries.” India’s Sushruta (c. 800–600 BC) mentions aneurysm, or ‘Granthi’, in chapter 17 of his great medical text ‘Sushruta Samhita’. [10] Although undistinguished from painful varicose veins in his text, Sushruta shared a similar sentiment to the Egyptians when he wrote “[Granthi] can be cured only with the greatest difficulty”. Galen (126-c216 AD), a surgeon of ancient Rome, first formally described these ‘tumours’ as localised pulsatile swellings that disappear with pressure. [11] He was also the first to draw anatomical diagrams of the heart and great vessels. His work with wounded gladiators, and that of the Greek surgeon Antyllus in the same period, helped to define traumatic false aneurysms as morphologically rounded, distinct from true, cylindrical aneurysms caused by degenerative dilatation. [12] This work formed the basis of the modern definition.

Early ligation

Antyllus is also credited with performing the first recorded surgical interventions for the treatment of AAA. His method involved midline laparotomy, proximal and distal ligation of the aorta, central incision of the aneurysm sac and evacuation of thrombotic material. [13] Remarkably, a few patients treated without aseptic technique or anaesthetic managed to survive for some period. Antyllus’ method was further described in the seventh century by Aetius, whose detailed paper ‘On the Dilation of Blood Vessels’ described the development and repair of AAA. [14] His approach involved stuffing the evacuated sac with incense and spices to promote pus formation in the belief that this would aid wound healing. Although this belief would wane as knowledge of the process of wound healing improved, Antyllus’ method would remain largely unchanged until the late nineteenth century.

Anatomy

The Renaissance saw the birth of modern anatomy, and with it a proper understanding of aortic morphology. In 1554 Vesalius (1514-1564) produced the first true anatomical plates based on cadaveric dissection, in ‘De Humani Corporis Fabrica.’ [15] A year later he provided the first accurate diagnosis and illustrations of AAA pathology. In total, Vesalius corrected over 200 of Galen’s anatomical mistakes and is regarded as the father of modern anatomy. [16] His discoveries began almost 300 years of medical progress characterised by the ‘surgeon-anatomist’, paving the way for the anatomical greats of the sixteenth, seventeenth and eighteenth centuries. It was during this period that the great developments in the anatomical and pathological understanding of aneurysms took place.

Pathogenesis

Ambroise Pare (1510-1590) noted that aneurysms seemed to manifest following syphilis; however, he attributed the arterial disease to syphilis treatment rather than the illness itself. [17] Stress on the arteries from hard work, shouting, trumpet playing and childbirth were considered other possible causes. Morgagni (1682-1771) described in detail the luetic pathology of ruptured saccular aortic aneurysms in syphilitic prostitutes, [18] whilst Monro (1697-1767) described the intima, media and adventitia of arterial walls. [19] These key advances in arterial pathology paved the way for the Hunter brothers of London (William Hunter [1718-1783] and John Hunter [1728-1793]) to develop the modern definitions of true, false and mixed aneurysms. Aneurysms were now accepted to be caused by ‘a disproportion between the force of the blood and the strength of the artery’, with syphilis as a risk factor rather than a sole aetiology. [12] As life expectancy rose dramatically in the twentieth century, it became clear that syphilis was not the only cause of arterial aneurysms, as the great vascular surgeon Rudolf Matas (1860-1957) stated: “The sins, vices, luxuries and worries of civilisation clog the arteries with the rust of premature senility, known as arteriosclerosis or atheroma, which is the chief factor in the production of aneurysm.” [20]

Modern ligation

The modern period of AAA surgery began in 1817 when Cooper first ligated the aortic bifurcation for a ruptured left external iliac aneurysm in a 38 year old man. The patient died four hours later; however, this did not discourage others from attempting similar procedures. [21]

Ten further unsuccessful cases were recorded prior to the turn of the twentieth century. It was not until a century later, in 1923, that Matas performed the first successful complete ligation of the aorta for aneurysm, the patient surviving seventeen months before dying from tuberculosis. [22] Described by Osler as the ‘modern father of vascular surgery’, Matas also developed the technique of endoaneurysmorrhaphy, which involved closing the aneurysmal sac upon itself to restore normal luminal flow. This was the first recorded technique aiming to spare blood flow to the lower limbs, an early prelude to the homograft, synthetic graft and EVAR.

Early Alternatives to Ligation

Despite Matas’ landmark success, the majority of surgeons of the era shared Sushruta’s millennia-old fear of aortic surgery. The American Surgical Association wrote in 1940, “the results obtained by surgical intervention have been discouraging.” Such fear prompted a resurgence of techniques introducing foreign material into the aneurysmal lumen in the hope of promoting thrombosis. First attempted by Velpeau [23] with sewing needles in 1831, this technique was modified by Moore [24] in 1864 using 26 yards of iron wire. Failure of aneurysm thrombosis was blamed on ‘under packing’ the aneurysm. Corradi used a similar technique, passing electric current through the wire to induce thrombosis. This technique became known as fili-galvanopuncture or the ‘Moore-Corradi method’. Although it lost popularity for aortic procedures, it marked the beginning of electrothrombosis and coiling of intracranial aneurysms in the latter half of the twentieth century. [25]

Another alternative was wrapping the aneurysm with material in an attempt to induce fibrosis and contain the aneurysm sac. AAA wrapping with cellophane was investigated by Pearse in 1940 [26] and Harrison in 1943. [27] Most notably, Nissen, the pioneer of Nissen fundoplication for hiatus hernia, famously wrapped Albert Einstein’s AAA with cellophane in 1948. [28] The aneurysm finally ruptured in 1955, with Einstein refusing surgery: “I want to go when I want. It is tasteless to prolong life artificially.” [28]

Anastomosis

Many would argue that the true father of modern vascular techniques is Alexis Carrel, whose experimental work laid the foundations for the first saphenous vein bypass in 1948, the first successful kidney transplant in 1955 and the first human limb re-implantation in 1962. [13,29] Friedman states that “there are few innovations in cardiac and vascular surgery today that do not have roots in his work.” [13] Perhaps of greatest note was Carrel’s development of the triangulation technique for vessel anastomosis.

This technique was utilised by Crafoord in Sweden in 1944, in the first correction of aortic coarctation, and by Shumacker [30] in 1947 to correct a four centimetre thoracic aortic aneurysm secondary to coarctation. Prior to this time, coarctation was treated in a similar fashion to AAA, with ligation proximal and distal to the defect. [31] These developments would prove to be great milestones in AAA surgery as the first successful aortic aneurysm resection with restoration of arterial continuity.

Biological grafts

Despite this success, restoration of arterial continuity was limited to the thoracic aorta. Abdominal aneurysms remained too large to be anastomosed directly and a different technique was needed. Carrel played a key role in the development of arterial grafting, used when end-to-end anastomosis was unfeasible. The original work was performed by Carrel and Guthrie (1880-1963) with experiments transplanting human and canine vessels. [32,33] Their 1907 paper ‘Heterotransplantation of blood vessels’ [34] began with:

“It has been shown that segments of blood vessels removed from animals may be caused to regain and indefinitely retain their function.”

This discovery led to the first replacement of a thrombosed aortic bifurcation by Jacques Oudot (1913-1953) with an arterial homograft in 1950. The patient recovered well, and Oudot went on to perform four similar procedures. The landmark first AAA resection with restoration of arterial continuity can be credited to Charles Dubost (1914-1991) in 1951. [35] His patient, a 51 year old man, received the aorta of a young girl harvested three weeks previously. This brief period of excitement quickly subsided when it was realised that the long-term patency of aortic homografts was poor. It did, however, lay the foundations for the age of synthetic aortic grafts.

Synthetic grafts

Arthur Voorhees (1921-1992) can be credited with the invention of synthetic arterial prosthetics. In 1948, during experimental mitral valve replacement in dogs, Voorhees noticed that a misplaced suture had later become enveloped in endocardium. He postulated that “a cloth tube, acting as a lattice work of threads, might indeed serve as an arterial prosthesis.” [36] Voorhees went on to test a wide variety of materials as possible candidates for synthetic tube grafts, settling on vinyon-N, the material used in parachutes. [37] His work with animal models would lead to a list of essential structural properties of arterial prostheses. [38]

Vinyon-N proved robust, and was introduced by Voorhees, Jaretski and Blakemore. In 1952 Voorhees inserted the first synthetic graft into a ruptured AAA. Although the vinyon-N graft was successfully implanted, the patient died shortly afterwards from a myocardial infarction. [39] By 1954, Voorhees had repaired 17 AAAs with similar grafts. Shumacker and Muhm would simultaneously conduct similar procedures with nylon grafts. [40] Vinyon-N and nylon were quickly supplanted by Orlon. Similar materials with improved tensile strength are used in open AAA repair today, including Teflon, Dacron and expanded polytetrafluoroethylene (PTFE). [41]

Modern open surgery

With the development of suitable graft material began the golden age of open AAA repair. The focus would now be largely on the Americans, particularly the surgeons DeBakey (1908-2008) and Cooley (b. 1920) leading the way in Houston, Texas. In the early 1950s, DeBakey and Cooley developed and refined an astounding number of aortic surgical techniques. DeBakey would also classify aortic dissections into types depending on their site. In 1952, a year after Dubost’s first success in France, the pair performed the first repair of a thoracic aneurysm, [42] and a year later, the first aortic arch aneurysm repair. [43] It was around this time that the risks of spinal cord ischaemia during aortic surgery became apparent. Moderate hypothermia was first used, and then enhanced in 1957 by Gerbode’s development of extracorporeal circulation, coined ‘left heart bypass’. In 1963, Gott expanded on this idea with a heparin-treated polyvinyl shunt from the ascending to the descending aorta. By 1970, centrifuge-powered left heart bypass with selective visceral perfusion had been developed. [44] In 1973, Crawford simplified DeBakey and Cooley’s technique by introducing sequential clamping of the aorta. By moving clamps distally, Crawford allowed reperfusion of segments following the anastomoses of what had become increasingly complex grafts. [45] The work of DeBakey, Cooley and Crawford paved the way for the remarkable outcomes available to modern patients undergoing open AAA repair. Once feared by surgeons and patients alike, elective open AAA repair now carries a 30-day all-cause mortality of around five percent. [58]

Imaging

It must not be overlooked that significant advances in medical imaging have played a major role in reducing the incidence of ruptured AAAs and the morbidity and mortality associated with AAAs in general. The development of diagnostic ultrasound began in the late 1940s and 1950s, with simultaneous research by John Wild in the United States, Inge Edler and Carl Hertz in Sweden and Ian Donald in Scotland. [46] It was Donald who published ‘Investigation of Abdominal Masses by Pulsed Ultrasound,’ regarded as one of the most important papers in diagnostic imaging. [47] By the 1960s, Doppler ultrasound provided clinicians with both a structural and functional view of vessels, with colour flow Doppler in the 1980s allowing images to represent the direction of blood flow. The Multicentre Aneurysm Screening Study showed that ultrasound screening resulted in a 42% reduction in mortality from ruptured AAAs over the four years to 2002. [48] Ultrasound screening has resulted in an overall increase in hospital admissions for asymptomatic aneurysms; however, increases in recent years cannot be attributed to improved diagnosis alone, as the true incidence of AAA is also increasing in concordance with Western vascular pathology trends. [49]

In addition to the investigative power of ultrasound imaging, computed tomography (CT) scanners became available in the early 1970s. As faster, higher-resolution spiral CT scanners became more accessible, the diagnosis and management of AAAs became significantly more refined. [50] CT angiography has emerged as the gold standard for defining aneurysm morphology and planning surgical intervention. It is crucial in determining when emergent treatment is necessary, when calcification and soft tissue may be unstable, when the aortic wall is thickened or adhered to surrounding structures, and when rupture is imminent. [51] Overall operative mortality from ruptured AAA fell by 3.5% per decade from 1954 to 1997, [52] a fall attributable to significant advances in surgical technique combined with drastically improved imaging modalities.

EVAR

The advent of successful open surgical repair of AAAs using synthetic grafts in the 1950s proved to be the first definitive treatment for AAA. However, the procedure remained highly invasive and many patients were excluded due to medical and anatomical contraindications. [53] Juan Parodi’s work with Julio Palmaz and Héctor Barone in the late 1980s aimed to rectify this issue. Parodi developed the first catheter-based arterial approach to AAA intervention. The first successful EVAR operation was completed by Parodi in Argentina on 7 September 1990. [54] The aneurysm was approached intravascularly via a femoral cutdown. Restoration of normal luminal blood flow was achieved with the deployment of a Dacron graft mounted on a Palmaz stent. [55] There was no need for aortic cross-clamping or major abdominal surgery. Similar minimally invasive strategies were explored independently and concurrently by Volodos, [56] Lazarus [57] and Balko. [58]

During this early period of development there was significant Australian involvement. The work of Michael Lawrence-Brown and David Hartley at the Royal Perth Hospital led to the manufacture of the Zenith endovascular graft in 1993, a key milestone in the development of modern-day endovascular aortic stent-grafts. [59] The first bifurcated graft was successfully implanted one year later. [60] Professor James May and his team at the Royal Prince Alfred Hospital in Sydney conducted further key research, investigating the causes of aortic stent failure and complications. [61] This group went on to pioneer the modular design of present-day aortic prostheses. [62]

The FDA approved the first two AAA stent grafts for widespread use in 1999. Since then, technical improvements in device design have resulted in improved surgical outcomes and an increased ability to treat patients with difficult aneurysmal morphology. Slimmer device profiles have allowed easier device insertion through tortuous iliac vessels. [63] Furthermore, fenestrated and branched grafts have made possible the stent-grafting of juxtarenal AAA, where suboptimal proximal neck anatomy once meant traditional stenting risked renal failure and mesenteric ischaemia. [64]

AAA intervention now and beyond

Today, surgical intervention is generally reserved for AAAs greater than 5.5 cm in diameter and may be achieved by either open or endoluminal access. The UK Small Aneurysm Trial determined that there is no survival benefit to elective open repair of aneurysms of less than 5.5 cm. [8] The EVAR-1 trial (2005) found EVAR to reduce aneurysm-related mortality by three percent at four years compared with open repair; however, EVAR remains significantly more expensive and requires more re-interventions. Furthermore, it offers no advantage with respect to all-cause mortality or health-related quality of life. [5] These findings raised significant debate over the role of EVAR in patients fit for open repair. This controversy was furthered by the findings of the EVAR-2 trial (2005), which found risk factor modification (fitness and lifestyle) to be a better alternative to EVAR in patients unfit for open repair. [65] Many would argue that these figures are obsolete, with Criado stating, “it would not be unreasonable to postulate that endovascular experts today can achieve far better results than those produced by the EVAR-1 trial.” [53] It is undisputed that EVAR has dramatically changed the landscape of surgical intervention for AAA. By 2005, EVAR accounted for 56% of all non-ruptured AAA repairs but only 27% of operative mortality. Since 1993, deaths related to AAA have decreased dramatically, by 42%. [53] EVAR’s shortcomings of high long-term rates of complications and re-interventions, as well as questions of device performance beyond ten years, appear balanced by the procedure’s improved operative mortality and minimally invasive approach. [54]

Conclusion

The journey towards truly effective surgical intervention for AAA has been a long and experimental one. Once regarded as one of the most deadly pathologies, with little chance of a favourable surgical outcome, AAAs can now be successfully treated with minimally invasive procedures. Sushruta’s millennia-old fear of abdominal aortic surgery appears well and truly overcome.

Conflict of interest

None declared.

Correspondence

A Wilton: awil2853@uni.sydney.edu.au

References

[1] Kumar V et al. Robbins and Cotran Pathologic Basis of Disease 8th ed. Elsevier. 2010.
[2] Semmens J, Norman PE, Lawrence-Brown MMD, Holman CDJ. Influence of gender on outcome from ruptured abdominal aortic aneurysm. British Journal of Surgery. 2000;87:191-4.
[3] Jamrozik K, Norman PE, Spencer CA. et al. Screening for abdominal aortic aneurysms: lessons from a population-based study. Med. J. Aust. 2000;173:345-50.
[4] Semmens J, Lawrence-Brown MMD, Norman PE, Codde JP, Holman CDJ. The Quality of Surgical Care Project: Benchmark standards of open resection for abdominal aortic aneurysm in Western Australia. Aust N Z J Surg. 1998;68:404-10.
[5] The EVAR trial participants. EVAR-1 (EndoVascular Aneurysm Repair): EVAR vs open repair in patients with abdominal aortic aneurysm. Lancet. 2005;365:2179-86.
[6] National Heart Foundation of Australia. The Shifting Burden of Cardiovascular Disease. 2005.
[7] Fleming C, Whitlock EP, Beil TL, Lederle FA. Screening for abdominal aortic aneurysm: a best-evidence systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2005;142(3):203-11.
[8] United Kingdom Small Aneurysm Trial Participants. UK Small Aneurysm Trial. N Engl J Med. 2002;346:1445-52.
[9] Ghalioungui P. Magic and Medical Science in Ancient Egypt. Hodder and Stoughton Ltd. 1963.
[10] Bhishagratna KKL. An English Translation of The Sushruta Samhita. Calcutta: Self Published; 1916.
[11] Lytton DG, Resuhr LM. Galen on Abnormal Swellings. J Hist Med Allied Sci. 1978;33(4):531-49.
[12] Suy R. The Varying Morphology and Aetiology of the Arterial Aneurysm. A Historical Review. Acta Chir Belg. 2006;106:354-60.
[13] Friedman SG. A History of Vascular Surgery. New York: Futura Publishing Company 1989;74-89.
[14] Stehbens WE. History of Aneurysms. Med Hist 1958;2(4):274–80.
[15] Van Hee R. Andreas Vesalius and Surgery. Verh K Acad Geneeskd Belg. 1993;55(6):515-32.
[16] Kulkarni NV. Clinical Anatomy: A problem solving approach. New Delhi: Jaypee Brothers Medical Publishers. 2012;4.
[17] Paré A. Les OEuvres d’Ambroise Paré. Paris: Gabriel Buon; 1585.
[18] Morgagni GB. Librum quo agitur de morbis thoracis. Italy: Lovanni; 1767 p270-1.
[19] Monro DP. Remarks on the coats of arteries, their diseases, and particularly on the formation of aneurysm. Medical essays and Observations. Edinburgh, 1733.
[20] Matas R. Surgery of the Vascular System. AMA Arch Surg. 1956;72(1):1-19.
[21] Brock RC. The life and work of Sir Astley Cooper. Ann R Coll Surg Engl.1969; 44:1.
[22] Matas R. Aneurysm of the abdominal aorta at its bifurcation into the common iliac arteries. A pictorial supplement illustrating the history of Corrinne D, previously reported as the first recorded instance of cure of an aneurysm of the abdominal aorta by ligation. Ann Surg. 1940;122:909.
[23] Velpeau AA. Memoire sur la figure de l’acupuncture des arteres dans le traitement des anevrismes. Gaz Med. 1831;2:1.
[24] Moore CH, Murchison C. On a method of procuring the consolidation of fibrin in certain incurable aneurysms. With the report of a case in which an aneurysm of the ascending aorta was treated by the insertion of wire. Med Chir Trans. 1864;47:129.
[25] Siddique K, Alvernia J, Frazer K, Lanzino G. Treatment of aneurysms with wires and electricity: a historical overview. J Neurosurg. 2003;99:1102–7.
[26] Pearse HE. Experimental studies on the gradual occlusion of large arteries. Ann Surg. 1940;112:923.
[27] Harrison PW, Chandy J. A subclavian aneurysm cured by cellophane fibrosis. Ann Surg. 1943;118:478.
[28] Cohen JR, Graver LM. The ruptured abdominal aortic aneurysm of Albert Einstein. Surg Gynecol Obstet. 1990;170:455-8.
[29] Edwards WS, Edwards PD. Alexis Carrel: Visionary surgeon. Springfield, IL: Charles C Thomas Publisher, Ltd 1974;64–83.
[30] Shumacker HB Jr. Coarctation and aneurysm of the aorta. Report of a case treated by excision and end-to-end suture of aorta. Ann Surg. 1948;127:655.
[31] Alexander J, Byron FX. Aortectomy for thoracic aneurysm. JAMA 1944;126:1139.
[32] Carrel A. Ultimate results of aortic transplantation, J Exp Med. 1912;15:389–92.
[33] Carrel A. Heterotransplantation of blood vessels preserved in cold storage, J Exp Med. 1907;9:226–8.
[34] Guthrie CC. Heterotransplantation of blood vessels, Am J Physiol 1907;19:482–7.
[35] Dubost C. First successful resection of an aneurysm of an abdominal aorta with restoration of the continuity by human arterial graft. World J Surg. 1982;6:256.
[36] Voorhees AB. The origin of the permeable arterial prosthesis: a personal reminiscence. Surg Rounds. 1988;2:79-84.
[37] Voorhees AB. The development of arterial prostheses: a personal view. Arch Surg. 1985;120:289-95.
[38] Voorhees AB. How it all began. In: Sawyer PN, Kaplitt MJ, eds. Vascular Grafts. New York: Appleton-Century-Crofts 1978;3-4.
[39] Blakemore AH, Voorhees AB Jr. The use of tubes constructed from vinyon “N” cloth in bridging arterial defects – experimental and clinical. Ann Surg. 1954;140:324.
[40] Schumacker HB, Muhm HY. Arterial suture techniques and grafts: past, present, and future. Surgery. 1969;66:419-33.
[41] Lidman H, Faibisoff B, Daniel RK. Expanded Polytetrafluoroethene as a microvascular stent graft: An experimental study. Journal of Microsurgery. 1980;1:447-56.
[42] Cooley DA, DeBakey ME. Surgical considerations of intrathoracic aneurysms of the aorta and great vessels. Ann Surg. 1952;135:660–80.
[43] DeBakey ME. Successful resection of aneurysm of distal aortic arch and replacement by graft. J Am Med Assoc. 1954;155:1398–403.
[44] Argenteri A. The recent history of aortic surgery from 1950 to the present. In: Chiesa R, Melissano G, Coselli JS et al. Aortic surgery and anaesthesia “How to do it” 3rd Ed. Milan: Editrice San Raffaele 2008;200-25.
[45] Green Sy, LeMaire SA, Coselli JS. History of aortic surgery in Houston. In: Chiesa R, Melissano G, Coselli JS et al. Aortic surgery and anaesthesia “How to do it” 3rd Ed. Milan: Editrice San Raffaele. 2008;39-73.
[46] Edler I, Hertz CH. The use of ultrasonic reflectoscope for the continuous recording of the movements of heart walls. Clin Physiol Funct Imaging. 2004;24:118–36.
[47] Donald I. The investigation of abdominal masses by pulsed ultrasound. Lancet. 1958;271(7032):1188-95.
[48] Thompson SG, Ashton HA, Gao L, Scott RAP. Screening men for abdominal aortic aneurysm: 10 year mortality and cost effectiveness results from the randomised Multicentre Aneurysm Screening Study. BMJ. 2009;338:2307.
[49] Filipovic M, Goldacre MJ, Robert SE, Yeates D, Duncan ME, Cook-Mozaffari P. Trends in mortality and hospital admission rates for abdominal aortic aneurysm in England and Wales. 1979-1999. BJS 2005;92(8):968-75.
[50] Kevles BH. Naked to the Bone: Medical Imaging in the Twentieth Century. New Brunswick, NJ: Rutgers University Press 1997;242-3.
[51] Ascher E, Veith FJ, Gloviczki P, Kent KC, Lawrence PF, Calligaro KD et al. Haimovici’s vascular surgery. 6th ed. Blackwell Publishing Ltd. 2012;86-92.
[52] Bown MJ, Sutton AJ, Bell PRF, Sayers RD. A meta-analysis of 50 years of ruptured abdominal aortic aneurysm repair. British Journal of Surgery. 2002;89(6):714-30.
[53] Criado FJ. The EVAR Landscape in 2011: A status report on AAA therapy. Endovascular Today. 2011;3:40-58.
[54] Criado FJ. EVAR at 20: The unfolding of a revolutionary new technique that changed everything. J Endovasc Ther. 2010;17:789-96.
[55] Hendriks JM, van Dijk LC, van Sambeek MRHM. Indications for endovascular abdominal aortic aneurysm treatment. Interventional Cardiology. 2006;1(1):63-4.
[56] Volodos NL, Shekhanin VE, Karpovich IP, et al. A self-fixing synthetic blood vessel endoprosthesis (in Russian). Vestn Khir Im I I Grek. 1986;137:123-5.
[57] Lazarus HM. Intraluminal graft device, system and method. US patent 4,787,899 1988.
[58] Balko A, Piasecki GJ, Shah DM, et al. Transluminal placement of intraluminal polyurethane prosthesis for abdominal aortic aneurysm. J Surg Res. 1986;40:305-9.
[59] Lawrence-Brown M, Hartley D, MacSweeney ST et al. The Perth endoluminal bifurcated graft system—development and early experience. Cardiovasc Surg. 1996;4:706–12.
[60] White GH, Yu W, May J, Stephen MS, Waugh RC. A new nonstented balloon-expandable graft for straight or bifurcated endoluminal bypass. J Endovasc Surg. 1994;1:16-24.
[61] May J, White GH, Yu W, Waugh RC, McGahan T, Stephen MS, Harris JP. Endoluminal grafting of abdominal aortic aneurysms: cause of failure and their prevention. J Endovasc Surg. 1994;1:44-52.
[62] May J, White GH, Yu W, Ly CN, Waugh R, Stephen MS, Arulchelvam M, Harris JP. Concurrent comparison of endoluminal versus open repair in the treatment of abdominal aortic aneurysms: analysis of 303 patients by life table method. J Vasc Surg. 1998;27(2):213-20.
[63] Abul-Khoudoud OR. Intervention for Peripheral Vascular Disease. Endovascular AAA Repair: Conduit Challenges. J Invasive Cardiol. 2000;12(4).
[64] West CA, Noel AA, Bower TC, et al. Factors affecting outcomes of open surgical repair of pararenal aortic aneurysms: A 10-year experience. J Vasc Surg. 2006;43:921–7.
[65] The EVAR trial participants. EVAR-2 (EndoVascular Aneurysm Repair): EVAR in patients unfit for open repair. Lancet. 2005;365:2187-92.


Eye protection in the operating theatre: Why prescription glasses don’t cut it

Introduction
Needle-stick injury represents a serious occupational hazard for medical professionals, and much time is spent on educating students and practitioners on its prevention. Acquiring a blood-borne viral infection such as Human Immunodeficiency Virus (HIV), Hepatitis B or Hepatitis C from a patient is a rare yet devastating event. While most often associated with ‘sharps’ injuries, viral transmission is possible across any mucous membrane – including the conjunctiva of the eye. Infection via the transconjunctival route is a particularly relevant occupational hazard for operating room personnel, who commonly encounter bodily fluids. Published cases of HIV seroconversion after ocular blood splash reinforce the importance of eye protection. [1]

Surgical operations carry an inherent risk of blood splash injury – masks with visors are provided in operating theatres for this reason. However, many surgeons and operating personnel rely solely upon prescription glasses for eye protection, despite spectacles being shown to offer an ineffective safeguard against blood splash injury. [2]

Incidence of blood splash injury
The incidence of blood splash is understandably more prevalent in some surgical specialties, such as orthopaedics, where power tools and other instruments increase the likelihood of blood spray. [3] Within these specialties, the risk is acknowledged and the use of more comprehensive eye protection is usually routine.

Laparoscopic and endoscopic procedures may be viewed as particularly low-risk, yet in one prospective study the rate of positive blood splash found on post-operative examination of eye protection approached 50%. [4] These results imply that even minimally invasive procedures need to be treated with a high level of vigilance.

The prevalence of blood splash during general surgical operations is highlighted by a study that followed one surgeon over a 12 month period and recorded all bodily fluids evident on protective eyewear following each procedure. [5] Overall, 45% of surgeries performed resulted in blood splash and an even higher incidence (79%) was found in vascular procedures. In addition, half of the laparoscopic cases were associated with blood recorded on the protective eyewear postoperatively.

A similar prospective trial undertaken in Australia found that protective eye shields tested positive for blood in 44% of operations, yet the surgeon was only aware of the incident in 18% of these cases. [6] Much blood spray during surgery is not visually perceptible, and this study demonstrates that the incidence of blood splash during a procedure may be considerably higher than is realised.

Despite the predominance of blood splash occurring within the operating theatre, the risks of these injuries are not limited to surgeons and theatre staff – even minor surgery carries a considerable risk of blood splash. A review of 500 simple skin lesion excisions in a procedural dermatology unit revealed positive blood splash on facemask or visor in 66% of cases, which highlights the need for protective eyewear in all surgical settings. [7]

Risk of blood splash injury
Although a rare occurrence, even a basic procedure such as venepuncture can result in ocular blood splash injury. Several cases of confirmed HCV transmission via the conjunctival route have been reported. [8-10]

Although the rates of blood-borne infectious disease are reasonably low within Australia, and the rates of seroconversion following a blood splash injury are likewise low at around 2%, [9] the consequences of contracting HIV, HBV or HCV from a seropositive patient are potentially serious and require strict adherence to post-exposure prophylaxis protocols. [11] Exposure to bodily fluids, particularly blood, is an unavoidable occupational risk for most health care professionals, but personal risk can be minimised by using appropriate universal precautions.

For those operating theatre personnel who wear prescription glasses, there exists a common belief that no additional eye protection is necessary. The 2007 Waikato Eye Protection Study [2] surveyed 71 practising surgeons, of whom 45.1% required prescription glasses while operating. Of the respondents, 84.5% had experienced prior periorbital blood splash during their operating careers, and 2.8% had gone on to contract an illness from such an event. While nearly 80% of the participants routinely used eye protection, amongst those who wore prescription glasses, 68% used them as sole eye protection.

A 2009 in vitro study examining the effectiveness of various forms of eye protection in orthopaedic surgery [12] employed a simulation model, with a mannequin head placed in a typical position in the operating field and a femoral osteotomy performed on a cadaveric thigh. The resulting blood splash on six different types of protective eyewear was measured; prescription glasses were found to offer no benefit over the control (no protection). While none of the eye protection methods tested offered complete protection, significantly lower rates of conjunctival contamination were recorded for recommended eyewear, including facemask and eyeshield, hard plastic glasses and disposable plastic glasses.

Prevention and management of blood splash injury
Given that blood splash is an occupational hazard, the onus is on the hospital and clinical administration to ensure that there are adequate supplies of protective eye equipment available. Disposable surgical masks with full-face visors have been shown to offer the highest level of protection from blood splash injury [12] and ought to be readily accessible for all staff involved in procedures or settings where contact with bodily fluids is possible. The use of masks and visors should be standard practice for all theatre staff, including assistants, scrub nurses and observers, regardless of the use of prescription spectacles.

Should an incident occur, a procedure similar to that used for needle-stick injury may be followed to minimise the risk of infection. The eye should first be rinsed thoroughly to remove as much of the fluid as possible and serology should be ordered promptly to obtain a baseline for future comparisons. An HIV screen and acute hepatitis panel (HAV IgM, HB core IgM, HBsAg, HCV and HB surface antibody for immunised individuals) are indicated. Post-exposure prophylaxis (PEP) should be initiated as soon as practicable unless the patient is known to be HIV, HBV and HCV negative. [13]

Conclusion
Universal precautions are recommended in all instances where there is the potential for exposure to patient bodily fluids, with an emphasis on appropriate eye protection. Prescription glasses are unsuitable as the sole source of eye protection from blood splash injury. Given that a blood splash injury can occur without the wearer's knowledge, regular blood tests for health care workers engaged in frequent procedural activity may allow for early detection of, and intervention for, workplace-acquired infection.

Conflict of interest
None declared.

Correspondence
S Campbell: shaun.p.campbell@gmail.com

References
[1] Eberle J, Habermann J, Gurtler LG. HIV-1 infection transmitted by serum droplets into the eye: a case report. AIDS. 2000;14(2):206–7.
[2] Chong SJ, Smith C, Bialostocki A, McEwan CN. Do modern spectacles endanger surgeons? The Waikato Eye Protection Study. Ann Surg. 2007;245(3):495-501
[3] Alani A, Modi C, Almedghio S, Mackie I. The risks of splash injury when using power tools during orthopaedic surgery: a prospective study. Acta Orthop Belg. 2008;74(5):678-82.
[4] Wines MP, Lamb A, Argyropoulos AN, Caviezel A, Gannicliffe C, Tolley D. Blood splash injury: an underestimated risk in endourology. J Endourol. 2008;22(6):1183-7.
[5] Davies CG, Khan MN, Ghauri AS, Ranaboldo CJ. Blood and body fluid splashes during surgery – the need for eye protection and masks. Ann R Coll Surg Engl. 2007;89(8):770-2.
[6] Marasco S, Woods S. The risk of eye splash injuries in surgery. Aust N Z J Surg. 1998;68(11):785-7.
[7] Holzmann RD, Liang M, Nadiminti H, McCarthy J, Gharia M, Jones J et al. Blood exposure risk during procedural dermatology. J Am Acad Dermatol. 2008;58(5):817-25.
[8] Sartori M, La Terra G, Aglietta M, Manzin A, Navino C, Verzetti G. Transmission of hepatitis C via blood splash into conjunctiva. Scand J Infect Dis 1993;25:270-1.
[9] Hosoglu S, Celen MK, Akalin S, Geyik MF, Soyoral Y, Kara IH. Transmission of hepatitis C by blood splash into conjunctiva in a nurse. American Journal of Infection Control 2003;31(8):502-504.
[10] Rosen HR. Acquisition of hepatitis C by a conjunctival splash. Am J Infect Control 1997;25:242-7.
[11] NSW Health Policy Directive, AIDS/Infectious Diseases Branch. HIV, Hepatitis B and Hepatitis C – Management of Health Care Workers Potentially Exposed. 2010;Circular No 2003/39. File No 98/1833.
[12] Mansour AA, Even JL, Phillips S, Halpern JL. Eye protection in orthopaedic surgery. An in vitro study of various forms of eye protection and their effectiveness. J Bone Joint Surg Am. 2009 May;91(5):1050-4.
[13] Klein SM, Foltin J, Gomella LG. Emergency Medicine on Call. New York: McGraw-Hill; 2003. p. 288.


The risks and rewards of direct-to-consumer genetic tests: A primer for Australian medical students

Introduction
Over the last five years, a number of overseas companies, such as 23andMe, have begun to offer direct-to-consumer (DTC) genetic tests to estimate the probability of an individual developing various diseases. Although no Australian DTC companies exist due to regulations mandating the involvement of a health practitioner, Australian consumers are free to access overseas mail-order services. In theory, DTC testing carries huge potential for preventing the onset of disease by lifestyle modification and targeted surveillance programs. However, the current system of mail-order genetic testing raises serious concerns related to test quality, psychological impacts on users, and integration with the health system. There are also issues with protecting genetic privacy, and ethical concerns about making medical decisions based on pre-emptive knowledge. This paper presents an overview of the ethical, legal and practical issues of DTC testing in an Australian context. The paper concludes by proposing five conditions that will be key for harnessing the potential of DTC testing technology. These include improved clinical utility, updated anti-discrimination legislation, accessible genetic counselling, Therapeutic Goods Administration (TGA) monitoring, and mechanisms for identity verification. Based on these conditions, the current system of mail-order testing is unviable as a scalable medical model. For the long term, the most sustainable solution is integration of pre-symptomatic genetic testing with the healthcare system.

The rise of direct-to-consumer testing
“Be on the lookout now.” This is the slogan of 23andMe.com, a Californian biotechnology company that has been offering personal genetic testing since late 2007. Clients mail in a sample of their saliva and, for the humble fee of US$299, 23andMe will isolate their DNA and scan across key regions to estimate that individual’s risk of developing different diseases. [1] Over 200 different diseases, in fact – everything from widespread, life-threatening conditions including breast cancer and coronary artery disease, to the comparatively obscure such as restless legs syndrome. Table 1 gives an example of the risk profile with which an individual may be faced.

Genetic testing has existed for decades as a diagnostic modality. Since the 1980s, clinicians have used genetic data to detect monogenic conditions such as cystic fibrosis and thalassaemia. [2] These studies were conducted in patients already showing symptoms of the disease in order to confirm a suspected diagnosis. 23andMe does something quite different: it takes asymptomatic people and calculates the risk of diseases emerging in the long term. It is a pre-emptive test rather than a diagnostic one.

23andMe is not the only service of its kind. There is a growing family of these direct-to-consumer (DTC) genetic tests: Navigenics (US), deCODEme (Iceland) and Genetic Health (UK) all offer a comprehensive disease screen for under $1000 AUD. There are currently no Australian companies that offer DTC disease scans due to regulations that require the involvement of a health professional. [3] However, Australian consumers are still free to access overseas services. Although no Australian retail figures exist, the global market for pre-symptomatic genetic testing is growing rapidly: 23andMe reported that 150,000 customers worldwide have used their test, [4] and in a recent European survey 64% of respondents said they would use a genetic test to detect possible future disease. [5] The Australian market for DTC testing, buoyed by increasing public awareness and decreasing product costs, is also set to grow.

Australian stakeholders have so far been divided on the issue of DTC testing. Certain parties have embraced it. In 2010 the Australian insurance company NIB offered 5000 of its customers a half-price genetic test through the US company Navigenics. [6] However, controversy arose over the fine-print at the end of NIB’s offer letter: “You may be required to disclose genetic test results, including any underlying health risks and conditions which the tests reveal, to life insurance or superannuation providers.” [6]

Most professional and regulatory bodies have expressed concern over the risks of DTC testing in an Australian context. In a 2012 paper, the Australian Medical Association argued that health-related genetic testing “should only be undertaken with a referral from a medical practitioner.” [7] It also highlighted issues surrounding the accreditation of overseas laboratories and the accuracy of the test results. Meanwhile, the Human Genetics Society of Australasia has stressed the importance of educating the public about the risks of DTC tests: “The best way to get rid of the market for DTC genetic testing may be to eliminate consumer demand through education … rather than driving the market underground or overseas.” [8]

Despite the deficiencies in the current model of mail-order services, personal genetic testing carries huge potential benefits from a healthcare perspective. The 2011 National Health and Medical Research Council (NHMRC) publication entitled The Clinical Utility of Personalised Medicine highlights some of the potential applications of genetic tests: targeting clinical screening programs based on disease risk, predicting drug susceptibility and adverse reactions and initiating preventative therapy before disease onset. [9] Genetic risk analysis has the potential to revolutionise preventative medicine in the 21st century.

The question is whether free-market DTC testing is a positive step towards an era of genetically-derived preventative therapy. Perhaps it creates more problems than it solves. What is the clinical utility of these tests? Is it responsible to give untrained individuals this kind of risk information? Could test results get into the wrong hands? These are the practical issues that will directly impact Australian medical professionals as genetic data infiltrates further into daily practice. This paper aims to grapple with some of these issues in an attempt to tease out how we as a healthcare community can best adapt to this new technology.

What is the clinical utility of these tests?
In 2010, a Cambridge University professor sent his own DNA off for analysis by two different DTC testing companies – 23andMe and deCODEme. He found that for approximately one third of the tested diseases, he was classed in a different risk category by the two companies. [10] A similar experiment carried out by a British journalist also revealed some major discrepancies. In one test, his risk of a myocardial infarction was 6% above average, while on another it was 18% below. [11]

This variability is a reflection of the current level of uncertainty about precisely how genes contribute to many diseases. Most diseases are polygenic, with an array of contributing environmental and lifestyle factors also playing a role in disease onset. [12] Hence, in all but a handful of diseases where robust genetic markers have been identified (such as the BRCA mutations for breast and ovarian cancers), these DTC test results are of questionable validity. An individual’s risk of Type 2 Diabetes Mellitus cannot simply be distilled down into a single numerical value.

Even for those diseases where isolated genetic markers have been identified in the literature, the findings are specific to the population studied. The majority of linkage analyses are performed in North American or European populations and may not be directly applicable to an Australasian context. Population bias aside, there is also a high level of ambiguity in how various genetic markers interact. As an example, consider two alleles that have each been shown to increase the risk of macular degeneration by 10%. It is not valid to say that the presence of both alleles signifies a 20% risk increase. This relates to the concept of epistasis in statistical genetics – the combined phenotypic effect of two alleles may differ from the sum of the individual effects. The algorithms currently used by DTC testing companies do not account for the complexity of gene-phenotype relationships.
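To make the epistasis point concrete, the following is a toy numerical sketch. The baseline risk, the joint effect size and the three combination models are illustrative assumptions only, not any company's published algorithm: two alleles that each raise risk by 10% yield different estimates under a naive additive model, an independent (multiplicative) model, or a jointly measured epistatic effect.

```python
# Toy illustration (assumed figures, not any DTC company's actual algorithm):
# combining two risk alleles that each raise disease risk by 10%.
baseline = 0.05          # hypothetical 5% population baseline risk

rr_a, rr_b = 1.10, 1.10  # relative risk of each allele measured alone

# Naive "10% + 10% = 20%" addition of excess risks.
additive = baseline * (1 + (rr_a - 1) + (rr_b - 1))

# Independent-effects model: multiply the relative risks.
multiplicative = baseline * rr_a * rr_b

# Epistatic scenario: the joint effect measured directly in carriers of
# both alleles (hypothetical value) need not match either model above.
epistatic = baseline * 1.45

print(f"additive:       {additive:.4f}")        # 0.0600
print(f"multiplicative: {multiplicative:.4f}")  # 0.0605
print(f"epistatic:      {epistatic:.4f}")       # 0.0725
```

The three models disagree even in this trivial two-allele case, which is the core difficulty: without directly measured joint effects, any algorithmic combination of marker risks embeds an unverified modelling assumption.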

For these reasons, the NHMRC states in its guide to the public about DTC testing: “At this time, studies have yet to prove that such susceptibility tests give accurate results to consumers.” [12] At best, current DTC testing is only valid as a rough guide to identify any risks that are particularly high or low. At worst, it is a blatantly misleading risk estimate based on insufficient molecular and clinical data. However, as our understanding of genetic markers improves, so too will the utility of these tests.

Can customers handle the results?
Assuming test quality improves, the next question is whether average individuals can deal with this type of risk information. What might the psychological consequences be if a healthy 25-year-old discovered that they had a 35% chance of developing ischaemic heart disease at some time during their life?

One risk is that people with an unfavourable prognosis may become discouraged from caring about their health at all, because they feel imprisoned within an immutable ‘genetic destiny.’ [13] As disease is written into their genes, they may as well surrender and accept it. Even someone with an average disease risk may feel an impending sense of doom when confronted with the vast array of diseases that may one day debilitate them. Could endless accounting of genetic risks overshadow the joy of living?

It is fair to say that DTC testing will only be useful if individuals have the right attitude – if they use this foreknowledge to take preventative measures. But do genetic test results really cause behaviour modification? A fascinating study in the New England Journal of Medicine in 2011 analysed the behavioural patterns of 2037 patients before and after a DTC genetic test. [14] They found no difference in exercise behaviour or dietary fat intake, suggesting that the genetic risk analysis did not translate into measurable lifestyle modification.

In order for individuals to interpret and use this genetic information effectively, they will need advice from healthcare professionals. Many of the DTC testing companies offer their own genetic counselling services; however, only 10% of clients reported accessing these. [15] The current position of the Australian Medical Association is that patients should consult a general practitioner when interpreting the results of a DTC genetic test. [7] However, a forced marriage between commercial sequencing companies and the healthcare system threatens to create problems of its own.

How should the health system adapt?
A 2011 study in North Carolina found that one in five family physicians had already been asked a question about pre-symptomatic genetic tests, yet 85% of the surveyed doctors reported that they were not sufficiently prepared to interpret test data. [16] In Australia, the healthcare system needs to adapt to this emerging trend. The question is – to what extent?

One controversial issue is whether it should be mandatory for doctors to be consulted when an individual orders a genetic test. Australia currently requires the involvement of a health practitioner to perform a disease-related genetic test. [3] Many countries, with the notable exception of the United States, share this stance. The German government ruled in early 2010 that pre-symptomatic testing could only be ordered by doctors trained in genetic counselling. [11] However, critics argue that mandatory doctor involvement would add medical legitimacy to a technology still in its infancy. [17] There is also an ethical argument that individuals should have the right to know about their own genes independent of the health system. [18]

Then there is the issue of how DTC genetic data should influence treatment. For example, should someone genetically predisposed to Type 2 Diabetes Mellitus be screened more regularly than others? Or, in a more extreme scenario: should those with more favourable genetic outlooks be prioritised for high-demand procedures such as transplant surgery?

These are serious ethical dilemmas; however, the medical community has had to deal with such issues before, whenever a new technology has arisen. With appropriate consultation from ethics committees (such as the NHMRC-affiliated Human Genetics Society of Australasia) and improved genetic literacy among healthcare professionals, it is possible to imagine a symbiotic partnership between the health system and free-market genetic testing.

How do we safeguard genetic privacy?
If DTC testing is indeed here to stay, a further concern is raised: how do we protect genetic privacy? Suppose a potential employer were to gain access to genetic data – the consequences could be disastrous for those with a poor prognosis. The outcome may be even worse if these data were made available to their insurance company.

In Australia, the disclosure of an individual’s genetic data by third parties (such as a genetic testing company) is tightly regulated under the Privacy Act 1988, which forbids its use for any purpose beyond that for which it was collected. [19] The only exception, based on the Privacy Legislation Amendment Act 2006, is for genetic data to be released to ‘genetic relatives’ in situations where disclosure could significantly benefit their health. [19]

In spite of the Privacy Act, individuals may still be forced to disclose their own test results to a third party such as an insurer or employer. There have been numerous reports of discrimination on the basis of genetic data in an Australian context. [20-22] The Australian Genetic Discrimination Project has been surveying the experiences of clients visiting clinical geneticists for ‘predictive or pre-symptomatic’ genetic testing since 1998. The pilot data, published in 2008, showed that 10% of the 951 subjects reported some negative treatment as a result of their genetic results. [23] Of the alleged incidents of discrimination, 42% were related to insurance and 5% to employment.

The use of genetic data by insurance companies is a complex issue. Although private health insurance in Australia is priced purely on basic demographic data, life and disability insurance is contingent on an individual’s prior medical record. This means that customers must disclose the results of any genetic testing (DTC or otherwise) they may have undergone. This presents a serious disincentive to purchasing a DTC test. The Australian Law Reform Commission, in its landmark report Essentially Yours: the Protection of Human Genetic Information in Australia, discusses the possibility of a two-tier system where insurance below a specific value would not require disclosure of any genetic information. [22] Sweden and the United Kingdom have both implemented such systems in the past; however, insurers have argued that the Australian insurance market is not sufficiently large to accommodate a two-tiered model. [22]

As genetic testing becomes increasingly widespread, a significant issue will be whether insurance companies should be allowed to request genetic data as a standard component of insurance applications. Currently, the Investment and Financial Services Association of Australia, which represents all major insurance companies, has stated that no individual will be forced to have a genetic test. [24] But how long will this moratorium last?

Suffice to say that privacy and anti-discrimination legislation needs to adapt to the times. There needs to be careful regulation of how these genomics companies use and protect sensitive data, and robust legislation against genetic discrimination. Organisations such as the Australian Law Reform Commission and the Human Genetics Society of Australasia will continue to play an integral role in this process.

However, there are some fundamental issues that even legislation cannot fix. For example, with the current system of mail-order genetic testing, there is no way of verifying the identity of the person ordering the test. This means that someone could easily send in DNA that is not their own. In addition, an individual’s genetic results reveal a great deal about their close family members. Consequently, someone who does not wish to know their genetic risks might be forcibly confronted with this information through a relative’s results. We somehow need to construct a system that preserves an individual’s right of autonomy over their own genetic data.

What does the future hold?
DTC genetic testing is clearly a technology still in its infancy, with many problems yet to be overcome. There are issues regarding test quality, psychological ramifications, integration with the health system and genetic privacy. On closer inspection, this risk-detection tool turns out to be a significant risk in itself. So does pre-symptomatic genetic testing have a future?

The current business platform, wherein clients mail their DNA to overseas companies, is unviable as a scalable medical model. This paper proposes that the following five conditions are necessary (although not sufficient) for pre-symptomatic genetic testing to persist into the future in an acceptable form:
(i) Improved clinical utility
(ii) Updated anti-discrimination legislation pertaining to genetic test data
(iii) Accessible genetic counselling facilities and community education about interpreting genetic results
(iv) Monitoring of DTC companies by regulatory bodies such as the Therapeutic Goods Administration (TGA)
(v) A mechanism for identity verification to prevent fraudulent DNA analysis

Let us analyse each of these propositions. Condition (i) will be gradually fulfilled as our understanding of genetic markers and bioinformatics develops. A wealth of new data is emerging from large-scale sequencing studies spanning diverse populations, with advanced modelling for gene-gene interactions. [25,26] Condition (ii) is also a likely future prospect – the report by the Australian Law Reform Commission is evidence of a responsive legislative landscape. [22] Condition (iii) is feasible, contingent on adequate funding for publicly accessible genetic counselling services and education programs. However, given that the clinical utility of DTC risk analysis is currently low, it would be difficult in the short term to justify any public expenditure on counselling services targeted at test users.

Conditions (iv) and (v) are more difficult to satisfy. Since DTC companies are all located overseas, they fall outside the jurisdiction of the Australian TGA. Given that consumers may make important healthcare choices based on DTC results, it is imperative that this industry be regulated. We have three options. First, we could rely on appropriate monitoring by foreign regulatory bodies. In the US, DTC genetic tests are classed as an ‘in vitro diagnostic device’ (IVD), meaning they fall subject to FDA regulation. However, in a testimony before the US government’s Subcommittee on Oversight and Investigations in July 2010, the FDA stated that it has “generally exercised enforcement discretion” in regulating IVDs. [27] It went on to admit that “none of the genetic tests now offered directly to consumers has undergone premarket review by the FDA to ensure that the test results being provided to patients are accurate, reliable, and clinically meaningful.” This is an area of active reform in the US; however, it seems unwise for Australia to blindly accept the standards of overseas regulators.

The second option is to sanction overseas DTC testing for Australian consumers. Many prescription medicines are subject to import controls if they are shipped into Australia. In theory, the same regulations could be applied to genetic test kits. However, it is not difficult to imagine ways around this ban, e.g. simply posting an oral swab and receiving the results online.

A third option is to open the market for Australian DTC testing companies, which could compete with overseas services while remaining under TGA surveillance. In other words, we could cultivate a domestic industry. However, it may not be possible for fledgling Australian companies to compete on price with the large-scale US operations. It would also be hard to justify the change in policy before conditions (i) to (iii) are fulfilled. That said, of the three options discussed, this appears to be the most viable in the long term.

Finally, condition (v) presents one of the fundamental flaws with DTC testing. If the health system were formally involved in the testing process, the medical practitioner would be responsible for identity verification. However, it is simply not possible to reliably check identity in a mail-order system. The only way DTC testing can verify identity is to have customers attend a DTC facility in person and provide proof of identity when their DNA is collected. However, such a regulation would make it even more difficult for any Australian company to compete against online services.

Conclusion
In summary, it is very difficult to construct a practical model that addresses conditions (iv) and (v) in an Australian context. Hence, for the short term, DTC testing will likely remain a controversial, unregulated market run through overseas websites. It is the duty of the TGA to inform the public about the risks of these products, and the duty of the health system to support those who do choose to purchase a test.

For the longer term, it seems that the only sustainable solution is to move towards an Australian-based testing infrastructure linked into the healthcare system (for referrals and post-test counselling). There are many hurdles to overcome; however, one might envisage a situation, twenty years from now, where a genetic risk analysis is a standard medical procedure offered to all adults and subsidised by the health system, and where individuals particularly susceptible to certain conditions can maximise their quality of life by making educated lifestyle changes and choosing medications that best suit their genetic profiles. [28]

As a medical community, therefore, we should be wary of the current range of DTC tests, but also open-minded about the possibilities for a future partnership. If we get it right, the potential payoff for preventative medicine is huge.

Conflict of interest
None declared.

Correspondence
M Seneviratne: msen5354@uni.sydney.edu.au

References
[1] 23andMe. Genetic testing for health, disease and ancestry. Available from: www.23andme.com.
[2] Antonarakis SE. Diagnosis of genetic disorders at the DNA level. N Engl J Med. 1989;320(3):153-63.
[3] Trent R, Otlowski M, Ralston M, Lonsdale L, Young M-A, Suther G, et al. Medical Genetic Testing: Information for health professionals. Canberra: National Health and Medical Research Council, 2010.
[4] Perrone M. 23andMe’s DNA test seeks FDA approval. USA Today Business. 2012.
[5] Ramani D, Saviane C. Genetic tests: Between risks and opportunities. EMBO Reports. 2010;11:910-13.
[6] Miller N. Fine print hides risk of genetic test offer. The Age. 2010.
[7] Position statement on genetic testing – 2012. Australian Medical Association, 2012.
[8] Human Genetic Society of Australia. Issue Paper: Direct to consumer genetic testing. 2007.
[9] Clinical Utility of Personalised Medicine. NHMRC. 2011.
[10] Knight C, Rodder S, Sturdy S. Response to Nuffield Bioethics Consultation Paper. ESRC Genomics Policy and Research Forum. 2009.
[11] Hood C, Khaw KT, Liddel K, Mendus S. Medical profiling and online medicine: The ethics of personalised healthcare in a consumer age. Nuffield Council on Bioethics. 2010.
[12] Direct to Consumer Genetic Testing: An information resource for consumers. NHMRC. 2012.
[13] Green SK. Getting personal with DNA: From genome to me-ome. Virtual Mentor. 2009;11(9):714-20.
[14] Bloss CSP, Schork NJP, Topol EJM. Effect of Direct-to-consumer genomewide profiling to assess disease risk. N Engl J Med. 2011;364(6):524-34.
[15] Caulfield T, McGuire AL. Direct-to-consumer genetic testing: Perceptions, problems, and policy responses. Annu Rev Med. 2012;63(1):23-33.
[16] Powell K, Cogswell W, Christianson C, Dave G, Verma A, Eubanks S, et al. Primary Care Physicians’ Awareness, Experience and Opinions of Direct-to-Consumer Genetic Testing. J Genet Couns. 1-14.
[17] Frueh FW, Greely HT, Green RC. The future of direct-to-consumer clinical genetic tests. Nat Rev Genet. 2011;12:511-15.
[18] Sandroff R. Direct-to-consumer genetic tests and the right to know. Hastings Center Report. 2010;40(5):24-5.
[19] Use and disclosure of genetic information to a patient’s genetic relatives under section 95AA of the Privacy Act 1988 (Cth). NHMRC / Office of the Privacy Commissioner, 2009.
[20] Barlow-Stewart K, Keays D. Genetic Discrimination in Australia. Journal of Law and Medicine. 2001;8:250-63.
[21] Otlowski M. Investigating genetic discrimination in the Australian life insurance sector: The use of genetic test results in underwriting, 1999-2003. Journal of Law and Medicine. 2007;14:367.
[22] Essentially Yours: The protection of human genetic information in Australia (ALRC Report 96). Australian Law Reform Commission, 2003.
[23] Taylor S, Treloar S, Barlow-Stewart K, Stranger M, Otlowski M. Investigating genetic discrimination in Australia: A large-scale survey of clinical genetics clients. Clinical Genetics. 2008;74(1):20-30.
[24] Barlow-Stewart K. Life Insurance products and genetic testing in Australia. Centre for Genetics Education, 2007.
[25] Davey JW, Hohenlohe PA, Etter PD, Boone JQ, Catchen JM, Blaxter ML. Genome-wide genetic marker discovery and genotyping using next-generation sequencing. Nat Rev Genet. 2011;12(7):499-510.
[26] Saunders CL, Chiodini BD, Sham P, Lewis CM, Abkevich V, Adeyemo AA, et al. Meta-Analysis of Genome-wide Linkage Studies in BMI and Obesity. Obesity. 2007;15(9):2263-75.
[27] Food and Drug Administration Center for Devices and Radiological Health. Direct-to-Consumer Genetic Testing and the Consequences to the Public. Subcommittee on Oversight and Investigations, Committee on Energy and Commerce, US House of Representatives; 2010.
[28] Mrazek DA, Lerman C. Facilitating Clinical Implementation of Pharmacogenomics. JAMA: The Journal of the American Medical Association. 2011;306(3):304-5.

Categories
Feature Articles Articles

Student-led malaria projects – can they be effective?

Introduction
In this article we give an account of establishing a sustainable project in Uganda. We describe our experiences, both positive and negative, and discuss how such endeavours are beneficial to both students and universities. The substantial work contributed by a growing group of students at our university and around Australia demonstrates an increasing push towards a greater national contribution to global health. Undoubtedly, student bodies have the potential to become major players in global health initiatives, but first we must see increased financial and academic investment by universities in this particular area of medicine.

Background
There are an estimated three billion people at risk of infection from malaria, and approximately one million deaths occur annually. The greatest burden of malaria exists in Sub-Saharan Africa. [1,2] Amongst the Ugandan population of 26.9 million, malaria is the leading cause of morbidity and mortality, with 8 to 13 million episodes reported. [3] The World Malaria Report estimated that there were 43 490 malaria-related deaths in Uganda in 2008, ranking it third in the world behind Nigeria and the Democratic Republic of Congo. [4] In 2011, the situation remained alarming, with 90% of the population living in areas of high malaria transmission. [5]

The focus of this report is the Biharwe region of south-west Uganda. Due to a lack of reliable epidemiological data regarding the south-west of Uganda, it is difficult to evaluate the effectiveness of current malaria intervention strategies. However, Uganda is a country with relatively stable political and economic factors, [6] making it a strong candidate for the creation of sustainable intervention programs.

Insecticide Treated Nets (ITN)
Insecticide treated nets are a core method of malaria prevention and reduce disease-related mortality. [5] The World Health Organisation (WHO) Global Malaria Programme report states that an insecticide-treated net is a mosquito net that repels, disables and/or kills mosquitoes that come into contact with the insecticide. There are two categories of ITNs: conventionally treated nets, and long-lasting insecticidal nets (LLINs). The WHO recommends the distribution of LLINs rather than conventionally treated nets as LLINs are designed to maintain their biological efficacy against vector mosquitoes for at least three years in the field under recommended conditions of use, removing the need for regular insecticide treatment. [7]

Long-lasting insecticidal nets have been reported to reduce all-cause child mortality by an average of 18% in Sub-Saharan Africa (range 14-29%). This implies that 5.5 lives could be saved per 1000 children under five years of age per year. [8] Use of LLINs in Africa increased mean birth weight by 55 g, reduced low birth weight by 23%, and reduced miscarriages/stillbirths by 33% in the first few pregnancies when compared with a control arm in which there were no mosquito nets. [9]

Use of LLINs is one of the most cost-effective interventions against malaria. In high-transmission areas where most of the malaria burden occurs in children under the age of five years, the use of LLINs is four to five times cheaper than the alternate strategy of indoor residual spraying. [10] Systematic delivery of LLINs through distribution projects can be a cost-effective way to make a significant impact on a local community. This makes the distribution of LLINs an ideal project for student-led groups with limited budgets.

Our experience implementing a sustainable intervention project in Uganda
This article comments on student-led research performed in Biharwe, which aimed to evaluate the Biharwe community’s current knowledge of malaria prevention techniques, to assess how people used their ITNs and to investigate from where they sourced their ITNs. We also aimed to alleviate the high malaria burden in Biharwe through the distribution of ITNs. We fundraised in Tasmania, with financial support garnered from local Rotarian groups and student societies. Approximately five thousand dollars was raised, which we used to purchase ITNs. Simultaneously, we began contacting a local non-governmental organisation (NGO) and a student body from Mbarara University, the largest university in south-west Uganda. We felt we had laid the foundation for a successful overseas trip.

Our endeavours suffered initial setbacks when we observed a local organisation we were working with misusing funds from other projects. Feeling that we needed to cut ties to avoid a similar fate, we decided to seek out other local groups. We made contact with the Mbarara University students, and they pointed us towards the Biharwe sub-county as a region particularly neglected by previous government and NGO ITN distribution programs. At their recommendation we travelled to villages in the area. Access to these villages was obtained by respectfully approaching the village representatives and their councils, and asking their permission to engage with the local community.

Despite all our preparations before heading to Uganda, we were still not fully prepared for the stark realities of everyday life in East Africa. One problem we encountered was the misuse and misunderstanding of the ITN distribution program by locals. We also encountered local ‘gangs’ who would collect free ITNs from our distribution programs and then sell them at the market place for a profit; people who used their ITNs as materials to build their chicken coops; and widespread myths about the effects of ITNs. To combat this, we sought the advice of a local priest, who requested that the village heads put together a list of households as a means of minimising the fraudulent distribution of our nets. While not ideal, this approach did give us greater confidence when distributing the ITNs. As Uganda is a religious nation, the support of a well-respected local priest made local leaders more receptive to our program.

It became apparent that we had to strengthen our understanding of local attitudes towards and usage of ITNs if we were to create a long-term, meaningful relationship with people in the area. At the suggestion of Mbarara University students, we commissioned DEKA Consult Limited, a local research group, to conduct qualitative epidemiological research in villages in these communities. The data collected were useful in identifying the scope of the problem. They identified that community members already had a significant amount of knowledge on the use of ITNs and that those who owned mosquito nets had purchased them from local suppliers. Local ethics approval and permission for access to local community members were gained by DEKA Consult Limited.

Evaluating local knowledge on malaria prevention
The commissioned study addressed community attitudes towards malaria prevention by surveying two distinct groups living in the Biharwe sub-county of south-west Uganda. Through questionnaires and focus group discussions, local researchers gathered information concerning attitudes towards and usage of mosquito nets in the area. One of the key findings was that ITNs were nominated as the main preventative technique by the respondents (33.3%). This is congruent with previous data indicating an increase in awareness of ITNs in Uganda following the Roll Back Malaria Abuja Summit. [11] A majority of respondents indicated some knowledge of the appropriate use of these mosquito nets (83.3%), meaning, however, that one in six of the Biharwe community members were unsure of how to correctly use ITNs. The research also explored common reasons why people neglected to sleep under ITNs in the Biharwe sub-county. Common misperceptions, such as ITNs causing impotence and leading to burns, were identified as barriers to people using their mosquito nets, and were issues that would need to be addressed in future education seminars. The findings indicate that assessment of a community’s existing knowledge and perceptions is crucial in identifying obstacles that must be overcome during the implementation of an effective intervention project. Activities promoting education can then be moulded around the particular culture and social dynamic of a community, which will lead to maximal project impact. [12,13] We believe these data indicate that the distribution of ITNs would be improved if it were accompanied by robust educational initiatives tailored to local community needs.

Our way forward
In the summer of 2011-2012 another group of students from UTAS implemented an LLIN distribution project in the south-west of Uganda. They furthered the work outlined in this report. Our experiences and connections provided an excellent foundation for them to implement expanded projects. A further group of UTAS students has been assembled and is planning to travel to Uganda this coming summer, once again with the aim of building on the previous two visits. With the generous assistance of the Menzies Institute and UTAS School of Medicine, plans for a more robust epidemiology project have been formulated in order to measure the efficacy of future projects in Uganda. We believe the sustainability and effectiveness of these programs relies on both the development of a long-term relationship between our student organisation and the local community, as well as appropriate evaluation of all our projects.

Free distribution or subsidised LLINs
The majority of the malaria burden exists in the poorest, most rural communities, yet it is these regions that are often neglected in widespread ITN distribution programs. [14]

Our data indicate that only a minority of the households in the rural Biharwe sub-county own ITNs (11.1%), and that all of these ITNs have been purchased through the commercial sector. Again, methodological disparities need to be addressed in order to confirm the validity of these results. However, it does raise the important question of whether the commercial sector, rather than the public/non-governmental organisation (NGO) sector, would be better placed to serve these local communities.

Our dilemma serves as a microcosm of a much larger debate that has been occurring over the last decade regarding the most effective means of delivering ITNs in order to achieve the greatest national coverage. [15] Free distribution of ITNs is far more equitable and effective at reaching the poor. [16] However, utilisation of the commercial sector through subsidies, vouchers or a stratification model [17] is more sustainable, because a portion of the costs may be recovered. Populations that have been excluded from ITN schemes such as Roll Back Malaria, [5] including those in the rural Biharwe sub-region, may stand to benefit from free targeted distribution of nets. Collaborations between local and international students are well placed to combine local knowledge and financial support to best implement such initiatives.

The role of students in malaria prevention and international development projects
Organisations such as the World Health Organisation, when involved in widespread ITN distribution [5], have far greater capabilities than any student-led project. However, due to shortfalls in funding and co-ordination, these schemes will not be able to reach all at-risk populations, particularly the poorest rural areas. [5] Small-scale and independently funded student-led projects can fill a void in this neglected population. In order to achieve the maximal impact with a malaria intervention project, students should identify areas with a low rate of household ITN ownership, as well as areas with a low percentage of the owned ITNs being donated. It is these areas that ultimately stand to make the greatest progress in terms of ITN coverage amongst vulnerable individuals, resulting in a decrease in morbidity and mortality from malaria. [18] With locally-specific research, strong relationships with the community and the community leaders, and appropriate evaluation processes in place, students can make the maximal impact on reducing morbidity and mortality from malaria with limited funds. [19]

The aim should always be for a long-term partnership between the community [19] and student-led organisations that are willing to promote sustainability. This has the greatest opportunity to provide long-term benefits for both parties. Our experience is that medical students provide a continuous stream of like-minded youth who have been able to rise to the challenge and continue the work of previous students. Through bilateral exchanges between students and overseas partners, trust and friendship can be fostered, which further encourages participation in the project upon their return. Important information regarding the social hierarchy is also gained, which greatly helps with gaining access to the local decision makers. In turn, this creates greater understanding of the health problems, culture and reasons why particular communities have been left behind. Student-led organisations are perfectly placed to deliver these educational programs, as they constitute a long-term pool of motivated, altruistic skilled workers who are able to learn from their predecessors. Individual students also stand to benefit through increased cultural understanding, application of learned skill sets and an opportunity which can enhance their career paths. [19] Through appropriate long-term trial, error and proper evaluation, systems of program implementation can be formulated which may then be applied to similar communities elsewhere.

The Role of Universities
Preparing students for leadership roles in global health and its related fields is critical. University curricula should reflect today's problems and those likely to arise in the coming decades. [20] It is our opinion that students are increasingly aware of, and more willing to contribute solutions to, current international issues, however small those solutions may be, thanks mainly to a surge in exposure to social media. When universities neither explore such issues deeply in their curricula nor support active student involvement, students may come to perceive that universities are disconnected from the realities of the world. [21] Participation in international health projects has been reported to encourage students to examine cross-cultural issues, to improve their problem-solving skills, and to help improve the delivery of healthcare for underprivileged people. [22] These transferable skills are vital in the Australian healthcare system.

North American and European universities continue to lead the way; however, Australian universities are becoming more involved with global health issues. The Australian Medical Students' Association's Global Health Committee aims to link and empower groups of students from each Australian medical school. [23] The Melbourne University Health Initiative, which oversees the Victorian Student's Aid Program, aims to help students make a difference in health issues at a local and international level by running on-campus events to raise awareness of a range of health issues, and by organising public health lectures for the wider community. [24] The Training for Health Equity Network (THEnet) is a consortium of ten schools from around the world, including James Cook and Flinders Universities, that have committed to ensuring their teaching, research and service activities address priority health needs, with a focus on underserved communities. [25] A particular emphasis of THEnet is social accountability, with a framework to assess whether member schools are contributing to the improvement of health conditions within their local communities. [26]

In our view, there is no doubt that such initiatives need greater penetration into each university's curriculum. Should this occur, Australia may be able to produce a generation of graduates well placed to address the numerous complex global health issues we face today, and those we will inevitably face in the future.

Conflict of interest
None declared.

Correspondence
B Wood: benjaminmwood88@gmail.com

References
[1] Greenwood BM, Bojang K, Whitty CJM, Targett GAT. Malaria. Lancet. 2005 Apr 23-29; 365(9469): 1487-98.
[2] Snow RW, Guerra CA, Noor AM, Myint HY, Hay SI. The global distribution of clinical episodes of Plasmodium falciparum malaria. Nature. 2005 Mar 10; 434(7030): 214-7.
[3] Uganda. Uganda Ministry of Health. Uganda Malaria Control Strategic Plan 2005/06 – 2009/10: Roll Back Malaria; 2003.
[4] Aregawi M, Cibulskis R, Williams R. World Malaria Report 2008. Switzerland: World Health Organisation; 2008.
[5] Aregawi M, Cibulskis R, Lynch M, Williams R. World Malaria Report 2011. Switzerland: World Health Organisation; 2011.
[6] Yeka A, Gasasira A, Mpimbaza A, Achan J, Nankabirwa J, Nsobya S, et al. Malaria in Uganda: Challenges to control on the long road to elimination: I. Epidemiology and current control efforts. Acta Tropica. 2012 Mar; 121(3): 184-95.
[7] Fifty-eighth World Health Assembly: Resolution WHA58.2 Malaria Control [Internet]. Geneva: World Health Organisation; May 2005 [cited 2012 Apr 12]. Available from: http://apps.who.int/gb/ebwha/pdf_files/WHA58-REC1/english/A58_2005_REC1-en.pdf
[8] Lengeler C. Insecticide-treated bed nets and curtains for preventing malaria. Cochrane Database of Systematic Reviews (Online). 2004; 2: CD000363.
[9] Gamble C, Ekwaru JP, Ter Kuile FO. Insecticide-treated nets for preventing malaria in pregnancy. Cochrane Database of Systematic Reviews (Online). 2006 Apr 19; 2: CD003755.
[10] Yukich J, Tediosi F, Lengeler C. Comparative cost-effectiveness of ITNs or IRS in Sub-Saharan Africa. Malaria Matters (Issue 18). 2007 Jul 12: p. 2-4.
[11] Baume CA, Marin MC. Gains in awareness, ownership and use of insecticide-treated nets in Nigeria, Senegal, Uganda and Zambia. Malaria J. 2008 Aug 7; 7: 153.
[12] Williams PCM, Martina A, Cumming RG, Hall J. Malaria prevention in Sub-Saharan Africa: A field study in rural Uganda. J Community Health. 2009 April; 34:288-94.
[13] Marsh VM, Mutemi W, Some ES, Haaland A, Snow RW. Evaluating the community education programme of an insecticide-treated bed net trial on the Kenyan coast. Health Policy Plan. 1996 Sep; 11(3): 280-91.
[14] Webster J, Lines J, Bruce J, Armstrong Schellenberg JR, Hanson K. Which delivery systems reach the poor? A review of equity of coverage of ever-treated nets, never-treated nets, and immunisation to reduce child mortality in Africa. Lancet Infect Dis. 2005 Nov; 5(11): 709-11.
[15] Sexton A. Best practices for an insecticide-treated bed net distribution programme in sub-Saharan eastern Africa. Malaria J. 2011. Jun 8; 10:157.
[16] Noor AM, Mutheu JJ, Tatem AJ, Hay SI, Snow RW. Insecticide-treated net coverage in Africa: mapping progress in 2000-07. Lancet. 2009 Nov 18; 373 (9657): 58-67.
[17] Noor AM, Amin AA, Akhwale WS, Snow RW. Increasing coverage and decreasing inequity in insecticide-treated bed net use among rural Kenyan children. PLoS Medicine. 2007 Aug 21; 4(8): e255.
[18] Cohen J, Dupas P. Free distribution or cost-sharing? Evidence from a randomized malaria prevention experiment. The Quarterly Journal of Economics. 2010; 125 (1): 1-45.
[19] Glew RH. Promoting collaborations between biomedical scholars in the U.S. and Sub-Saharan Africa. Experimental Biology and Medicine. 2008 Mar; 233(3): 277-85.
[20] Bryant JH, Velji. Global health and the role of universities in the twenty-first century. Infect Dis Clin North Am. 2011 Jun; 25(2): 311-21.
[21] Crabtree RD. Mutual empowerment in cross-cultural participatory development and service learning: Lessons in communication and social justice from projects. J Appl Commun Res. 1998; 26 (2): 182-209.
[22] Harth SC, Leonard NA, Fitzgerald SM, Thong YH. The educational value of clinical electives. Medical Education. 1990 Jul; 24(4): 344-53.
[23] Murphy A. AMSA Global Health Committee [Internet]. 2012 [cited 2012 April 10]. Available from: http://ghn.amsa.org.au/
[24] Melbourne University Health Initiative [Internet]. 2012 [cited 2012 April 3]. Available from: http://muhi-gh.org/about-muhi
[25] THEnet. Training for Health Equity Network [Internet]. 2012 [cited 2012 March 30]. Available from: http://www.thenetcommunity.org/
[26] The Training for Health Equity Network. THEnet’s Social Accountability Evaluation Framework Version 1. Monograph I (1 ed.). The Training for Health Equity Network, 2011.