
The risks and rewards of direct-to-consumer genetic tests: A primer for Australian medical students

Introduction
Over the last five years, a number of overseas companies, such as 23andMe, have begun to offer direct-to-consumer (DTC) genetic tests to estimate the probability of an individual developing various diseases. Although no Australian DTC companies exist due to regulations mandating the involvement of a health practitioner, Australian consumers are free to access overseas mail-order services. In theory, DTC testing carries huge potential for preventing the onset of disease by lifestyle modification and targeted surveillance programs. However, the current system of mail-order genetic testing raises serious concerns related to test quality, psychological impacts on users, and integration with the health system. There are also issues with protecting genetic privacy, and ethical concerns about making medical decisions based on pre-emptive knowledge. This paper presents an overview of the ethical, legal and practical issues of DTC testing in an Australian context. The paper concludes by proposing five conditions that will be key for harnessing the potential of DTC testing technology. These include improved clinical utility, updated anti-discrimination legislation, accessible genetic counselling, Therapeutic Goods Administration (TGA) monitoring, and mechanisms for identity verification. Based on these conditions, the current system of mail-order testing is unviable as a scalable medical model. For the long term, the most sustainable solution is integration of pre-symptomatic genetic testing with the healthcare system.

The rise of direct-to-consumer testing
“Be on the lookout now.” This is the slogan of 23andMe.com, a Californian biotechnology company that has been offering personal genetic testing since late 2007. Clients mail in a sample of their saliva and, for the humble fee of US$299, 23andMe will isolate their DNA and scan across key regions to estimate that individual’s risk of developing different diseases. [1] Over 200 different diseases, in fact – everything from widespread, life-threatening conditions including breast cancer and coronary artery disease, to the comparatively obscure such as restless legs syndrome. Table 1 gives an example of the risk profile with which an individual may be faced.

Genetic testing has existed for decades as a diagnostic modality. Since the 1980s, clinicians have used genetic data to detect monogenic conditions such as cystic fibrosis and thalassaemia. [2] These studies were conducted in patients already showing symptoms of the disease in order to confirm a suspected diagnosis. 23andMe does something quite different: it takes asymptomatic people and calculates the risk of diseases emerging in the long term. It is a pre-emptive test rather than a diagnostic one.

23andMe is not the only service of its kind. There is a growing family of these direct-to-consumer (DTC) genetic tests: Navigenics (US), deCODEme (Iceland) and Genetic Health (UK) all offer a comprehensive disease screen for under $1000 AUD. There are currently no Australian companies that offer DTC disease scans due to regulations that require the involvement of a health professional. [3] However, Australian consumers are still free to access overseas services. Although no Australian retail figures exist, the global market for pre-symptomatic genetic testing is growing rapidly: 23andMe reported that 150,000 customers worldwide have used their test, [4] and in a recent European survey 64% of respondents said they would use a genetic test to detect possible future disease. [5] The Australian market for DTC testing, buoyed by increasing public awareness and decreasing product costs, is also set to grow.

Australian stakeholders have so far been divided on the issue of DTC testing. Certain parties have embraced it. In 2010 the Australian insurance company NIB offered 5000 of its customers a half-price genetic test through the US company Navigenics. [6] However, controversy arose over the fine-print at the end of NIB’s offer letter: “You may be required to disclose genetic test results, including any underlying health risks and conditions which the tests reveal, to life insurance or superannuation providers.” [6]

Most professional and regulatory bodies have expressed concern over the risks of DTC testing in an Australian context. In a 2012 paper, the Australian Medical Association argued that health-related genetic testing “should only be undertaken with a referral from a medical practitioner.” [7] It also highlighted issues surrounding the accreditation of overseas laboratories and the accuracy of the test results. Meanwhile, the Human Genetics Society of Australasia has stressed the importance of educating the public about the risks of DTC tests: “The best way to get rid of the market for DTC genetic testing may be to eliminate consumer demand through education … rather than driving the market underground or overseas.” [8]

Despite the deficiencies in the current model of mail-order services, personal genetic testing carries huge potential benefits from a healthcare perspective. The 2011 National Health and Medical Research Council (NHMRC) publication entitled The Clinical Utility of Personalised Medicine highlights some of the potential applications of genetic tests: targeting clinical screening programs based on disease risk, predicting drug susceptibility and adverse reactions, and initiating preventative therapy before disease onset. [9] Genetic risk analysis has the potential to revolutionise preventative medicine in the 21st century.

The question is whether free-market DTC testing is a positive step towards an era of genetically-derived preventative therapy. Perhaps it creates more problems than it solves. What is the clinical utility of these tests? Is it responsible to give untrained individuals this kind of risk information? Could test results get into the wrong hands? These are the practical issues that will directly impact Australian medical professionals as genetic data infiltrates further into daily practice. This paper aims to grapple with some of these issues in an attempt to tease out how we as a healthcare community can best adapt to this new technology.

What is the clinical utility of these tests?
In 2010, a Cambridge University professor sent his own DNA off for analysis by two different DTC testing companies – 23andMe and deCODEme. He found that for approximately one third of the tested diseases, he was classed in a different risk category by the two companies. [10] A similar experiment carried out by a British journalist also revealed some major discrepancies. In one test, his risk of a myocardial infarction was 6% above average, while in another it was 18% below. [11]

This variability is a reflection of the current level of uncertainty about precisely how genes contribute to many diseases. Most diseases are polygenic, with an array of contributing environmental and lifestyle factors also playing a role in disease onset. [12] Hence, in all but a handful of diseases where robust genetic markers have been identified (such as the BRCA mutations for breast and ovarian cancers), these DTC test results are of questionable validity. An individual’s risk of Type 2 Diabetes Mellitus cannot simply be distilled down into a single numerical value.

Even for those diseases where isolated genetic markers have been identified in the literature, the findings are specific to the population studied. The majority of linkage analyses are performed in North American or European populations and may not be directly applicable to an Australasian context. Population bias aside, there is also a high level of ambiguity in how various genetic markers interact. As an example, consider two alleles that have each been shown to increase the risk of macular degeneration by 10%. It is not valid to say that the presence of both alleles signifies a 20% risk increase. This relates to the concept of epistasis in statistical genetics – the combined phenotypic effect of two alleles may differ from the sum of the individual effects. The algorithms currently used by DTC testing companies do not account for the complexity of gene-phenotype relationships.
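To make the arithmetic concrete, the short sketch below uses purely hypothetical numbers (a 5% baseline risk and two alleles each conferring a 10% relative risk increase; these figures and the interaction factor are illustrative assumptions, not published values or any company’s actual algorithm) to show how additive, independent-multiplicative and epistatic models of the same two markers yield different risk estimates.

# A minimal, illustrative sketch of why per-allele risk figures cannot
# simply be added together. All numbers are hypothetical.

baseline_risk = 0.05   # assumed absolute population risk of the disease
rr_a = 1.10            # hypothetical relative risk conferred by allele A
rr_b = 1.10            # hypothetical relative risk conferred by allele B

# Naive additive reasoning: "10% + 10% = a 20% increase in risk"
additive = baseline_risk * (1 + 0.10 + 0.10)

# Multiplicative model, valid only if the alleles act independently
independent = baseline_risk * rr_a * rr_b

# Epistasis: the joint effect differs from the product of the individual
# effects; here a hypothetical interaction factor dampens the combined risk
interaction = 0.95
epistatic = baseline_risk * rr_a * rr_b * interaction

print(f"additive assumption: {additive:.4f}")    # 0.0600
print(f"independent alleles: {independent:.4f}") # 0.0605
print(f"with epistasis:      {epistatic:.4f}")   # 0.0575

Without knowing the interaction term, which differs between marker pairs and between populations, no simple arithmetic on the individual effect sizes can recover the true combined risk.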

For these reasons, the NHMRC states in its guide to the public about DTC testing: “At this time, studies have yet to prove that such susceptibility tests give accurate results to consumers.” [12] At best, current DTC testing is only valid as a rough guide to identify any risks that are particularly high or low. At worst, it is a blatantly misleading risk estimate based on insufficient molecular and clinical data. However, as our understanding of genetic markers improves, so too will the utility of these tests.

Can customers handle the results?
Assuming test quality improves, the next question is whether average individuals can deal with this type of risk information. What might the psychological consequences be if a healthy 25-year-old discovered that they had a 35% chance of developing ischaemic heart disease at some time during their life?

One risk is that people with an unfavourable prognosis may become discouraged from caring about their health at all, because they feel imprisoned within an immutable ‘genetic destiny.’ [13] As disease is written into their genes, they may as well surrender and accept it. Even someone with an average disease risk may feel an impending sense of doom when confronted with the vast array of diseases that may one day debilitate them. Could endless accounting of genetic risks overshadow the joy of living?

It is fair to say that DTC testing will only be useful if individuals have the right attitude – if they use this foreknowledge to take preventative measures. But do genetic test results really cause behaviour modification? A fascinating study in the New England Journal of Medicine in 2011 analysed the behavioural patterns of 2037 patients before and after a DTC genetic test. [14] They found no difference in exercise behaviour or dietary fat intake, suggesting that the genetic risk analysis did not translate into measurable lifestyle modification.

In order for individuals to interpret and use this genetic information effectively, they will need advice from healthcare professionals. Many of the DTC testing companies offer their own genetic counselling services; however, only 10% of clients reported accessing these. [15] The current position of the Australian Medical Association is that patients should consult a general practitioner when interpreting the results of a DTC genetic test. [7] However, a forced marriage between commercial sequencing companies and the healthcare system threatens to create problems of its own.

How should the health system adapt?
A 2011 study in North Carolina found that one in five family physicians had already been asked a question about pre-symptomatic genetic tests, yet 85% of the surveyed doctors reported that they were not sufficiently prepared to interpret test data. [16] In Australia, the healthcare system needs to adapt to this emerging trend. The question is – to what extent?

One controversial issue is whether it should be mandatory for doctors to be consulted when an individual orders a genetic test. Australia currently requires the involvement of a health practitioner to perform a disease-related genetic test. [3] Many countries, with the notable exception of the United States, share this stance. The German government ruled in early 2010 that pre-symptomatic testing could only be ordered by doctors trained in genetic counselling. [11] However, critics argue that mandatory doctor involvement would add medical legitimacy to a technology still in its infancy. [17] There is also an ethical argument that individuals should have the right to know about their own genes independent of the health system. [18]

Then there is the issue of how DTC genetic data should influence treatment. For example, should someone genetically predisposed to Type 2 Diabetes Mellitus be screened more regularly than others? Or, in a more extreme scenario: should those with more favourable genetic outlooks be prioritised for high-demand procedures such as transplant surgery?

These are serious ethical dilemmas; however, the medical community has had to deal with such issues before, whenever a new technology has arisen. With appropriate consultation from ethics committees (such as the NHMRC-affiliated Human Genetics Society of Australasia) and improved genetic literacy among healthcare professionals, it is possible to imagine a symbiotic partnership between the health system and free-market genetic testing.

How do we safeguard genetic privacy?
If DTC testing is indeed here to stay, a further concern is raised: how do we protect genetic privacy? Suppose a potential employer were to gain access to genetic data – the consequences could be disastrous for those with a poor prognosis. The outcome may be even worse if these data were made available to their insurance company.

In Australia, the disclosure of an individual’s genetic data by third parties (such as a genetic testing company) is tightly regulated under the Privacy Act 1988, which forbids its use for any purpose beyond that for which it was collected. [19] The only exception, based on the Privacy Legislation Amendment Act 2006, is for genetic data to be released to ‘genetic relatives’ in situations where disclosure could significantly benefit their health. [19]

In spite of the Privacy Act, individuals may still be forced to disclose their own test results to a third party such as an insurer or employer. There have been numerous reports of discrimination on the basis of genetic data in an Australian context. [20-22] The Australian Genetic Discrimination Project has been surveying the experiences of clients visiting clinical geneticists for ‘predictive or pre-symptomatic’ genetic testing since 1998. The pilot data, published in 2008, showed that 10% of the 951 subjects reported some form of negative treatment as a result of their genetic test results. [23] Of the alleged incidents of discrimination, 42% were related to insurance and 5% to employment.

The use of genetic data by insurance companies is a complex issue. Although private health insurance in Australia is priced purely on basic demographic data, life and disability insurance is contingent on an individual’s prior medical record. This means that customers must disclose the results of any genetic testing (DTC or otherwise) they may have undergone. This presents a serious disincentive for purchasing a DTC test. The Australian Law Reform Commission, in its landmark report Essentially Yours: the Protection of Human Genetic Information in Australia, discusses the possibility of a two-tier system where insurance below a specific value would not require disclosure of any genetic information. [22] Sweden and the United Kingdom have both implemented such systems in the past; however insurers have argued that the Australian insurance market is not sufficiently large to accommodate a two-tiered model. [22]

As genetic testing becomes increasingly widespread, a significant issue will be whether insurance companies should be allowed to request genetic data as a standard component of insurance applications. Currently, the Investment and Financial Services Association of Australia, which represents all major insurance companies, has stated that no individual will be forced to have a genetic test. [24] But how long will this moratorium last?

Suffice it to say that privacy and anti-discrimination legislation needs to adapt to the times. There needs to be careful regulation of how these genomics companies use and protect sensitive data, and robust legislation against genetic discrimination. Organisations such as the Australian Law Reform Commission and the Human Genetics Society of Australasia will continue to play an integral role in this process.

However, there are some fundamental issues that even legislation cannot fix. For example, with the current system of mail-order genetic testing, there is no way of verifying the identity of the person ordering the test. This means that someone could easily send in DNA that is not their own. In addition, an individual’s genetic results reveal a great deal about their close family members. Consequently, someone who does not wish to know their genetic risks might be forcibly confronted with this information through a relative’s results. We somehow need to construct a system that preserves an individual’s right of autonomy over their own genetic data.

What does the future hold?
DTC genetic testing is clearly a technology still in its infancy, with many problems yet to be overcome. There are issues regarding test quality, psychological ramifications, integration with the health system and genetic privacy. On closer inspection, this risk-detection tool turns out to be a significant risk in itself. So does pre-symptomatic genetic testing have a future?

The current business platform, wherein clients mail their DNA to overseas companies, is unviable as a scalable medical model. This paper proposes that the following five conditions are necessary (although not sufficient) for pre-symptomatic genetic testing to persist into the future in an acceptable form:
(i) Improved clinical utility
(ii) Updated anti-discrimination legislation pertaining to genetic test data
(iii) Accessible genetic counselling facilities and community education about interpreting genetic results
(iv) Monitoring of DTC companies by regulatory bodies such as the Therapeutic Goods Administration (TGA)
(v) Mechanisms for identity verification to prevent fraudulent DNA analysis

Let us analyse each of these propositions. Condition (i) will be gradually fulfilled as our understanding of genetic markers and bioinformatics develops. A wealth of new data is emerging from large-scale sequencing studies spanning diverse populations, with advanced modelling of gene-gene interactions. [25,26] Condition (ii) is also a likely future prospect – the report by the Australian Law Reform Commission is evidence of a responsive legislative landscape. [22] Condition (iii) is feasible, contingent on adequate funding for publicly accessible genetic counselling services and education programs. However, given that the clinical utility of DTC risk analysis is currently low, it would be difficult in the short term to justify any public expenditure on counselling services targeted at test users.

Conditions (iv) and (v) are more difficult to satisfy. Since DTC companies are all located overseas, they fall outside the jurisdiction of the Australian TGA. Given that consumers may make important healthcare choices based on DTC results, it is imperative that this industry be regulated. We have three options. First, we could rely on appropriate monitoring by foreign regulatory bodies. In the US, DTC genetic tests are classed as ‘in vitro diagnostic devices’ (IVDs), meaning they are subject to FDA regulation. However, in a testimony before the US government’s Subcommittee on Oversight and Investigations in July 2010, the FDA stated that it has “generally exercised enforcement discretion” in regulating IVDs. [27] It went on to admit that “none of the genetic tests now offered directly to consumers has undergone premarket review by the FDA to ensure that the test results being provided to patients are accurate, reliable, and clinically meaningful.” This is an area of active reform in the US; however, it seems unwise for Australia to blindly accept the standards of overseas regulators.

The second option is to restrict Australian consumers’ access to overseas DTC testing. Many prescription medicines are subject to import controls if they are shipped into Australia; in theory, the same regulations could be applied to genetic test kits. However, it is not difficult to imagine ways around such a ban, e.g. simply posting an oral swab and receiving the results online.

A third option is to open the market for Australian DTC testing companies, which could compete with overseas services while remaining under TGA surveillance. In other words, we could cultivate a domestic industry. However, it may not be possible for fledgling Australian companies to compete on price with the large-scale US operations. It would also be hard to justify the change in policy before conditions (i) to (iii) are fulfilled. That said, of the three options discussed, this appears to be the most viable in the long term.

Finally, condition (v) presents one of the fundamental flaws of DTC testing. If the health system were formally involved in the testing process, the medical practitioner would be responsible for identity verification. However, it is simply not possible to reliably verify identity in a mail-order system. The only way a DTC company can verify identity is to have customers attend a facility in person and provide proof of identity when their DNA is collected. However, such a requirement would make it even more difficult for any Australian company to compete against online services.

Conclusion
In summary, it is very difficult to construct a practical model that addresses conditions (iv) and (v) in an Australian context. Hence, for the short term, DTC testing will likely remain a controversial, unregulated market run through overseas websites. It is the duty of the TGA to inform the public about the risks of these products, and the duty of the health system to support those who do choose to purchase a test.
For the longer term, it seems that the only sustainable solution is to move towards an Australian-based testing infrastructure linked into the healthcare system (for referrals and post-test counselling). There are many hurdles to overcome; however, one might envisage a situation, twenty years from now, where a genetic risk analysis is a standard medical procedure offered to all adults and subsidised by the health system, and where individuals particularly susceptible to certain conditions can maximise their quality of life by making educated lifestyle changes and choosing medications that best suit their genetic profiles. [28]
As a medical community, therefore, we should be wary of the current range of DTC tests, but also open-minded about the possibilities for a future partnership. If we get it right, the potential payoff for preventative medicine is huge.

Conflict of interest
None declared.

Correspondence
M Seneviratne: msen5354@uni.sydney.edu.au

References
[1] 23andMe. Genetic testing for health, disease and ancestry. Available from: www.23andme.com.
[2] Antonarakis SE. Diagnosis of genetic disorders at the DNA level. N Engl J Med. 1989;320(3):153-63.
[3] Trent R, Otlowski M, Ralston M, Lonsdale L, Young M-A, Suther G, et al. Medical Genetic Testing: Information for health professionals. Canberra: National Health and Medical Research Council, 2010.
[4] Perrone M. 23andMe’s DNA test seeks FDA approval. USA Today Business. 2012.
[5] Ramani D, Saviane C. Genetic tests: Between risks and opportunities. EMBO Rep. 2010;11:910-13.
[6] Miller N. Fine print hides risk of genetic test offer. The Age. 2010.
[7] Position statement on genetic testing – 2012. Australian Medical Association, 2012.
[8] Human Genetics Society of Australasia. Issue Paper: Direct to consumer genetic testing. 2007.
[9] Clinical Utility of Personalised Medicine. NHMRC. 2011.
[10] Knight C, Rodder S, Sturdy S. Response to Nuffield Bioethics Consultation Paper. ESRC Genomics Policy and Research Forum. 2009.
[11] Hood C, Khaw KT, Liddel K, Mendus S. Medical profiling and online medicine: The ethics of personalised healthcare in a consumer age. Nuffield Council on Bioethics. 2010.
[12] Direct to Consumer Genetic Testing: An information resource for consumers. NHMRC. 2012.
[13] Green SK. Getting personal with DNA: From genome to me-ome. Virtual Mentor. 2009;11(9):714-20.
[14] Bloss CS, Schork NJ, Topol EJ. Effect of direct-to-consumer genomewide profiling to assess disease risk. N Engl J Med. 2011;364(6):524-34.
[15] Caulfield T, McGuire AL. Direct-to-consumer genetic testing: Perceptions, problems, and policy responses. Annu Rev Med. 2012;63(1):23-33.
[16] Powell K, Cogswell W, Christianson C, Dave G, Verma A, Eubanks S, et al. Primary care physicians’ awareness, experience and opinions of direct-to-consumer genetic testing. J Genet Couns. 1-14.
[17] Frueh FW, Greely HT, Green RC. The future of direct-to-consumer clinical genetic tests. Nat Rev Genet. 2011;12:511-15.
[18] Sandroff R. Direct-to-consumer genetic tests and the right to know. Hastings Center Report. 2010;40(5):24-5.
[19] Use and disclosure of genetic information to a patient’s genetic relatives under section 95AA of the Privacy Act 1988 (Cth). NHMRC / Office of the Privacy Commissioner, 2009.
[20] Barlow-Stewart K, Keays D. Genetic Discrimination in Australia. Journal of Law and Medicine. 2001;8:250-63.
[21] Otlowski M. Investigating genetic discrimination in the Australian life insurance sector: The use of genetic test results in underwriting, 1999-2003. Journal of Law and Medicine. 2007;14:367.
[22] Essentially Yours: The protection of human genetic information in Australia (ALRC Report 96). Australian Law Reform Commission, 2003.
[23] Taylor S, Treloar S, Barlow-Stewart K, Stranger M, Otlowski M. Investigating genetic discrimination in Australia: A large-scale survey of clinical genetics clients. Clinical Genetics. 2008;74(1):20-30.
[24] Barlow-Stewart K. Life Insurance products and genetic testing in Australia. Centre for Genetics Education, 2007.
[25] Davey JW, Hohenlohe PA, Etter PD, Boone JQ, Catchen JM, Blaxter ML. Genome-wide genetic marker discovery and genotyping using next-generation sequencing. Nat Rev Genet. 2011;12(7):499-510.
[26] Saunders CL, Chiodini BD, Sham P, Lewis CM, Abkevich V, Adeyemo AA, et al. Meta-Analysis of Genome-wide Linkage Studies in BMI and Obesity. Obesity. 2007;15(9):2263-75.
[27] Food and Drug Administration Center for Devices and Radiological Health. Direct-to-Consumer Genetic Testing and the Consequences to the Public. Subcommittee on Oversight and Investigations, Committee on Energy and Commerce, US House of Representatives; 2010.
[28] Mrazek DA, Lerman C. Facilitating clinical implementation of pharmacogenomics. JAMA. 2011;306(3):304-5.


Eye protection in the operating theatre: Why prescription glasses don’t cut it

Introduction
Needle-stick injury represents a serious occupational hazard for medical professionals, and much time is spent on educating students and practitioners on its prevention. Acquiring a blood-borne viral infection such as Human Immunodeficiency Virus (HIV), Hepatitis B or C from a patient is a rare yet devastating event. While most often associated with ‘sharps’ injuries, viral transmission is possible across any mucocutaneous membrane – including the eye. Infection via the transconjunctival route is a particularly relevant occupational hazard for operating room personnel, where bodily fluids are commonly encountered. Published cases of HIV seroconversion after ocular blood splash reinforce the importance of eye protection. [1]

Surgical operations carry an inherent risk of blood splash injury – masks with visors are provided in operating theatres for this reason. However, many surgeons and operating personnel rely solely upon prescription glasses for eye protection, despite spectacles having been shown to be an ineffective safeguard against blood splash injury. [2]

Incidence of blood splash injury
Blood splash is understandably more prevalent in some surgical specialties, such as orthopaedics, where power tools and other instruments increase the likelihood of blood spray. [3] Within these specialties, the risk is acknowledged and the use of more comprehensive eye protection is usually routine.

Laparoscopic and endoscopic procedures, in particular, may be viewed as low-risk, yet in one prospective study the rate of positive blood splash found on post-operative examination of eye protection approached 50%. [4] These results imply that even minimally invasive procedures need to be treated with a high level of vigilance.

The prevalence of blood splash during general surgical operations is highlighted by a study that followed one surgeon over a 12 month period and recorded all bodily fluids evident on protective eyewear following each procedure. [5] Overall, 45% of surgeries performed resulted in blood splash and an even higher incidence (79%) was found in vascular procedures. In addition, half of the laparoscopic cases were associated with blood recorded on the protective eyewear postoperatively.

A similar prospective trial undertaken in Australia found that protective eye shields tested positive for blood in 44% of operations, yet the surgeon was only aware of the incident in 18% of these cases. [6] Much blood spray during surgery is not visually perceptible, and this study demonstrates that the incidence of blood splash during a procedure may be considerably higher than is realised.

Although blood splash occurs predominantly within the operating theatre, the risk is not limited to surgeons and theatre staff – even minor surgery carries a considerable risk of blood splash. A review of 500 simple skin lesion excisions in a procedural dermatology unit revealed positive blood splash on the facemask or visor in 66% of cases, which highlights the need for protective eyewear in all surgical settings. [7]

Risk of blood splash injury
Although rare, ocular blood splash injury can result from even a basic procedure such as venepuncture. Several cases of confirmed hepatitis C virus (HCV) transmission via the conjunctival route have been reported. [8-10]

Although the rates of blood-borne infectious disease are reasonably low within Australia, and the rate of seroconversion following a blood splash injury is likewise low at around 2%, [9] the consequences of contracting HIV, hepatitis B virus (HBV) or HCV from a seropositive patient are potentially serious and require strict adherence to post-exposure prophylaxis protocols. [11] Exposure to bodily fluids, particularly blood, is an unavoidable occupational risk for most health care professionals, but personal risk can be minimised by using appropriate universal precautions.
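A low per-incident risk can still accumulate over a procedural career, as the rough sketch below illustrates. The 2% seroconversion figure (given exposure from a seropositive source) is taken from the text above; the source-seropositivity prevalence, exposure frequency and career length are purely hypothetical assumptions chosen for illustration.

# Illustrative sketch of cumulative risk over repeated splash exposures.
seroconversion_given_positive_source = 0.02   # from the cited figure above
prevalence_positive_source = 0.01             # hypothetical: 1% of source patients
splashes_per_year = 40                        # hypothetical exposure frequency
years = 10                                    # hypothetical career window

per_splash_risk = seroconversion_given_positive_source * prevalence_positive_source
n_exposures = splashes_per_year * years
cumulative_risk = 1 - (1 - per_splash_risk) ** n_exposures

print(f"Risk per splash: {per_splash_risk:.4%}")                              # 0.0200%
print(f"Cumulative risk over {n_exposures} splashes: {cumulative_risk:.2%}")  # ~7.69%

Under these assumed figures, a risk of one in five thousand per splash compounds to a cumulative risk of several percent over a decade, which is the arithmetic rationale for treating every exposure with universal precautions.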

For those operating theatre personnel who wear prescription glasses, there exists a common belief that no additional eye protection is necessary. The 2007 Waikato Eye Protection Study [2] surveyed 71 practising surgeons, of whom 45.1% required prescription glasses while operating. Of the respondents, 84.5% had experienced periorbital blood splash during their operating careers, and 2.8% had gone on to contract an illness from such an event. While nearly 80% of the participants routinely used eye protection, 68% of those who wore prescription glasses used them as their sole eye protection.

A 2009 in vitro study examining the effectiveness of various forms of eye protection in orthopaedic surgery [12] employed a simulation model in which a mannequin head was placed in a typical position in the operating field while femoral osteotomy was performed on a cadaveric thigh. The resulting blood splash on six different types of protective eyewear was measured; prescription glasses offered no benefit over the control (no protection). While none of the eye protection methods tested offered complete protection, significantly lower rates of conjunctival contamination were recorded for recommended eyewear, including facemask and eyeshield, hard plastic glasses and disposable plastic glasses.

Prevention and management of blood splash injury
Given that blood splash is an occupational hazard, the onus is on the hospital and clinical administration to ensure that there are adequate supplies of protective eye equipment available. Disposable surgical masks with full-face visors have been shown to offer the highest level of protection from blood splash injury [12] and ought to be readily accessible for all staff involved in procedures or settings where contact with bodily fluids is possible. The use of masks and visors should be standard practice for all theatre staff, including assistants, scrub nurses and observers, regardless of the use of prescription spectacles.

Should an incident occur, a procedure similar to that used for needle-stick injury may be followed to minimise the risk of infection. The eye should first be rinsed thoroughly to remove as much of the fluid as possible and serology should be ordered promptly to obtain a baseline for future comparisons. An HIV screen and acute hepatitis panel (HAV IgM, HB core IgM, HBsAg, HCV and HB surface antibody for immunised individuals) are indicated. Post-exposure prophylaxis (PEP) should be initiated as soon as practicable unless the patient is known to be HIV, HBV and HCV negative. [13]

Conclusion
Universal precautions are recommended in all instances where there is the potential for exposure to patient bodily fluids, with an emphasis on appropriate eye protection. Prescription glasses are unsuitable as the sole source of eye protection from blood splash injury. Given that a blood splash injury can occur without the wearer being aware of the event, regular blood tests for health care workers involved in frequent procedural activity may allow for early detection of, and intervention for, workplace-acquired infection.

Conflict of interest
None declared.

Correspondence
S Campbell: shaun.p.campbell@gmail.com

References
[1] Eberle J, Habermann J, Gurtler LG. HIV-1 infection transmitted by serum droplets into the eye: a case report. AIDS. 2000;14(2):206–7.
[2] Chong SJ, Smith C, Bialostocki A, McEwan CN. Do modern spectacles endanger surgeons? The Waikato Eye Protection Study. Ann Surg. 2007;245(3):495-501
[3] Alani A, Modi C, Almedghio S, Mackie I. The risks of splash injury when using power tools during orthopaedic surgery: a prospective study. Acta Orthop Belg. 2008;74(5):678-82.
[4] Wines MP, Lamb A, Argyropoulos AN, Caviezel A, Gannicliffe C, Tolley D. Blood splash injury: an underestimated risk in endourology. J Endourol. 2008;22(6):1183-7.
[5] Davies CG, Khan MN, Ghauri AS, Ranaboldo CJ. Blood and body fluid splashes during surgery – the need for eye protection and masks. Ann R Coll Surg Engl. 2007;89(8):770-2.
[6] Marasco S, Woods S. The risk of eye splash injuries in surgery. Aust N Z J Surg. 1998;68(11):785-7.
[7] Holzmann RD, Liang M, Nadiminti H, McCarthy J, Gharia M, Jones J et al. Blood exposure risk during procedural dermatology. J Am Acad Dermatol. 2008;58(5):817-25.
[8] Sartori M, La Terra G, Aglietta M, Manzin A, Navino C, Verzetti G. Transmission of hepatitis C via blood splash into conjunctiva. Scand J Infect Dis 1993;25:270-1.
[9] Hosoglu S, Celen MK, Akalin S, Geyik MF, Soyoral Y, Kara IH. Transmission of hepatitis C by blood splash into conjunctiva in a nurse. Am J Infect Control. 2003;31(8):502-4.
[10] Rosen HR. Acquisition of hepatitis C by a conjunctival splash. Am J Infect Control 1997;25:242-7.
[11] NSW Health Policy Directive, AIDS/Infectious Diseases Branch. HIV, Hepatitis B and Hepatitis C – Management of Health Care Workers Potentially Exposed. 2010;Circular No 2003/39. File No 98/1833.
[12] Mansour AA, Even JL, Phillips S, Halpern JL. Eye protection in orthopaedic surgery. An in vitro study of various forms of eye protection and their effectiveness. J Bone Joint Surg Am. 2009 May;91(5):1050-4.
[13] Klein SM, Foltin J, Gomella LG. Emergency Medicine on Call. New York: McGraw-Hill; 2003. p. 288.


The history of abdominal aortic repair: from Egypt to EVAR

Introduction

An arterial aneurysm is defined as a localised dilation of an artery to greater than 50% of its normal diameter. [1] Abdominal aortic aneurysm (AAA) is common, with an incidence five times greater in men than in women. [2] In Australia the prevalence of AAAs is 4.8% in men aged 65-69 years, rising to 10.8% in those aged 80 years and over. [3] The mortality from ruptured AAA is very high, approximately 80%, [4] whilst the aneurysm-related mortality of surgically treated, asymptomatic AAA is around five percent. [5] In Australia AAAs make up 2.4% of the burden of cardiovascular disease, contributing 14,375 disability-adjusted life years (DALYs), ahead of hypertension (14,324) and valvular heart disease (13,995). [6] Risk factors for AAA of greater than four centimetres include smoking (RR=3-5), family history (OR=1.94), coronary artery disease (OR=1.52), hypercholesterolaemia (OR=1.44) and cerebrovascular disease (OR=1.28). [7] Currently, the approach to AAA management involves active surveillance, risk factor reduction and surgical intervention. [8]

The surgical management of AAAs dates back over 3000 years and has evolved greatly since its inception. Over the course of surgical history, three landmark developments in aortic surgery arose: crude ligation, open repair and endovascular AAA repair (EVAR). This paper aims to examine the development of surgical interventions for AAA, from its experimental beginnings in ancient Egypt to the current evidence-based practice defining EVAR therapy, and to pay homage to the surgical and anatomical masters who made significant advances in this field.

Early definition

The word aneurysm is derived from the Greek aneurysma, meaning ‘widening’. The first written evidence of AAA is recorded in the ‘Book of Hearts’ from the Ebers Papyrus of ancient Egypt, dating back to 1550 BC. [9] It stated that “only magic can cure tumours of the arteries.” India’s Sushruta (c. 800-600 BC) mentions aneurysm, or ‘Granthi’, in chapter 17 of his great medical text ‘Sushruta Samhita’. [10] Although undistinguished from painful varicose veins in his text, Sushruta shared a similar sentiment to the Egyptians when he wrote “[Granthi] can be cured only with the greatest difficulty”. Galen (126-c. 216 AD), a surgeon of ancient Rome, first formally described these ‘tumours’ as localised pulsatile swellings that disappear with pressure. [11] He was also the first to draw anatomical diagrams of the heart and great vessels. His work with wounded gladiators and that of the Greek surgeon Antyllus in the same period helped to define traumatic false aneurysms as morphologically rounded, distinct from true, cylindrical aneurysms caused by degenerative dilatation. [12] This work formed the basis of the modern definition.

Early ligation

Antyllus is also credited with performing the first recorded surgical interventions for the treatment of AAA. His method involved midline laparotomy, proximal and distal ligation of the aorta, central incision of the aneurysm sac and evacuation of thrombotic material. [13] Remarkably, a few patients treated without aseptic technique or anaesthetic managed to survive for some period. Antyllus’ method was further described in the seventh century by Aetius, whose detailed paper ‘On the Dilation of Blood Vessels’ described the development and repair of AAA. [14] His approach involved stuffing the evacuated sac with incense and spices to promote pus formation, in the belief that this would aid wound healing. Although this belief would wane as knowledge of the process of wound healing improved, Antyllus’ method would remain largely unchanged until the late nineteenth century.

Anatomy

The Renaissance saw the birth of modern anatomy, and with it a proper understanding of aortic morphology. In 1543 Vesalius (1514-1564) produced the first true anatomical plates based on cadaveric dissection, in ‘De Humani Corporis Fabrica.’ [15] He later provided the first accurate diagnosis and illustrations of AAA pathology. In total, Vesalius corrected over 200 of Galen’s anatomical mistakes and is regarded as the father of modern anatomy. [16] His discoveries began almost 300 years of medical progress characterised by the ‘surgeon-anatomist’, paving the way for the anatomical greats of the sixteenth, seventeenth and eighteenth centuries. It was during this period that the great developments in the anatomical and pathological understanding of aneurysms took place.

Pathogenesis

Ambroise Paré (1510-1590) noted that aneurysms seemed to manifest following syphilis; however, he attributed the arterial disease to the treatment of syphilis rather than to the illness itself. [17] Stress on the arteries from hard work, shouting, trumpet playing and childbirth were considered other possible causes. Morgagni (1682-1771) described in detail the luetic pathology of ruptured saccular aortic aneurysms in syphilitic prostitutes, [18] whilst Monro (1697-1767) described the intima, media and adventitia of arterial walls. [19] These key advances in arterial pathology paved the way for the Hunter brothers of London (William Hunter [1718-1783] and John Hunter [1728-1793]) to develop the modern definitions of true, false and mixed aneurysms. Aneurysms were now accepted to be caused by ‘a disproportion between the force of the blood and the strength of the artery’, with syphilis regarded as a risk factor rather than the sole aetiology. [12] As life expectancy rose dramatically in the twentieth century, it became clear that syphilis was not the only cause of arterial aneurysms, as the great vascular surgeon Rudolf Matas (1860-1957) stated: “The sins, vices, luxuries and worries of civilisation clog the arteries with the rust of premature senility, known as arteriosclerosis or atheroma, which is the chief factor in the production of aneurysm.” [20]

Modern ligation

The modern period of AAA surgery began in 1817 when Astley Cooper first ligated the aortic bifurcation for a ruptured left external iliac aneurysm in a 38-year-old man. The patient died four hours later; however, this did not discourage others from attempting similar procedures. [21]

Ten further unsuccessful cases were recorded prior to the turn of the twentieth century. It was not until 1923, more than a century after Cooper’s attempt, that Matas performed the first successful complete ligation of the aorta for aneurysm; the patient survived seventeen months before dying of tuberculosis. [22] Described by Osler as the ‘modern father of vascular surgery’, Matas also developed the technique of endoaneurysmorrhaphy, which involved suturing the aneurysmal sac upon itself to restore normal luminal flow. This was the first recorded technique aiming to spare blood flow to the lower limbs, an early prelude to the homograft, synthetic graft and EVAR.

Early Alternatives to Ligation

Despite Matas’ landmark success, the majority of surgeons of the era shared Sushruta’s millennia-old fear of aortic surgery. The American Surgical Association wrote in 1940, “the results obtained by surgical intervention have been discouraging.” Such fear prompted a resurgence of techniques introducing foreign material into the aneurysmal lumen with the hope of promoting thrombosis. First attempted by Velpeau [23] with sewing needles in 1831, this technique was modified by Moore [24] in 1864 using 26 yards of iron wire. Failure of aneurysm thrombosis was blamed on ‘under packing’ the aneurysm. Corradi used a similar technique, passing electric current through the wire to induce thrombosis. This technique became known as fili-galvanopuncture or the ‘Moore-Corradi method’. Although this technique lost popularity for aortic procedures, it marked the beginning of electrothrombosis and coiling of intracranial aneurysms in the latter half of the twentieth century. [25]

Another alternative was wrapping the aneurysm with material in an attempt to induce fibrosis and contain the aneurysm sac. AAA wrapping with cellophane was investigated by Pearse in 1940 [26] and Harrison in 1943. [27] Most notably, Nissen, the pioneer of Nissen fundoplication for hiatus hernia, famously wrapped Albert Einstein’s AAA with cellophane in 1948. [28] The aneurysm finally ruptured in 1955, with Einstein refusing surgery: “I want to go when I want. It is tasteless to prolong life artificially.” [28]

Anastomosis

Many would argue that the true father of modern vascular techniques is Alexis Carrel. Techniques he pioneered underpinned the first saphenous vein bypass in 1948, the first successful kidney transplant in 1955 and the first human limb re-implantation in 1962. [13,29] Friedman states that “there are few innovations in cardiac and vascular surgery today that do not have roots in his work.” [13] Perhaps of greatest note was Carrel’s development of the triangulation technique for vessel anastomosis.

This technique was utilised by Crafoord in Sweden in 1944, in the first correction of aortic coarctation, and by Shumacker [30] in 1947 to correct a four centimetre thoracic aortic aneurysm secondary to coarctation. Prior to this time, coarctation was treated in a similar fashion to AAA, with ligation proximal and distal to the defect. [31] These developments were great milestones on the path to AAA surgery, representing the first successful aortic aneurysm resections with restoration of arterial continuity.

Biological grafts

Despite this success, restoration of arterial continuity was limited to the thoracic aorta. Abdominal aneurysms remained too large to be anastomosed directly and a different technique was needed. Carrel played a key role in the development of arterial grafting, used when end-to-end anastomosis was unfeasible. The original work was performed by Carrel and Guthrie (1880-1963) with experiments transplanting human and canine vessels. [32,33] Their 1907 paper ‘Heterotransplantation of blood vessels’ [34] began with:

“It has been shown that segments of blood vessels removed from animals may be caused to regain and indefinitely retain their function.”

This discovery led to the first replacement of a thrombosed aortic bifurcation by Jacques Oudot (1913-1953) with an arterial homograft in 1950. The patient recovered well, and Oudot went on to perform four similar procedures. The landmark first AAA resection with restoration of arterial continuity can be credited to Charles Dubost (1914-1991) in 1951. [35] His patient, a 51-year-old man, received the aorta of a young girl harvested three weeks previously. This brief period of excitement quickly subsided when it was realised that the long-term patency of aortic homografts was poor. It did, however, lay the foundations for the age of synthetic aortic grafts.

Synthetic grafts

Arthur Voorhees (1921-1992) can be credited with the invention of synthetic arterial prosthetics. In 1948, during experimental mitral valve replacement in dogs, Voorhees noticed that a misplaced suture had later become enveloped in endocardium. He postulated that “a cloth tube, acting as a lattice work of threads, might indeed serve as an arterial prosthesis.” [36] Voorhees went on to test a wide variety of materials as possible candidates for synthetic tube grafts, resulting in the use of vinyon-N, the material used in parachutes. [37] His work with animal models would lead to a list of essential structural properties of arterial prostheses. [38]

Vinyon-N proved robust, and was introduced into clinical use by Voorhees, Jaretski and Blakemore. In 1952 Voorhees inserted the first synthetic graft into a ruptured AAA. Although the vinyon-N graft was successfully implanted, the patient died shortly afterwards from a myocardial infarction. [39] By 1954, Voorhees had repaired 17 AAAs with similar grafts. Shumacker and Muhm conducted similar procedures with nylon grafts at around the same time. [40] Vinyon-N and nylon were quickly supplanted by Orlon. Similar materials with improved tensile strength are used in open AAA repair today, including Teflon, Dacron and expanded polytetrafluoroethylene (PTFE). [41]

Modern open surgery

With the development of suitable graft material began the golden age of open AAA repair. The focus would now be largely on the Americans, particularly the surgeons DeBakey (1908-2008) and Cooley (b. 1920) leading the way in Houston, Texas. In the early 1950s, DeBakey and Cooley developed and refined an astounding number of aortic surgical techniques. DeBakey would also classify aortic dissections into different types depending on their site. In 1952, a year after Dubost’s first success in France, the pair performed the first repair of a thoracic aortic aneurysm, [42] and a year later, the first aortic arch aneurysm repair. [43] It was around this time that the risks of spinal cord ischaemia during aortic surgery became apparent. Moderate hypothermia was used at first, and was supplemented in 1957 by Gerbode’s development of extracorporeal circulation, coined ‘left heart bypass’. In 1963, Gott expanded on this idea with a heparin-treated polyvinyl shunt from the ascending to the descending aorta. By 1970, centrifuge-powered left-heart bypass with selective visceral perfusion had been developed. [44] In 1973, Crawford simplified DeBakey and Cooley’s technique by introducing sequential clamping of the aorta. By moving clamps distally, Crawford allowed for reperfusion of segments following the anastomoses of what had now become increasingly complex grafts. [45] The work of DeBakey, Cooley and Crawford paved the way for the remarkable outcomes available to modern patients undergoing open AAA repair. Where the operation was once feared by surgeons and patients alike, elective open AAA repair now carries a 30-day all-cause mortality of around five percent. [58]

Imaging

It must not be overlooked that significant advances in medical imaging have played a major role in reducing the incidence of ruptured AAAs and the morbidity and mortality associated with AAAs in general. The development of diagnostic ultrasound began in the late 1940s and 50s, with simultaneous research by John Wild in the United States, Inge Edler and Carl Hertz in Sweden and Ian Donald in Scotland. [46] It was the latter who published ‘Investigation of Abdominal Masses by Pulsed Ultrasound,’ regarded as one of the most important papers in diagnostic imaging. [47] By the 1960s, Doppler ultrasound would provide clinicians with both a structural and functional view of vessels, with colour flow Doppler in the 1980s allowing images to represent the direction of blood flow. The Multicentre Aneurysm Screening Study showed that ultrasound screening resulted in a 42% reduction in mortality from ruptured AAAs over four years to 2002. [48] Ultrasound screening has resulted in an overall increase in hospital admissions for asymptomatic aneurysms; however, increases in recent years cannot be attributed to improved diagnosis alone, as it is known that the true incidence of AAA is also increasing in concordance with Western vascular pathology trends. [49]

In addition to the investigative power of ultrasound imaging, computed tomography (CT) scanners became available in the early 1970s. As faster, higher-resolution spiral CT scanners became more accessible in the 1980s, the diagnosis and management of AAAs became significantly more refined. [50] CT angiography has emerged as the gold standard for defining aneurysm morphology and planning surgical intervention. It is crucial in determining when emergent treatment is necessary, when calcification and soft tissue may be unstable, when the aortic wall is thickened or adhered to surrounding structures, and when rupture is imminent. [51] Overall operative mortality from ruptured AAA fell by 3.5% per decade from 1954 to 1997. [52] This was due to significant advances in surgical technique combined with drastically improved imaging modalities.

EVAR

The advent of successful open surgical repair of AAAs using synthetic grafts in the 1950s proved to be the first definitive treatment for AAA. However, the procedure remained highly invasive and many patients were excluded due to medical and anatomical contraindications. [53] Juan Parodi’s work with Julio Palmaz and Héctor Barone in the late 1980s aimed to rectify this issue. Parodi developed the first catheter-based arterial approach to AAA intervention. The first successful EVAR operation was completed by Parodi in Argentina on 7 September 1990. [54] The aneurysm was approached intravascularly via a femoral cutdown. Restoration of normal luminal blood flow was achieved with the deployment of a Dacron graft mounted on a Palmaz stent. [55] There was no need for aortic cross-clamping or major abdominal surgery. Similar minimally invasive strategies were explored independently and concurrently by Volodos, [56] Lazarus [57] and Balko. [58]

During this early period of development there was significant Australian involvement. The work of Michael Lawrence-Brown and David Hartley at the Royal Perth Hospital led to the manufacture of the Zenith endovascular graft in 1993, a key milestone in the development of modern-day endovascular aortic stent-grafts. [59] The first bifurcated graft was successfully implanted one year later. [60] Prof James May and his team at the Royal Prince Alfred Hospital in Sydney conducted further key research, investigating the causes of aortic stent failure and complications. [61] This group went on to pioneer the modular design of present day aortic prostheses. [62]

The FDA approved the first two AAA stent grafts for widespread use in 1999. Since then, technical improvements in device design have resulted in improved surgical outcomes and increased ability to treat patients with difficult aneurysmal morphology. Slimmer device profiles have allowed easier device insertion through tortuous iliac vessels. [63] Furthermore, fenestrated and branched grafts have made possible the stent-grafting of juxtarenal AAA, where suboptimal proximal neck anatomy once meant traditional stenting would lead to renal failure and mesenteric ischaemia. [64]

AAA intervention now and beyond

Today, surgical intervention is generally reserved for AAAs greater than 5.5 cm in diameter and may be achieved by either open or endoluminal access. The UK Small Aneurysm Trial determined that there is no survival benefit from elective open repair of aneurysms smaller than 5.5 cm. [8] The EVAR-1 trial (2005) found EVAR to reduce aneurysm-related mortality by three percent at four years when compared to open repair; however, EVAR remains significantly more expensive and requires more re-interventions. Furthermore, it offers no advantage with respect to all-cause mortality or health-related quality of life. [5] These findings raised significant debate over the role of EVAR in patients fit for open repair. This controversy was furthered by the findings of the EVAR-2 trial (2005), which saw risk factor modification (fitness and lifestyle) as a better alternative to EVAR in patients unfit for open repair. [65] Many would argue that these figures are obsolete, with Criado stating, “it would not be unreasonable to postulate that endovascular experts today can achieve far better results than those produced by the EVAR-1 trial.” [53] It is undisputed that EVAR has dramatically changed the landscape of surgical intervention for AAA. By 2005, EVAR accounted for 56% of all non-ruptured AAA repairs but only 27% of operative mortality. Since 1993, deaths related to AAA have decreased dramatically, by 42%. [53] EVAR’s shortcomings of high long-term rates of complications and re-interventions, as well as questions of device performance beyond ten years, appear balanced by the procedure’s improved operative mortality and minimally invasive approach. [54]

Conclusion

The journey towards truly effective surgical intervention for AAA has been a long and experimental one. Once regarded as one of the most deadly pathologies, with little chance of a favourable surgical outcome, AAAs can now be successfully treated with minimally invasive procedures. Sushruta’s millennia-old fear of abdominal aortic surgery appears well and truly overcome.

Conflict of interest

None declared.

Correspondence

A Wilton: awil2853@uni.sydney.edu.au

References

[1] Kumar V, et al. Robbins and Cotran Pathologic Basis of Disease. 8th ed. Elsevier; 2010.
[2] Semmens J, Norman PE, Lawrence-Brown MMD, Holman CDJ. Influence of gender on outcome from ruptured abdominal aortic aneurysm. British Journal of Surgery. 2000;87:191-4.
[3] Jamrozik K, Norman PE, Spencer CA, et al. Screening for abdominal aortic aneurysms: lessons from a population-based study. Med J Aust. 2000;173:345-50.
[4] Semmens J, Lawrence-Brown MMD, Norman PE, Codde JP, Holman CDJ. The Quality of Surgical Care Project: Benchmark standards of open resection for abdominal aortic aneurysm in Western Australia. Aust N Z J Surg. 1998;68:404-10.
[5] The EVAR trial participants. EVAR-1 (EndoVascular Aneurysm Repair): EVAR vs open repair in patients with abdominal aortic aneurysm. Lancet. 2005;365:2179-86.
[6] National Heart Foundation of Australia. The Shifting Burden of Cardiovascular Disease. 2005.
[7] Fleming C, Whitlock EP, Beil TL, Lederle FA. Screening for abdominal aortic aneurysm: a best-evidence systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2005;142(3):203-11.
[8] United Kingdom Small Aneurysm Trial Participants. UK Small Aneurysm Trial. N Engl J Med. 2002;346:1445-52.
[9] Ghalioungui P. Magic and Medical Science in Ancient Egypt. Hodder and Stoughton Ltd. 1963.
[10] Bhishagratna KKL. An English Translation of The Sushruta Samhita. Calcutta: Self Published; 1916.
[11] Lytton DG, Resuhr LM. Galen on Abnormal Swellings. J Hist Med Allied Sci. 1978;33(4):531-49.
[12] Suy R. The Varying Morphology and Aetiology of the Arterial Aneurysm. A Historical Review. Acta Chir Belg. 2006;106:354-60.
[13] Friedman SG. A History of Vascular Surgery. New York: Futura Publishing Company 1989;74-89.
[14] Stehbens WE. History of Aneurysms. Med Hist 1958;2(4):274–80.
[15] Van Hee R. Andreas Vesalius and Surgery. Verh K Acad Geneeskd Belg. 1993;55(6):515-32.
[16] Kulkarni NV. Clinical Anatomy: A problem solving approach. New Delhi: Jaypee Brothers Medical Publishers. 2012;4.
[17] Paré A. Les OEuvres d’Ambroise Paré. Paris: Gabriel Buon; 1585.
[18] Morgagni GB. Librum quo agitur de morbis thoracis. Italy: Lovanni; 1767 p270-1.
[19] Monro DP. Remarks on the coats of arteries, their diseases, and particularly on the formation of aneurysm. Medical essays and Observations. Edinburgh, 1733.
[20] Matas R. Surgery of the Vascular System. AMA Arch Surg. 1956;72(1):1-19.
[21] Brock RC. The life and work of Sir Astley Cooper. Ann R Coll Surg Engl. 1969;44:1.
[22] Matas R. Aneurysm of the abdominal aorta at its bifurcation into the common iliac arteries. A pictorial supplement illustrating the history of Corrinne D, previously reported as the first recorded instance of cure of an aneurysm of the abdominal aorta by ligation. Ann Surg. 1940;122:909.
[23] Velpeau AA. Memoire sur la figure de l’acupuncture des arteres dans le traitement des anevrismes. Gaz Med. 1831;2:1.
[24] Moore CH, Murchison C. On a method of procuring the consolidation of fibrin in certain incurable aneurysms. With the report of a case in which an aneurysm of the ascending aorta was treated by the insertion of wire. Med Chir Trans. 1864;47:129.
[25] Siddique K, Alvernia J, Frazer K, Lanzino G. Treatment of aneurysms with wires and electricity: a historical overview. J Neurosurg. 2003;99:1102–7.
[26] Pearse HE. Experimental studies on the gradual occlusion of large arteries. Ann Surg. 1940;112:923.
[27] Harrison PW, Chandy J. A subclavian aneurysm cured by cellophane fibrosis. Ann Surg. 1943;118:478.
[28] Cohen JR, Graver LM. The ruptured abdominal aortic aneurysm of Albert Einstein. Surg Gynecol Obstet. 1990;170:455-8.
[29] Edwards WS, Edwards PD. Alexis Carrel: Visionary surgeon. Springfield, IL: Charles C Thomas Publisher, Ltd 1974;64–83.
[30] Shumacker HB Jr. Coarctation and aneurysm of the aorta. Report of a case treated by excision and end-to-end suture of aorta. Ann Surg. 1948;127:655.
[31] Alexander J, Byron FX. Aortectomy for thoracic aneurysm. JAMA 1944;126:1139.
[32] Carrel A. Ultimate results of aortic transplantation, J Exp Med. 1912;15:389–92.
[33] Carrel A. Heterotransplantation of blood vessels preserved in cold storage, J Exp Med. 1907;9:226–8.
[34] Guthrie CC. Heterotransplantation of blood vessels, Am J Physiol 1907;19:482–7.
[35] Dubost C. First successful resection of an aneurysm of an abdominal aorta with restoration of the continuity by human arterial graft. World J Surg. 1982;6:256.
[36] Voorhees AB. The origin of the permeable arterial prosthesis: a personal reminiscence. Surg Rounds. 1988;2:79-84.
[37] Voorhees AB. The development of arterial prostheses: a personal view. Arch Surg. 1985;120:289-95.
[38] Voorhees AB. How it all began. In: Sawyer PN, Kaplitt MJ, eds. Vascular Grafts. New York: Appleton-Century-Crofts 1978;3-4.
[39] Blakemore AH, Voorhees AB Jr. The use of tubes constructed from vinyon “N” cloth in bridging arterial defects – experimental and clinical. Ann Surg. 1954;140:324.
[40] Schumacker HB, Muhm HY. Arterial suture techniques and grafts: past, present, and future. Surgery. 1969;66:419-33.
[41] Lidman H, Faibisoff B, Daniel RK. Expanded Polytetrafluoroethene as a microvascular stent graft: An experimental study. Journal of Microsurgery. 1980;1:447-56.
[42] Cooley DA, DeBakey ME. Surgical considerations of intrathoracic aneurysms of the aorta and great vessels. Ann Surg. 1952;135:660–80.
[43] DeBakey ME. Successful resection of aneurysm of distal aortic arch and replacement by graft. J Am Med Assoc. 1954;155:1398–403.
[44] Argenteri A. The recent history of aortic surgery from 1950 to the present. In: Chiesa R, Melissano G, Coselli JS et al. Aortic surgery and anaesthesia “How to do it” 3rd Ed. Milan: Editrice San Raffaele 2008;200-25.
[45] Green Sy, LeMaire SA, Coselli JS. History of aortic surgery in Houston. In: Chiesa R, Melissano G, Coselli JS et al. Aortic surgery and anaesthesia “How to do it” 3rd Ed. Milan: Editrice San Raffaele. 2008;39-73.
[46] Edler I, Hertz CH. The use of ultrasonic reflectoscope for the continuous recording of the movements of heart walls. Clin Physiol Funct I. 2004;24:118–36.
[47] Donald I. The investigation of abdominal masses by pulsed ultrasound. Lancet. 1958;271(7032):1188-95.
[48] Thompson SG, Ashton HA, Gao L, Scott RAP. Screening men for abdominal aortic aneurysm: 10 year mortality and cost effectiveness results from the randomised Multicentre Aneurysm Screening Study. BMJ. 2009;338:2307.
[49] Filipovic M, Goldacre MJ, Roberts SE, Yeates D, Duncan ME, Cook-Mozaffari P. Trends in mortality and hospital admission rates for abdominal aortic aneurysm in England and Wales, 1979-1999. Br J Surg. 2005;92(8):968-75.
[50] Kevles BH. Naked to the Bone: Medical Imaging in the Twentieth Century. New Brunswick, NJ: Rutgers University Press 1997;242-3.
[51] Ascher E, Veith FJ, Gloviczki P, Kent KC, Lawrence PF, Calligaro KD et al. Haimovici’s vascular surgery. 6th ed. Blackwell Publishing Ltd. 2012;86-92.
[52] Bown MJ, Sutton AJ, Bell PRF, Sayers RD. A meta-analysis of 50 years of ruptured abdominal aortic aneurysm repair. British Journal of Surgery. 2002;89(6):714-30.
[53] Criado FJ. The EVAR Landscape in 2011: A status report on AAA therapy. Endovascular Today. 2011;3:40-58.
[54] Criado FJ. EVAR at 20: The unfolding of a revolutionary new technique that changed everything. J Endovasc Ther. 2010;17:789-96.
[55] Hendriks JM, van Dijk LC, van Sambeek MRHM. Indications for endovascular abdominal aortic aneurysms treatment. Interventional Cardiology. 2006;1(1):63-64.
[56] Volodos NL, Shekhanin VE, Karpovich IP, et al. A self-fixing synthetic blood vessel endoprosthesis (in Russian). Vestn Khir Im I I Grek. 1986;137:123-5.
[57] Lazarus HM. Intraluminal graft device, system and method. US patent 4,787,899 1988.
[58] Balko A, Piasecki GJ, Shah DM, et al. Transluminal placement of intraluminal polyurethane prosthesis for abdominal aortic aneurysm. J Surg Res. 1986;40:305-9.
[59] Lawrence-Brown M, Hartley D, MacSweeney ST et al. The Perth endoluminal bifurcated graft system—development and early experience. Cardiovasc Surg. 1996;4:706–12.
[60] White GH, Yu W, May J, Stephen MS, Waugh RC. A new nonstented balloon-expandable graft for straight or bifurcated endoluminal bypass. J Endovasc Surg. 1994;1:16-24.
[61] May J, White GH, Yu W, Waugh RC, McGahan T, Stephen MS, Harris JP. Endoluminal grafting of abdominal aortic aneurysms: cause of failure and their prevention. J Endovasc Surg. 1994;1:44-52.
[62] May J, White GH, Yu W, Ly CN, Waugh R, Stephen MS, Arulchelvam M, Harris JP. Concurrent comparison of endoluminal versus open repair in the treatment of abdominal aortic aneurysms: analysis of 303 patients by life table method. J Vasc Surg. 1998;27(2):213-20.
[63] Abul-Khouodud OR. Intervention for Peripheral Vascular Disease. Endovascular AAA Repair: Conduit Challenges. J Invasive Cardiol. 2000;12(4).
[64] West CA, Noel AA, Bower TC, et al. Factors affecting outcomes of open surgical repair of pararenal aortic aneurysms: A 10-year experience. J Vasc Surg. 2006;43:921–7.
[65] The EVAR trial participants. EVAR-2 (EndoVascular Aneurysm Repair): EVAR in patients unfit for open repair. Lancet. 2005;365:2187-92.


Is there a role for end-of-life care pathways for patients in the home setting who are supported with community palliative care services?

The concept of a “good death” has developed immensely over the past few decades and we now recognise the important role of palliative care services in healthcare for the dying, our most vulnerable population. [1-3] In palliative care, end-of-life care pathways have been developed to transfer the gold standard hospice model of care for the dying to other settings, addressing the physical, psychosocial and practical issues surrounding death. [1,4] Currently, these frameworks are used in hospitals and residential aged-care facilities across Australia. [1] However, there is great potential for these pathways to be introduced into the home setting with support from community palliative care services. This could help facilitate a good death for these patients in the comfort of their own home, and also support their families through the grieving process.

Although there is no one definition of a “good death”, many studies have examined factors considered important at the end-of-life by patients and their families. Current literature acknowledges that terminally ill patients highly value adequate pain and symptom management, avoidance of prolongation of death, preparation for end-of-life, relieving the burden imposed on their loved ones, spirituality, and strengthening relationships with health professionals through acknowledgement of imminent death. [2] Interestingly, the Steinhauser study noted a substantial disparity in views on spirituality between physicians and patients. [3] Physicians were found to rank good symptom control as most important, whilst patients considered spiritual issues to hold equal significance. These studies highlight the individual nature of end-of-life care, which reflects why the holistic approach of palliative care can improve the quality of care provided.

It is recognised that patients with life-limiting illnesses have complex needs that often require a multidisciplinary approach with multiple care providers. [1] However, an increased number of team members also creates its own challenges, and despite the best intentions, care can often become fragmented due to poor interdisciplinary communication. [5] This can lead to substandard end-of-life care with patients suffering prolonged and painful deaths, and receiving unwanted, expensive and invasive care, as demonstrated by the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). [6] Temel et al. also demonstrated that palliative care can improve the documentation of advanced care directives. [7] For terminally ill patients, this is essential in clarifying and enabling patients’ wishes regarding end-of-life to be respected.

In 2010, Temel et al. conducted a randomised controlled trial in patients with newly diagnosed metastatic non-small-cell lung cancer, comparing palliative care plus standard oncologic therapy with standard oncologic therapy alone. [7] Results demonstrated that palliative care intervention improves quality of life and reduces rates of depression, consistent with existing literature. [7] Furthermore, despite receiving less aggressive end-of-life care, the additional early involvement of palliative care services resulted in a significant prolongation of life, averaging 2.7 months (p = 0.02). [7] This 30% improved survival benefit is equivalent to that achieved with a response to standard chemotherapy regimens, which has profound significance for patients with metastatic disease. [7] This study thereby validates the benefits of early palliative care intervention in oncology patients. In addition, early palliative intervention encourages advance care planning, allowing treating teams to elicit and acknowledge patient preferences regarding end-of-life care.

Many physicians find it difficult to discuss poor prognoses with patients, potentially leaving patients and their families unaware of their terminal condition, despite death being anticipated by the treating team. [1,4] Many health care professionals are uncomfortable discussing death and dying, citing lack of training and fear of upsetting the patient. [8] Regardless, patients are entitled to be informed and supported through this difficult time. In addition, terminal patients and their caregivers are often neglected in decisions about their care, [9] despite their fundamental legal and ethical right to be involved, and studies indicate that they often want to be included in such discussions. [1,10,11] With the multitude of patient values and preferences for care, it can often be difficult to standardise the care provided. End-of-life care pathways encourage discussion of prognosis, facilitating communication that allows patients’ needs to be identified and addressed systematically and collaboratively. [1]

End-of-life care pathways provide a systematic approach and a standardised level of care for patients in the terminal phase of their illness. [1] This framework includes documentation of discussion with the patient and carers of the multidisciplinary consensus that death is now imminent and life-prolonging treatment is futile, and also provides management strategies to address the individual needs of the dying. There is limited evidence to support the use of end-of-life care pathways; however, we cannot discount the substantial anecdotal benefits. [1,12] The lack of high-quality studies indicates a need for further research. [1,12] When used in conjunction with clinical judgment, these pathways can lead to benefits such as improved symptom control, earlier acknowledgement of terminal prognosis by the patient and family, prescription of medications for end-of-life care, and aiding the grieving process for relatives. [1,12,13] As such, end-of-life care pathways are highly regarded in palliative care, transferring the benchmarked hospice model of care of the dying into other settings, [14] and have been widely implemented nationally and internationally. [1]

The most recognised and commonly used end-of-life care pathway is the Liverpool Care Pathway (LCP), which was developed in the United Kingdom to transfer the hospice model of care for the dying to other care settings. [13,15] It has been implemented into hospices, hospitals and aged care facilities, and addresses the physical, psychosocial and spiritual needs of these patients. [1,13,15] In 2008, Verbeek et al. examined the effect of the LCP pre- and post-implementation on patients from hospital, aged care and home settings. [13] Results demonstrated improved documentation and reduced symptom burden as assessed by nurses and relatives, in comparison with the baseline period. [13] Although increased documentation does not necessarily equate to better care, high-quality medical records are essential to facilitate communication between team members and ensure quality care is provided. In this study, staff also reported that they felt the LCP provided a structure to patient care, assisted the anticipation of problems, and promoted proactive management of patient comfort. [13] The LCP has significantly increased the awareness of good terminal care, and has provided a model for the end-of-life care pathways currently in use in hospitals and institutions throughout Australia. [1,4]

Community palliative care services support terminally ill patients at home in order to retain a high quality of life. Recognising the holistic principles of palliative care, these multidisciplinary teams provide medical and nursing care, counselling, spiritual support and welfare supports. In the Brumley trial, which evaluated an in-home palliative care intervention with a multidisciplinary team for homebound terminally ill patients, the intervention group had greater satisfaction with care, were less likely to visit the emergency department, and were more likely to die in the comfort of their own home. [16] These results suggest that the community palliative care team provided a high standard of care in which symptoms were well managed and did not require more aggressive intervention. This prevented unnecessary emergency presentations and potential distress for the patient and family, and allowed better use of resources. This study demonstrates that community palliative care services can significantly improve the quality of care for patients living at home with life-limiting illnesses; however, there is still scope for improvement in the current healthcare system.

End-of-life care pathways are regarded as best practice in guiding care for patients where death is imminent. [1] In Australia, a number of these frameworks have been implemented in hospitals and aged-care facilities, demonstrating an improvement in the quality of care in these settings. However, there are also many terminally ill patients who choose to reside in the comfort of their own home, supported by community palliative care services. End-of-life care pathways support a high standard of care, which should be available to all patients, irrespective of where they choose to die. As such, there may be a role for end-of-life care pathways in the home setting, supported by community palliative care services. Introducing an already implemented local end-of-life care pathway into the community has great potential to reap similar benefits. Initially, these frameworks would be implemented by the community palliative care team; however, caregivers could be educated and empowered to participate in the ongoing care. This could be a useful means to facilitate communication between treating team members and family, and also empower the patient and family to become more involved in their care.

The potential benefits of implementing end-of-life care pathways into community palliative care services include those currently demonstrated in the hospital and aged-care settings; however, there are potentially further positive effects. By introducing these frameworks into the homes of terminally ill patients, caregivers can also be encouraged to take a more active role in the care of their loved ones. This indirect education for the patient and family can provide a sense of empowerment and assist them to make informed decisions. Additional potential benefits of these pathways include a reduction in the number of hospital admissions and emergency department presentations, which would reduce the pressures on our already overburdened acute care services. Empowered family and carers could also assist with monitoring, providing regular updates to the community palliative care team, which could lead to earlier recognition of when more specialised care is required. The documentation within the pathways could also allow for a smoother transition to hospices if required, and prevent unnecessary prolongation of death. This may translate to prevention of significant emotional distress for the patient and family in an already difficult time, and promote more effective use of limited hospital resources. Integrating end-of-life care pathways into community palliative care services has many potential benefits for patients at home with terminal illnesses, and should be considered as an option to improve the delivery of care.

Palliative care can significantly improve the quality of care provided to patients in the terminal phase, which can be guided by end-of-life care pathways. Evidence validates that these pathways encourage a multidisciplinary change in practice that facilitates a “good death”, and supports the family through the bereavement period. In the community, this framework has the potential to empower patients and their caregivers, and assist them to make informed decisions regarding their end-of-life care, thereby preventing unwanted aggressive intervention and unnecessary prolongation of death. However, there is a need for further high-quality studies to validate the anecdotal benefits of these pathways, with potential for a randomised controlled trial investigating the use of end-of-life care pathways in the home setting in Australia. In conclusion, the introduction of end-of-life care pathways into community palliative care services has great potential, particularly if supported and used in conjunction with specialist palliative care teams.

Acknowledgements

I would like to acknowledge Dr Leeroy William from McCulloch House, Monash Medical Centre for his support in developing this proposal, and Andrew Yap for his editorial assistance.

Conflicts of interest

None declared.

Correspondence

A Vo: amanda.vo@southernhealth.org.au


Immunology beyond a textbook: Psychoneuroimmunology and its clinical relevance for psychological stress and depression

Our medical studies encompass many areas of medical science, and immunology is an example of just one. Traditionally, we have been taught that our immune system exists to protect us from pathogens; however, in recent years, this romantic view of the immune system has been challenged and it is now well recognised that it is also involved in whole-body homeostasis and cross-talks with other regulating systems of the body. This is the notion of psychoneuroimmunology (PNI). This text will briefly review the current understanding of PNI and how it features prominently in clinical practice as part of the ‘whole person’ model of patient care, especially in terms of stress and depression. With this in mind, PNI is an emerging medical discipline that warrants integration and consideration in future medical care and practice.

Introduction

At first glance, immunology may be viewed by some as an esoteric medical science that simply provides us with the molecular and cellular mechanisms of disease and immunity. It is a subject that all medical students have to face and no doubt can find quite challenging as well. Yet, in recent times, its role in helping us understand mental health and why individuals behave in certain ways has become increasingly appreciated. [1,2] The novel area of study that attempts to explain this intricate and convoluted relationship between the mind, behaviour, nervous system, endocrine system and finally the immune system is, quite appropriately, termed psychoneuroimmunology (PNI) or sometimes psychoendoneuroimmunology. [3] This was probably something that was never mentioned during our studies because it is quite radical and somewhat ambiguous. So what, then, is PNI all about and why is it important?

Many of us may have come across patients that epitomise the association between mental disturbances and physical manifestations of disease. Indeed, it is this biopsychosocial model that is well documented and instilled into the minds of medical students. [4-7] The mechanism behind this, although something best left to science, is nonetheless interesting to know and appreciate as medical students. This is PNI.

The basic science of psychoneuroimmunology

History

The notion that behaviour and the manifestation of disease were linked was probably first raised by Galen (129-199 AD), who noticed that melancholic women were more likely to develop breast cancer than sanguine women. [8] The modern push for PNI probably began in the 1920s to 1930s, when Metal’nikov and colleagues conducted several preliminary experiments in various animals showing that the immune system can be classically conditioned. [9] New interest in this area was established by Solomon et al. who, in 1964, coined the term ‘psychoimmunology’ [10]; however, the concept of PNI was firmly established by the American behavioural scientist Dr Robert Ader in his revolutionary 1981 book, ‘Psychoneuroimmunology.’ This book described the dynamic molecular and clinical manifestations of PNI through various early experiments. [11,12] In one initial experiment, Ader and fellow researchers successfully demonstrated that the immune system can be conditioned, similarly to Metal’nikov. After pairing saccharin with the immunosuppressive agent cyclophosphamide and administering this to some rats, they found that saccharin administration alone, at a later date, was able to induce an immunosuppressive state marked by reduced titres of haemagglutinating antibodies to injected sheep erythrocytes. [13]

The authors postulated that non-specific stress associated with the conditioning process would have elicited such a result. By extension and based on earlier research, [14] the authors believed psychological, emotional or physical stress probably act through hypothalamic pathways to induce immunomodulation which manifests itself in various ways. [13]

Stress, depression and PNI

A prominent aspect of PNI focuses on the bi-directional relationship between the immune system and stress and depression, where one affects the other. [4,15] The precise mechanisms are complicated but are ultimately characterised by the stress-induced dysregulation (either activation or depression) of the hypothalamic-pituitary-adrenal (HPA) and sympathetic-adrenal-medullary (SAM) axes. [16] Because of the pleiotropic effects of these hormones, they can induce a dysfunctioning immune system, partly through modulating the concentration of certain cytokines in the blood. [15] Endocrine and autonomic pathways upregulate pro-inflammatory cytokines (such as interleukin (IL)-1β, IL-6 and tumour necrosis factor-α (TNF-α)) that can exert their effects on the brain through direct (i.e., circumventricular organs) and indirect access ports (via afferent nerve fibres). [17,18] Such pro-inflammatory cytokines therefore stimulate and activate the HPA axis, leading to the rapid production of corticotropin-releasing hormone. [19-21] Eventually, cortisol is produced which, in turn, suppresses the pro-inflammatory cytokines. Interestingly, receptors for these cytokines have also been found on the pituitary and adrenal glands, enabling neuroendocrine signals to be integrated at all three levels of the HPA axis. [21,22] Cortisol also has significant effects on mood, behaviour and cognition. In the short term it may be beneficial, making an animal more alert and responsive; however, prolonged elevation may give rise to impaired cognition, fatigue and apathy. [23]

In the brain, an active role is played by the once-thought insignificant glial cells, which participate in the so-called tripartite synapse (glial cell plus pre- and post-synaptic neurons). [24] It is this unit that is fundamental to much of the central nervous system activity of the PNI system. Pro-inflammatory cytokines like interferon (IFN)-α and IL-1β released from peripheral and central (microglia and astrocytes) sources can alter dopaminergic signalling, basal ganglial circuitry, hippocampal functioning and so on. Consequently, this induces behavioural changes such as anhedonia and memory impairment. [18,25] Since IFN-α receptors have been found on microglia in the brain, [26] IFN-α likely also causes further local inflammation and further disruption of dopaminergic signals. Microglia excessively activated by a range of inflammatory cytokines can therefore cause direct neurotoxicity and neuropathology. [27] Additionally, these cytokines can induce activity of the indoleamine 2,3-dioxygenase enzyme (found in astrocytes and microglia), which metabolises the serotonin precursor tryptophan. The result is a reduction in serotonin and the production of various products, including quinolinic acid, an NMDA (N-methyl-D-aspartate) receptor agonist that leads to excess glutamate and neurodegeneration. These mechanisms are postulated to contribute to the pathogenesis of depression; however, the precise pathways are yet to be fully elucidated. [28-30]

Recent research into behavioural epigenetics has also provided an additional interesting link whereby stressors to the psychosocial environment can modulate gene expression within the neuroimmune, physiological and behavioural internal environments. This may account for the long-term aforementioned changes in immune function. [31]

Depression has also been shown to activate the HPA and SAM axes through inflammatory processes, [28,32] which in turn exacerbates any pre-existing depressive behaviours. [33] This inflammatory theory of depression sheds light on the complicated pathophysiology of depression, adding to the already well-characterised theory of serotonergic neurotransmission deficiency. [28,33] Interestingly, pro-inflammatory cytokines have been shown to modulate serotonergic activity in the brain as well, [34,35] which provides further insight into this complex disorder. It has been questioned whether this relationship has evolutionary roots, whereby the body diverts energy resources away from other areas to the immune system to promote anti-pathogenic activity during stress and depression. [17] For instance, with the threat of an injury or wound in an acute situation (the stressor), cortisol (a natural immunosuppressant) would be released via the HPA axis. This aids energy conservation which in turn, and paradoxically, attempts to minimise the unhelpful effect of immunosuppression in times of infection risk. [17] Depressive behaviour such as lethargy has also been said to stem from the need to conserve energy to promote fever and inflammation. [2] Ultimately, the evolutionary aspects of PNI are under current speculation and investigation to elicit the precise links and relationships. [36]

The alterations of the immune system in stress and depression have implications for other areas of medicine as well. Though conclusive clinical experiments are lacking, it has been strongly hypothesised that this imbalanced immune state can contribute to a plethora of medical ailments. Depression, characterised by a general pro-inflammatory state with oxidative and nitrosative stress, [33,37] can contribute to poor wound healing and exacerbate chronic infections and pain. [38,39] Stress similarly entails a dysregulated immune system and may contribute to the aforementioned conditions plus cardiovascular disease and minor infectious diseases such as the common cold. [40-44] The link with cancer is somewhat more controversial, but both stress and depression may, in some way, predispose to its development through numerous mechanisms such as reduced immune surveillance by immune cells (cytotoxic T cells and natural killer cells), general inflammation and genomic instability. [45,46]

Highlighting the bidirectionality of the PNI paradigm, secondary inflammation caused by a myriad of neurological diseases (e.g., Huntington’s disease, Alzheimer’s disease) and local and systemic disorders (e.g., systemic lupus erythematosus, stroke, cardiovascular disease and diabetes mellitus) may very well contribute to the pathogenesis of co-existing depression. [47] This may account for the close association of depression and such diseases. Underlying neurochemical changes have been observed in many of these diseases—especially the neurological disease examples—and it has been suggested that depression vulnerability is proportional to how well one can ‘adapt’ to said neurochemical imbalances. [48,49]

From an immunophysiological point of view, these links certainly make sense; but it is important to note that there could be other confounding factors, such as increased alcohol consumption and other behaviours that accompany stress and depression, that can contribute to pathology. [50] The question therefore remains as to how large a role the mind plays in the pathogenesis of physical ailments. Figure 1 summarises the general PNI model as it relates to stress and depression.

Implications

Having explored the discipline of PNI, what is its importance for clinical practice? Given the links between stress and depression, altered immunity, other ill effects and behaviour, [3,12] it seems fitting that if we can address a patient’s underlying stress or depression, we may be able to improve the course of their illness or prevent, to a certain extent, the onset of certain diseases by correcting immune system dysregulation. [43]

Simply acknowledging the role of stress and depression in the pathogenesis, maintenance of and susceptibility to disease is certainly not enough, and healthcare professionals should consider the mental state of every patient who presents before them. It is fortunate, then, that a myriad of simple stress-management strategies can be employed to improve patients’ mental welfare, depending on their individual circumstances. Such strategies include various relaxation techniques, meditation, tai chi, hypnosis and mindfulness practice. These have, importantly, proven cost-effective and promote self-care and self-efficacy. [51,52]

As an example, mindfulness has received considerable attention for its role in alleviating stress and depression. [52] Defined as increased awareness of and attention to present, moment-to-moment thoughts and experiences, mindfulness therapy has shown remarkable efficacy in promoting positive mental states and quality of life. [52-54] This is particularly important in this age of chronic diseases and their unwelcome psychological consequences. [54] Furthermore, and in light of the discussion above on PNI, there is evidence that mindfulness practice induces physiological responses in brain and immune function. [55,56] This suggests that its benefits are mediated, at least in part, through positive immunological alterations that modulate disease processes.

With the growing understanding of the cellular and molecular mechanisms behind stress, depression and other similar psychiatric disorders, a host of novel pharmacological interventions targeting the previously discussed biological pathways are actively being researched. Most notable is the proposed role of anti-inflammatories in ameliorating such conditions where patients present in an increased inflammatory state. This is largely based on experimental work in which antagonists to pro-inflammatory cytokines and/or their receptors improve sickness behaviours in animals. [17] As an example, the cholesterol-lowering statins have been found to have intrinsic anti-inflammatory and antioxidant properties. In a study of patients taking statins for cardiovascular disease, statins were found to have a substantial protective effect on the risk of developing depression. This suggests that the drug acts, at least in part, by decreasing the systemic inflammatory and oxidative processes that characterise depression. [57] Other drugs being researched aim to tackle additional pathways such as those involving neurotransmitters and their receptors.

Within the neuroendocrine arm of PNI, current research is looking at ways to reverse HPA axis activation. [20] Some tested drugs that act on specific parts of the HPA axis show promise; however, a major problem is tailoring the correct drug to the correct patient, for not all patients present with the same neuroendocrine profile. [58,59] Neuroendocrine manipulation can also be used to treat, or act as an adjunct to the treatment of, non-HPA axis-mediated diseases. For example, administration of melatonin and IL-2 was able to increase survival time in patients with certain solid tumours. [60] Needless to say, a great amount of further research is warranted to test and understand possible pharmaceutical agents.

Discussion and Conclusion

The exciting and revolutionary field of PNI has now provided us with the internal links between all the major regulating systems of the human body. The complex interactions that take place are, indeed, a tribute to the complexity of our design, and provide a basis for how our mind and behaviour can influence our physical health. As a result, serious stressors, be they emotional, mental or physical, can wreak havoc on our delicate internal environment and predispose to physical ailments, which can further exacerbate the inciting stressors and our mental state. For psychological stress or depression, it seems appropriate that if healthcare professionals can ameliorate their severity, they may be able to further improve the physical health of an individual. How much so is a matter of debate and further investigation. Conversely, as demonstrated by the bi-directionality of the PNI model, addressing or ‘fixing’ the organic pathology may be conducive to patients’ mental state.

Whilst clinical approaches have been sharply juxtaposed with a very theoretical and scientific review of PNI, this has been done deliberately to demonstrate how mind-body therapies can exert their physical benefits. Accordingly, valued mind-body therapies deserve as much attention as the scientific study of molecular pharmacology. It is also important to note that even these two approaches (pharmacology and mind-body therapies) are almost certainly the tip of the iceberg; there is certainly a vast amount more to be explored in our therapeutic approach to medical conditions. For example, how does the practitioner-patient relationship fit into this grand scheme of things, and how much of a role does it play? No doubt a decent part. Furthermore, whilst the PNI framework provides good foundations with which to explain, at a basic level, the mechanisms behind the development of stress, depression and associated ailments, further insight is needed into their biological basis. For example, a symphony of intricate factors (such as the up-regulation of inflammation-induced enzymes, neurotransmitter changes, dysfunction of intracellular signalling, induced autoimmune activity, neurodegeneration and decreased serum levels of antioxidants and zinc) underlies the signs and symptoms of depression. [61,62] Thus, the complex pathogenesis of psychological stress and depression calls for further clinical and scientific research to unravel its mysteries. Nevertheless, with a sound basis behind mindfulness, other similar mind-body therapies and novel pharmacological approaches, it seems suitable for these to be further integrated into primary care [54] and other areas of medicine as adjuvants to current treatments. If we can achieve this, then medicine will undoubtedly have more potent tools in its armamentarium of strategies to address and alleviate the growing burden of chronic disease.

Acknowledgements

My thanks go to Dr E Warnecke and Prof S Pridmore for their support.

Conflicts of interest

None declared.

Correspondence

A Lee: adrian.lee@utas.edu.au


Graded exposure to neurophobia: Stopping it from affecting another generation of students

Neurophobia

Neurophobia has probably afflicted you at some stage during your medical school training, whether figuring out how to correlate signs elicited on examination with a likely diagnosis, or deciphering which tract has decussated at a particular level in the neuraxis. The disease definition of neurophobia as the ‘fear of neural sciences and clinical neurology’ is a testament to its influence, affecting up to 50% of students and junior doctors at least once in their lifetime. [1] The severity of the condition ranges from simple dislike or avoidance of neurology to sub-par clinical assessment of patients with a neurological complaint. Neurophobia is often compounded by a crippling lack of confidence in approaching and understanding basic neurological concepts.

According to the World Health Organisation, neurological conditions contribute to about 6.3% of the global health burden and account for as much as twelve percent of global mortality. [2] Given these figures, neurophobia persisting into the postgraduate years may adversely influence the treatment received by the significant proportion of patients who present with neurological complaints. This article will explore why neurophobia exists and some strategies for remedying it, from both a student and a teaching perspective.

Perceptions of neurology

One factor contributing to the existence of neurophobia is the perception of neurology within the medical community. The classic stereotype was vividly depicted by the editor of the British Medical Journal: ‘the neurologist is one of the great archetypes: a brilliant, forgetful man with a bulging cranium….who….talks with ease about bits of the brain you’d forgotten existed, adores diagnosis and rare syndromes, and – most importantly – never bothers about treatment.’ [3] Talley and O’Connor’s description of the neurologist, identified by the hatpins kept in their expensive suits and by the keys of an expensive motor car used to elicit plantar reflexes, further solidifies the mythology of the neurologist for another generation of Australian medical students. [4] Some have even proposed that neurologists thrive in a specialty known for its intellectual pursuits and exclusivity – a specialty where ‘only young Einsteins need apply.’ [5] Unfortunately, these stereotypes may only serve to perpetuate the reputation of neurology as a difficult specialty, complex and full of rare diagnoses (Figure 1).

However, stereotypes are rarely accurate, and an important question is what students really think about neurology. Several questionnaires have put this question to medical students across various countries, and the results are strikingly similar in theme. Neurology is considered by students to be the most difficult of the internal medicine specialties. Not surprisingly, it was also the specialty that medical students perceived they had the least knowledge about and, understandably, were least confident in. [5-10] Such sentiments are also shared amongst residents, junior doctors and general practitioners in the United Kingdom (UK) and United States (US). [8-10] The persistence of this phenomenon after medical school is supported by the number of intriguing and difficult case reports published in the prestigious Lancet journal: neurological cases (26%) appear at more than double the frequency of the next highest specialty, gastroenterology (12%), as a proportion of total case reports in the Lancet from 2003 to 2008. [11] However, this finding may also be explained by the fact that, in one survey, neurology was ranked as the most interesting of specialties by medical students, especially after they had completed a rotation within the specialty. [10] So whilst neurophobia exists, it is not outlandish to claim that many medical students do at least find neurology very interesting and would therefore actively seek to improve their understanding and knowledge.

The perception of neurological disease amongst students and the wider community can also be biased. Films such as The Diving Bell and the Butterfly (2007), an account of locked-in syndrome, not only capture the public imagination with a compelling portrayal of a peculiar neurological disease, but also highlight the absence of effective treatment following established cerebral infarction. Definitive cures for progressive illnesses, including multiple sclerosis and motor neuron disease, are also yet to be discovered, but the reality is that there are numerous effective treatments for a variety of neurological complaints and diseases. Some students will thus incorrectly perceive that the joy of neurology comes only from the challenge of arriving at a diagnosis rather than from providing useful treatment to patients.

 

Other causes of neurophobia

Apart from the perception of neurology, a number of other reasons for students’ neurophobia and the perceived difficulty of neurology have been identified. [5-10] Contributory factors to neurophobia can be divided into pre-clinical and clinical exposure factors. Preclinical factors include inadequate teaching in the pre-clinical years, insufficient knowledge of basic neuroanatomy and neuroscience, as well as difficulty in correlating the biomedical sciences with neurological cases (especially localising lesions). Clinical exposure factors include the length of the neurology examination, a perception of complex diagnoses stemming from inpatients being a biased sample of neurology patients, limited exposure to neurology and a paucity of bedside teaching.

Preventing neurophobia – student and teacher perspective

It is clearly much better to prevent neurophobia from occurring than to attempt to remedy it once it has become ingrained. Addressing pre-clinical exposure factors can prevent its development early during medical school. Media reports have quoted doctors and students bemoaning the lack of anatomy teaching contact hours in Australian medical courses. [12,13] Common sense dictates that the earlier and more frequent the exposure students have to basic neurology in their medical programs (for example, in the form of introductory sessions on the brain, spinal cord and cranial nerves that are reinforced later down the track), the greater the chance of preventing neurophobia in the clinical years. It goes without saying that a fundamental understanding of neuroanatomy is essential to localising lesions in neurology. Clinically relevant neurosciences should likewise receive emphasis in pre-clinical teaching.

Many neurology educators concur with students’ wishes for the teaching of basic science and clinical neurology to be effectively integrated. [14,15] This needs to be a priority. The problem- or case-based learning model adopted by many undergraduate programs should easily accommodate this integration, using carefully selected cases that can be reinforced with continual assessment via written or observed clinical exams. [15] Neuroanatomy can be a complex science to comprehend. Therefore, more clinically appropriate and student-focused rules or tricks should be taught to simplify the concepts. The ‘rule of fours’ for brainstem vascular syndromes is one delightful example of such a rule. [16] This kind of teaching ‘rule’ is more useful for students than memorising anatomical mnemonics, which favour rote learning over developing a deeper understanding of anatomical concepts. Given the reliance on increasingly sophisticated neuroimaging in clinical neurology, correlating clinical neuroimaging with the relevant anatomical concepts must also be included in the pre-clinical years.

During the clinical years, medical students ideally want more frequent and improved bedside teaching in smaller tutorial groups. The feasibility of smaller groups is beyond the scope of my article, but I will emphasise one style of bedside teaching that is most conducive to learning neurology. Bedside teaching allows the student to carry out important components of a clinical task under supervision, test their understanding during a clinical discussion and then reflect on possible areas of improvement during a debrief afterwards. This century-old style of bedside teaching, more recently characterised in educational theory as the application of an experiential learning cycle (ELC) framework, works for students today as it did for their teachers when they themselves were students of neurology. [17,18] The essential questions for a clinical teacher to ask during bedside tutorials are ‘why?’ and ‘so what?’ These inquiries gauge students’ deeper understanding of the interactions between an illness and its neuroanatomical correlations, rather than simply testing recall of isolated medical facts. [19]

There is also the issue of the inpatient population within the neurology ward. The overwhelming majority of patients are people who have experienced a stroke and, in large tertiary teaching hospitals, there will also be several patients with rare diagnoses and syndromes. This selection of patients is unrepresentative of the broad nature of neurological presentations and especially excludes patients whose conditions are non-acute and who are referred to outpatient clinics. Students are sometimes deterred by patients with rare syndromes that would not even be worth mentioning in a differential diagnosis list in an objective structured clinical examination. Therefore, more exposure to outpatient clinics would assist students to develop skills in recognising common neurological presentations. The learning and teaching of neurology at outpatients should, like bedside tutorials, follow the ELC model. [18] Outpatient clinics should be made mandatory within neurology rotations and, rather than making students passive observers, as is commonplace, students should be required to see the patient beforehand (especially if the patient is known to the neurologist and has signs or important learning points that can be garnered from their history). A separate clinic room for the student is necessary for this approach, with the neurologist then coming in after a period of time, allowing the student to present their findings followed by an interactive discussion of relevant concepts. Next, the consultant can conduct the consultation with the student observing. Following feedback, the student can think about what can be improved and plan the next consultation, as described in the ELC model (Figure 2). Time constraints make teaching difficult in outpatient settings. However, with this approach, while the student is seeing the known patient by themselves, the consultant can see other (less interesting) patients in the clinic, so in essence no time (apart from the teaching time) is lost. This inevitably means the student may miss seeing every second patient that comes to the clinic, but sacrificing quantity for quality of learning may be more beneficial in combating neurophobia in the long term.

Neurology associations have developed curricula in the US and UK as a “must-know” guideline for students and residents. [20, 21] The major benefits of these endeavours are to set a minimum standard across medical schools and provide clear objectives to which students can aspire. This helps develop recognition of common neurological presentations and the acquisition of essential clinical skills. It is for this reason that the development of a uniform neurology curriculum adopted through all medical school programs across Australia may also alleviate neurophobia.

The responsibility to engage with or seek learning opportunities in neurology, and so combat neurophobia, nevertheless lies with students. Students’ own motivation is vital in seeking improvement. It is often hardest to motivate students who find neurology boring and thus fail to engage with the subject. Nevertheless, interest often picks up once students feel more competent in the area. To help improve knowledge and skills in neurology, students can now use a variety of resources apart from textbooks and journals to complement their clinical neurology exposure. One growing trend in the US is the use of online learning and resources for neurology. A variety of online resources supplementing bedside learning and didactic teaching (e.g. lectures) is beneficial to students because of the active learning process they promote. This involves integrating the acquisition of information, placing it in context and then using it practically in patient encounters. [9] Therefore, medical schools should experiment with novel resources and teaching techniques that students will find useful – ‘virtual neurological patients’, video tutorials and neuroanatomy teaching computer programmes are all potential modern teaching tools. This new format of electronic teaching is one way to engage students who are otherwise uninterested in neurology. In conclusion, recognising the early signs of neurophobia is important for medical students and teachers alike. Once it is diagnosed, it is the responsibility of both student and teacher to minimise the burden of disease.

Acknowledgements

The author would like to thank May Wong for editing and providing helpful suggestions for an earlier draft of the article.

Conflicts of interest

None declared.

Correspondence

B Nham: benjaminsb.nham@gmail.com


The ethics of euthanasia

Introduction

The topic of euthanasia is one that is shrouded in much ethical debate and ambiguity. Various types of euthanasia are recognised, with active voluntary euthanasia, assisted suicide and physician-assisted suicide eliciting the most controversy. [1] Broadly speaking, these terms are used to describe the termination of a person’s life to end their suffering, usually through the administration of drugs. Euthanasia is currently illegal in all Australian states, reflecting the status quo of most countries, although there are a handful of countries and states where acts of euthanasia are legally permitted under certain conditions.

Advocates of euthanasia argue that people have a right to make their own decisions regarding death, and that euthanasia is intended to alleviate pain and suffering, hence the term “mercy killing.” They hold the view that active euthanasia is not morally worse than the withdrawal or withholding of medical treatment, and erroneously describe this practice as “passive euthanasia.” Such views are contested by opponents of euthanasia, who invoke the sanctity of human life and argue that euthanasia is equal to murder and, moreover, abuses autonomy and human rights. Furthermore, it is said that good palliative care can provide relief from suffering and, unlike euthanasia, should be the answer in modern medicine. This article will define several terms relating to euthanasia in order to frame the key arguments used by proponents and opponents of euthanasia. It will also outline the legal situation of euthanasia in Australia and abroad.

Defining euthanasia

The term “euthanasia” is derived from Greek, literally meaning “good death”. [1] Taken in its common usage however, euthanasia refers to the termination of a person’s life, to end their suffering, usually from an incurable or terminal condition. [1] It is for this reason that euthanasia was also coined the name “mercy killing”.

Various types of euthanasia are recognised. Active euthanasia refers to the deliberate act, usually through the intentional administration of lethal drugs, to end an incurably or terminally ill patient’s life. [2] On the other hand, supporters of euthanasia use another term, “passive euthanasia”, to describe the deliberate withholding or withdrawal of life-prolonging medical treatment resulting in the patient’s death. [2] Unsurprisingly, the term “passive euthanasia” has been described as a misnomer. In Australia and most countries around the world, this practice is not considered euthanasia at all. Indeed, according to Bartels and Otlowski, [2] withholding or withdrawing life-prolonging treatment, either at the request of the patient or when it is considered to be in the best interests of the patient, “has become an established part of medical practice and is relatively uncontroversial.”

Acts of euthanasia are further categorised as “voluntary”, “involuntary” and “non-voluntary.” Voluntary euthanasia refers to euthanasia performed at the request of the patient. [1] Involuntary euthanasia is the term used to describe the situation where euthanasia is performed when the patient does not request it, with the intent of relieving their suffering – which, in effect, amounts to murder. [3] Non-voluntary euthanasia relates to a situation where euthanasia is performed when the patient is incapable of consenting. [1] The term that is relevant to the euthanasia debate is “active voluntary euthanasia”, which collectively refers to the deliberate act to end an incurable or terminally ill patient’s life, usually through the administration of lethal drugs at his or her request. The main difference between active voluntary euthanasia and assisted suicide is that in assisted suicide and physician-assisted suicide, the patient performs the killing act. [2] Assisted suicide is when a person intentionally assists a patient, at their request, to terminate his or her life. [2] Physician-assisted suicide refers to a situation where a physician intentionally assists a patient, at their request, to end his or her life, for example, by the provision of information and drugs. [3]

Another concept that is linked to end-of-life decisions and should be differentiated from euthanasia is the doctrine of double effect. The doctrine of double effect excuses the death of the patient that may result, as a secondary effect, from an action taken with the primary intention of alleviating pain. [4] Supporters of euthanasia may describe this as indirect euthanasia, but again, this term should be discarded when considering the euthanasia debate. [3]

Legal situation of active voluntary euthanasia and assisted suicide

In Australia, active voluntary euthanasia, assisted suicide and physician-assisted suicide are illegal (see Table 1). [1] In general, across all Australian states and territories, any deliberate act resulting in the death of another person is defined as murder. [2] The prohibition of euthanasia and assisted suicide is established in the criminal legislation of each Australian state, as well as the common law in the common law states of New South Wales, South Australia and Victoria. [2]

The prohibition of euthanasia and assisted suicide in Australia has been the status quo for many years. However, there was a period when the Northern Territory permitted euthanasia and physician-assisted suicide under the Rights of the Terminally Ill Act 1995. The Act came into effect in 1996 and made the Northern Territory the first place in the world to legally permit active voluntary euthanasia and physician-assisted suicide. Under this Act, competent terminally ill adults aged 18 or over were able to request a physician's help in dying. The Act was short-lived, however: the Federal Government overturned it in 1997 with the Euthanasia Laws Act 1997, [1,2] which denied the territories the power to legislate to permit euthanasia or assisted suicide. [1] There have been a number of attempts in various Australian states, over the past decade and more recently, to legislate for euthanasia and assisted suicide, but all have failed to date, owing to a majority consensus against euthanasia. [1]

A number of countries and states around the world have permitted euthanasia and/or assisted suicide in some form; however, this is often only under specific conditions (see Table 2).

Arguments for and against euthanasia

Many arguments have been put forward for and against euthanasia; a few of the main ones are outlined below.

For

Rights-based argument

Advocates of euthanasia argue that a patient has the right to decide when and how they should die, based on the principles of autonomy and self-determination. [1,5] Autonomy is the concept that a patient has the right to make decisions relating to their own life, so long as those decisions cause no harm to others. [4] Advocates relate autonomy to an individual's right to control their own body, which they argue includes the right to decide how and when they will die. Furthermore, it is argued that our human rights include a right to make our own decisions and a right to a dignified death. [1]

Beneficence

It is said that relieving a patient of pain and suffering by performing euthanasia does more good than harm. [4] Advocates of euthanasia express the view that the fundamental moral values of society, compassion and mercy, require that no patient be allowed to suffer unbearably, and that mercy killing should therefore be permissible. [4]

The difference between active euthanasia and passive euthanasia

Supporters of euthanasia claim that active euthanasia is not morally worse than passive euthanasia – the withdrawal or withholding of medical treatment that results in a patient's death. On this view, active euthanasia should be permitted just as passive euthanasia is allowed.

James Rachels [12] is a well-known proponent of euthanasia who advocates this view. He argues that there is no moral difference between killing and letting die, since on a utilitarian account the intention is essentially the same. He illustrates the argument with two hypothetical scenarios. In the first, Smith stands to gain an inheritance should anything happen to his six-year-old cousin, and drowns the child in the bath. In the second, Jones likewise stands to inherit a fortune and intends to drown his cousin, but finds the child drowning by accident and simply lets him die. Callahan [9] points out that Rachels relies on a hypothetical case in which both parties are morally culpable, which undermines the support the example lends to Rachels' argument.

Another of Rachels' arguments is that active euthanasia is more humane than passive euthanasia, since a lethal injection is "quick and painless" whereas withdrawing treatment can result in "a relatively slow and painful death." [12]

Opponents of euthanasia argue that there is a clear moral distinction between actively terminating a patient’s life and withdrawing or withholding treatment which ends a patient’s life. Letting a patient die from an incurable disease may be seen as allowing the disease to be the natural cause of death without moral culpability. [5] Life-support treatment merely postpones death and when interventions are withdrawn, the patient’s death is caused by the underlying disease. [5]

Indeed, this view is strongly endorsed by the Australian Medical Association, which opposes active voluntary euthanasia and physician-assisted suicide, but does not consider the withdrawal or withholding of treatment that results in a patient's death to be euthanasia or physician-assisted suicide. [1]

Against

The sanctity of life

Central to the argument against euthanasia is society’s view of the sanctity of life, and this can have both a secular and a religious basis. [2] The underlying ethos is that human life must be respected and preserved. [1]

The Christian view sees life as a gift from God, who ought not to be offended by the taking of that life. [1] Similarly, the Islamic faith says that "it is the sole prerogative of God to bestow life and to cause death." [7] The withholding or withdrawal of treatment is permitted when it is futile, as this is seen as allowing the natural course of death. [7]

Euthanasia as murder

Society views an action whose primary intention is to kill another person as inherently wrong, even when the patient consents. [8] Callahan [9] describes the practice of active voluntary euthanasia as "consenting adult killing."

Abuse of autonomy and human rights

While advocates of euthanasia appeal to autonomy, it also features in the argument against euthanasia. Kant and Mill [3] held that the principle of autonomy forbids voluntarily ending the conditions necessary for autonomy, which is precisely what ending one's life would do.

It has also been argued that patients' requests for euthanasia are rarely autonomous, as most terminally ill patients may not be of sound or rational mind. [10]

Callahan [9] argues that the notion of self-determination requires that the right to lead our own lives is conditioned by the good of the community, and therefore we must consider risk of harm to the common good.

In relation to human rights, some critics of euthanasia argue that the act contravenes the "right to life". The Universal Declaration of Human Rights states that "Everyone has the right to life." [3] Right-to-life advocates dismiss claims that there is a corresponding right to die, on the basis that such a right would make suicide justifiable in virtually any case. [8]

The role of palliative care

It is often argued that the pain and suffering experienced by patients can be relieved by appropriate palliative care, rendering euthanasia unnecessary. [3] According to Norval and Gwyther, [4] "requests for euthanasia are rarely sustained after good palliative care is established."

The rights of vulnerable patients

If euthanasia were to become an accepted practice, it might give rise to situations that undermine the rights of vulnerable patients, [11] such as the coercion of patients receiving costly treatments into accepting euthanasia or physician-assisted suicide.

The doctor-patient relationship and the physician’s role

It is argued that active voluntary euthanasia and physician-assisted suicide undermine the doctor-patient relationship, destroying the trust and confidence built in such a relationship. A doctor's role is to help and save lives, not end them. Casting doctors in the role of administering euthanasia "would undermine and compromise the objectives of the medical profession." [1]

Conclusion

It can be seen that euthanasia is indeed a contentious issue, with the heart of the debate centring on active voluntary euthanasia and physician-assisted suicide. In Australia these practices are criminal offences, attracting murder or manslaughter charges under the criminal legislation and/or common law of the various states. Australia's prohibition and criminalisation of euthanasia and assisted suicide reflects the legal status quo in most other countries around the world; only a few countries and states have legalised acts of euthanasia and/or assisted suicide. The many arguments put forward for and against euthanasia, of which only a handful have been outlined here, provide a glimpse into the ethical debate and controversy surrounding the topic.

Conflicts of interest

None declared.

Correspondence

N Ebrahimi: nargus.e@hotmail.com

References

[1] Bartels L, Otlowski M. A right to die? Euthanasia and the law in Australia. J Law Med. 2010 Feb;17(4):532-55.

[2] Walsh D, Caraceni AT, Fainsinger R, Foley K, Glare P, Goh C, et al. Palliative medicine. 1st ed. Canada: Saunders; 2009. Chapter 22, Euthanasia and physician-assisted suicide; p.110-5.

[3] Goldman L, Schafer AI, editors. Goldman’s Cecil Medicine. 23rd ed. USA: Saunders; 2008. Chapter 2, Bioethics in the practice of medicine; p.4-9.

[4] Norval D, Gwyther E. Ethical decisions in end-of-life care. CME. 2003 May;21(5):267-72.

[5] Kerridge I, Lowe M, Stewart C. Ethics and law for the health professions. 3rd ed. New South Wales: Federation Press; 2009.

[6] Legemaate J. The Dutch euthanasia act and related issues. J Law Med. 2004 Feb;11(3):312-23.

[7] Bulow HH, Sprung CL, Reinhart K, Prayag S, Du B, Armaganidis A, et al. The world’s major religions’ points of view on end-of-life decisions in the intensive care unit. Intensive Care Med. 2008 Mar;34(3):423-30.

[8] Somerville MA. “Death talk”: debating euthanasia and physician-assisted suicide in Australia. Med J Aust. 2003 Feb 17;178(4):171-4.

[9] Callahan D. When self-determination runs amok. Hastings Cent Rep. 1992 Mar-Apr;22(2):52-5.

[10] Patterson R, George K. Euthanasia and assisted suicide: A liberal approach versus the traditional moral view. J Law Med. 2005 May;12(4):494-510.

[11] George RJ, Finlay IG, Jeffrey D. Legalised euthanasia will violate the rights of vulnerable patients. BMJ. 2005 Sep 24;331(7518):684-5.

[12] Rachels J. Active and passive euthanasia. N Engl J Med. 1975 Jan 9;292(2):78-80.


Reflections on an elective in Kenya

"In Africa, you do not view death from the auditorium of life, as a spectator, but from the edge of the stage, waiting only for your cue. You feel perishable, temporary, transient. You feel mortal. Maybe that is why you seem to live more vividly in Africa. The drama of life there is amplified by its constant proximity to death." – Peter Godwin. [1]

Figure 1. Baby hospitalised for suspected bacterial pneumonia.

Squeezing into our rusty matatu (bus), we handed over the fare to the conductor, who returned less change than expected. In response to our indignant questioning, he defiantly stated, "You are mzungu (white person) and mzungu is money." This was lesson one in a crash course we had inadvertently stumbled into: "Life in Kenya for the naïve tourist." More unsettling than being scammed in day-to-day life, however, was the rampant corruption in the hospital and university setting.

We completed our placement at Kenyatta National Hospital, the largest referral centre in Kenya, with 1,800 beds, 50 wards and 24 operating theatres. I was based within the paediatric ward and paediatric emergency department…


A week in the Intensive Care Unit: A life lesson in empathy

Empathy and the medical student – Practice makes perfect?

The observation of another person in a particular emotional state has been shown to activate a similar autonomic and somatic response in the observer without activating the entire pain matrix; it does not require conscious processing, yet can nonetheless be controlled or inhibited. [2] This effectively means that when we see someone in physical or emotional distress, we too experience at least some aspect of that suffering without it even needing to be at the forefront of our consciousness. As medical students we are constantly told to "practise" being empathetic towards patients and family members. What we are really practising is consciously processing this suffering we unknowingly share with these people in order to develop rapport with them (if not just to impress medical school examiners).

We are taught an almost automated response to this distress, comprising a myriad of body language cues and particular phrases, such as "I imagine this must be very difficult for you," to indicate to a patient that we are aware of the pain they are in. Surveys amongst critical care nurses have shown that gender, position, level of education and years of nursing experience have no significant relationship with a person's ability to show empathy. [1] Thus it could be said that empathy is less a skill that can be practised until perfect, and more a mindset that makes us as human as the people we treat…


Self-taught surgery using simulation technology

During my elective term in early 2010 at the Royal Free Hospital, London, I was presented with a fantastic opportunity: to learn how to perform a laparoscopic gastric bypass. The challenge was for me, a medical student and complete novice in laparoscopic surgery, to use the hospital's state-of-the-art screen-based simulation technology to become proficient in a specific operation within six weeks, in this rapidly advancing area of surgery.

My training was to be undertaken using the Simbionix LAP Mentor (Simbionix, Cleveland, Ohio, USA): an advanced piece of technology made up of a computer with simulation software and accompanying hardware, consisting of ports and instruments. The difference between this and a video game is the presence of haptic feedback; when you hit something or pull it, you feel the corresponding tension, making it a highly realistic representation of surgery…