
The risks and rewards of direct-to-consumer genetic tests: A primer for Australian medical students

Introduction
Over the last five years, a number of overseas companies, such as 23andMe, have begun to offer direct-to-consumer (DTC) genetic tests to estimate the probability of an individual developing various diseases. Although no Australian DTC companies exist due to regulations mandating the involvement of a health practitioner, Australian consumers are free to access overseas mail-order services. In theory, DTC testing carries huge potential for preventing the onset of disease by lifestyle modification and targeted surveillance programs. However, the current system of mail-order genetic testing raises serious concerns related to test quality, psychological impacts on users, and integration with the health system. There are also issues with protecting genetic privacy, and ethical concerns about making medical decisions based on pre-emptive knowledge. This paper presents an overview of the ethical, legal and practical issues of DTC testing in an Australian context. The paper concludes by proposing five conditions that will be key for harnessing the potential of DTC testing technology. These include improved clinical utility, updated anti-discrimination legislation, accessible genetic counselling, Therapeutic Goods Administration (TGA) monitoring, and mechanisms for identity verification. Based on these conditions, the current system of mail-order testing is unviable as a scalable medical model. For the long term, the most sustainable solution is integration of pre-symptomatic genetic testing with the healthcare system.

The rise of direct-to-consumer testing
“Be on the lookout now.” This is the slogan of 23andMe.com, a Californian biotechnology company that has been offering personal genetic testing since late 2007. Clients mail in a sample of their saliva and, for the humble fee of US$299, 23andMe will isolate their DNA and scan key regions to estimate that individual’s risk of developing different diseases. [1] Over 200 different diseases, in fact – everything from widespread, life-threatening conditions such as breast cancer and coronary artery disease, to the comparatively obscure, such as restless legs syndrome. Table 1 gives an example of the risk profile with which an individual may be faced.

Genetic testing has existed for decades as a diagnostic modality. Since the 1980s, clinicians have used genetic data to detect monogenic conditions such as cystic fibrosis and thalassaemia. [2] These studies were conducted in patients already showing symptoms of the disease in order to confirm a suspected diagnosis. 23andMe does something quite different: it takes asymptomatic people and calculates the risk of diseases emerging in the long term. It is a pre-emptive test rather than a diagnostic one.

23andMe is not the only service of its kind. There is a growing family of these direct-to-consumer (DTC) genetic tests: Navigenics (US), deCODEme (Iceland) and Genetic Health (UK) all offer a comprehensive disease screen for under AU$1000. There are currently no Australian companies that offer DTC disease scans due to regulations that require the involvement of a health professional. [3] However, Australian consumers are still free to access overseas services. Although no Australian retail figures exist, the global market for pre-symptomatic genetic testing is growing rapidly: 23andMe reported that 150,000 customers worldwide have used their test, [4] and in a recent European survey 64% of respondents said they would use a genetic test to detect possible future disease. [5] The Australian market for DTC testing, buoyed by increasing public awareness and decreasing product costs, is also set to grow.

Australian stakeholders have so far been divided on the issue of DTC testing. Certain parties have embraced it. In 2010 the Australian insurance company NIB offered 5000 of its customers a half-price genetic test through the US company Navigenics. [6] However, controversy arose over the fine-print at the end of NIB’s offer letter: “You may be required to disclose genetic test results, including any underlying health risks and conditions which the tests reveal, to life insurance or superannuation providers.” [6]

Most professional and regulatory bodies have expressed concern over the risks of DTC testing in an Australian context. In a 2012 paper, the Australian Medical Association argued that health-related genetic testing “should only be undertaken with a referral from a medical practitioner.” [7] It also highlighted issues surrounding the accreditation of overseas laboratories and the accuracy of the test results. Meanwhile, the Human Genetics Society of Australasia has stressed the importance of educating the public about the risks of DTC tests: “The best way to get rid of the market for DTC genetic testing may be to eliminate consumer demand through education … rather than driving the market underground or overseas.” [8]

Despite the deficiencies in the current model of mail-order services, personal genetic testing carries huge potential benefits from a healthcare perspective. The 2011 National Health and Medical Research Council (NHMRC) publication entitled The Clinical Utility of Personalised Medicine highlights some of the potential applications of genetic tests: targeting clinical screening programs based on disease risk, predicting drug susceptibility and adverse reactions and initiating preventative therapy before disease onset. [9] Genetic risk analysis has the potential to revolutionise preventative medicine in the 21st century.

The question is whether free-market DTC testing is a positive step towards an era of genetically-derived preventative therapy. Perhaps it creates more problems than it solves. What is the clinical utility of these tests? Is it responsible to give untrained individuals this kind of risk information? Could test results get into the wrong hands? These are the practical issues that will directly impact Australian medical professionals as genetic data infiltrates further into daily practice. This paper aims to grapple with some of these issues in an attempt to tease out how we as a healthcare community can best adapt to this new technology.

What is the clinical utility of these tests?
In 2010, a Cambridge University professor sent his own DNA off for analysis by two different DTC testing companies – 23andMe and deCODEme. He found that for approximately one third of the tested diseases, he was classed in a different risk category by the two companies. [10] A similar experiment carried out by a British journalist also revealed some major discrepancies. According to one test, his risk of myocardial infarction was 6% above average; according to the other, it was 18% below. [11]

This variability is a reflection of the current level of uncertainty about precisely how genes contribute to many diseases. Most diseases are polygenic, with an array of contributing environmental and lifestyle factors also playing a role in disease onset. [12] Hence, in all but a handful of diseases where robust genetic markers have been identified (such as the BRCA mutations for breast and ovarian cancers), these DTC test results are of questionable validity. An individual’s risk of Type 2 Diabetes Mellitus cannot simply be distilled down into a single numerical value.

Even for those diseases where isolated genetic markers have been identified in the literature, the findings are specific to the population studied. The majority of linkage analyses are performed in North American or European populations and may not be directly applicable to an Australasian context. Population bias aside, there is also a high level of ambiguity in how various genetic markers interact. As an example, consider two alleles that have each been shown to increase the risk of macular degeneration by 10%. It is not valid to say that the presence of both alleles signifies a 20% risk increase. This relates to the concept of epistasis in statistical genetics – the combined phenotypic effect of two alleles may differ from the sum of the individual effects. The algorithms currently used by DTC testing companies do not account for the complexity of gene-phenotype relationships.
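To make the arithmetic concrete, consider a purely hypothetical calculation (the two alleles and the 10% figures are illustrative only, not drawn from any real study) contrasting the naive additive reading with a multiplicative model that assumes the alleles act independently:

```latex
% Hypothetical two-allele example (illustrative figures only).
% Let $R_0$ be the baseline lifetime risk of macular degeneration.
\[
R_A = 1.10\,R_0, \qquad R_B = 1.10\,R_0
\]
% Naive additive combination (the invalid ``20\%'' reading):
\[
R_{AB}^{\mathrm{add}} = (1 + 0.10 + 0.10)\,R_0 = 1.20\,R_0
\]
% Multiplicative combination, assuming the alleles act independently:
\[
R_{AB}^{\mathrm{mult}} = 1.10 \times 1.10 \times R_0 = 1.21\,R_0
\]
```

Under epistasis the true combined effect need not match either model – it could be lower or far higher than both – and must instead be estimated empirically from genotype-phenotype data.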

For these reasons, the NHMRC states in its guide to the public about DTC testing: “At this time, studies have yet to prove that such susceptibility tests give accurate results to consumers.” [12] At best, current DTC testing is only valid as a rough guide to identify any risks that are particularly high or low. At worst, it is a blatantly misleading risk estimate based on insufficient molecular and clinical data. However, as our understanding of genetic markers improves, so too will the utility of these tests.

Can customers handle the results?
Assuming test quality improves, the next question is whether the average individual can deal with this type of risk information. What might the psychological consequences be if a healthy 25-year-old discovered that they had a 35% chance of developing ischaemic heart disease at some point in their life?

One risk is that people with an unfavourable prognosis may become discouraged from caring about their health at all, because they feel imprisoned within an immutable ‘genetic destiny.’ [13] As disease is written into their genes, they may as well surrender and accept it. Even someone with an average disease risk may feel an impending sense of doom when confronted with the vast array of diseases that may one day debilitate them. Could endless accounting of genetic risks overshadow the joy of living?

It is fair to say that DTC testing will only be useful if individuals have the right attitude – if they use this foreknowledge to take preventative measures. But do genetic test results really cause behaviour modification? A fascinating study in the New England Journal of Medicine in 2011 analysed the behavioural patterns of 2037 patients before and after a DTC genetic test. [14] They found no difference in exercise behaviour or dietary fat intake, suggesting that the genetic risk analysis did not translate into measurable lifestyle modification.

In order for individuals to interpret and use this genetic information effectively, they will need advice from healthcare professionals. Many of the DTC testing companies offer their own genetic counselling services; however, only 10% of clients reported accessing these. [15] The current position of the Australian Medical Association is that patients should consult a general practitioner when interpreting the results of a DTC genetic test. [7] However, a forced marriage between commercial sequencing companies and the healthcare system threatens to create problems of its own.

How should the health system adapt?
A 2011 study in North Carolina found that one in five family physicians had already been asked a question about pre-symptomatic genetic tests, yet 85% of the surveyed doctors reported that they were not sufficiently prepared to interpret test data. [16] In Australia, the healthcare system needs to adapt to this emerging trend. The question is – to what extent?

One controversial issue is whether it should be mandatory for doctors to be consulted when an individual orders a genetic test. Australia currently requires the involvement of a health practitioner to perform a disease-related genetic test. [3] Many countries, with the notable exception of the United States, share this stance. The German government ruled in early 2010 that pre-symptomatic testing could only be ordered by doctors trained in genetic counselling. [11] However, critics argue that mandatory doctor involvement would add medical legitimacy to a technology still in its infancy. [17] There is also an ethical argument that individuals should have the right to know about their own genes independent of the health system. [18]

Then there is the issue of how DTC genetic data should influence treatment. For example, should someone genetically predisposed to Type 2 Diabetes Mellitus be screened more regularly than others? Or, in a more extreme scenario: should those with more favourable genetic outlooks be prioritised for high-demand procedures such as transplant surgery?

These are serious ethical dilemmas; however, the medical community has had to deal with such issues before, whenever a new technology has arisen. With appropriate consultation from ethics committees (such as the NHMRC-affiliated Human Genetics Society of Australasia) and improved genetic literacy among healthcare professionals, it is possible to imagine a symbiotic partnership between the health system and free-market genetic testing.

How do we safeguard genetic privacy?
If DTC testing is indeed here to stay, a further concern is raised: how do we protect genetic privacy? Suppose a potential employer were to gain access to genetic data – the consequences could be disastrous for those with a poor prognosis. The outcome may be even worse if these data were made available to their insurance company.

In Australia, the disclosure of an individual’s genetic data by third parties (such as a genetic testing company) is tightly regulated under the Privacy Act 1988, which forbids its use for any purpose beyond that for which it was collected. [19] The only exception, based on the Privacy Legislation Amendment Act 2006, is for genetic data to be released to ‘genetic relatives’ in situations where disclosure could significantly benefit their health. [19]

In spite of the Privacy Act, individuals may still be forced to disclose their own test results to a third party such as an insurer or employer. There have been numerous reports of discrimination on the basis of genetic data in an Australian context. [20-22] The Australian Genetic Discrimination Project has been surveying the experiences of clients visiting clinical geneticists for ‘predictive or pre-symptomatic’ genetic testing since 1998. The pilot data, published in 2008, showed that 10% of the 951 subjects reported some negative treatment as a result of their genetic results. [23] Of the alleged incidents of discrimination, 42% were related to insurance and 5% to employment.

The use of genetic data by insurance companies is a complex issue. Although private health insurance in Australia is priced purely on basic demographic data, life and disability insurance is contingent on an individual’s prior medical record. This means that customers must disclose the results of any genetic testing (DTC or otherwise) they may have undergone. This presents a serious disincentive for purchasing a DTC test. The Australian Law Reform Commission, in its landmark report Essentially Yours: The Protection of Human Genetic Information in Australia, discusses the possibility of a two-tier system where insurance below a specific value would not require disclosure of any genetic information. [22] Sweden and the United Kingdom have both implemented such systems in the past; however, insurers have argued that the Australian insurance market is not sufficiently large to accommodate a two-tiered model. [22]

As genetic testing becomes increasingly widespread, a significant issue will be whether insurance companies should be allowed to request genetic data as a standard component of insurance applications. Currently, the Investment and Financial Services Association of Australia, which represents all major insurance companies, has stated that no individual will be forced to have a genetic test. [24] But how long will this moratorium last?

Suffice to say that privacy and anti-discrimination legislation needs to adapt to the times. There needs to be careful regulation of how these genomics companies use and protect sensitive data, and robust protection against genetic discrimination. Organisations such as the Australian Law Reform Commission and the Human Genetics Society of Australasia will continue to play an integral role in this process.

However, there are some fundamental issues that even legislation cannot fix. For example, with the current system of mail-order genetic testing, there is no way of verifying the identity of the person ordering the test. This means that someone could easily send in DNA that is not their own. In addition, an individual’s genetic results reveal a great deal about their close family members. Consequently, someone who does not wish to know their genetic risks might be forcibly confronted with this information through a relative’s results. We somehow need to construct a system that preserves an individual’s right of autonomy over their own genetic data.

What does the future hold?
DTC genetic testing is clearly a technology still in its infancy, with many problems yet to be overcome. There are issues regarding test quality, psychological ramifications, integration with the health system and genetic privacy. On closer inspection, this risk-detection tool turns out to be a significant risk in itself. So does pre-symptomatic genetic testing have a future?

The current business platform, wherein clients mail their DNA to overseas companies, is unviable as a scalable medical model. This paper proposes that the following five conditions are necessary (although not sufficient) for pre-symptomatic genetic testing to persist into the future in an acceptable form:
(i) Improved clinical utility
(ii) Updated anti-discrimination legislation pertaining to genetic test data
(iii) Accessible genetic counselling facilities and community education about interpreting genetic results
(iv) Monitoring of DTC companies by regulatory bodies such as the Therapeutic Goods Administration (TGA)
(v) Mechanisms for identity verification to prevent fraudulent DNA analysis

Let us analyse each of these propositions. Condition (i) will be gradually fulfilled as our understanding of genetic markers and bioinformatics develops. A wealth of new data is emerging from large-scale sequencing studies spanning diverse populations, with advanced modelling of gene-gene interactions. [25,26] Condition (ii) is also a likely future prospect – the report by the Australian Law Reform Commission is evidence of a responsive legislative landscape. [22] Condition (iii) is feasible, contingent on adequate funding for publicly accessible genetic counselling services and education programs. However, given that the clinical utility of DTC risk analysis is currently low, it would be difficult in the short term to justify public expenditure on counselling services targeted at test users.

Conditions (iv) and (v) are more difficult to satisfy. Since DTC companies are all located overseas, they fall outside the jurisdiction of the Australian TGA. Given that consumers may make important healthcare choices based on DTC results, it is imperative that this industry be regulated. We have three options. First, we could rely on appropriate monitoring by foreign regulatory bodies. In the US, DTC genetic tests are classed as an ‘in vitro diagnostic device’ (IVD), meaning they fall subject to FDA regulation. However, in a testimony before the US government’s Subcommittee on Oversight and Investigations in July 2010, the FDA stated that it has “generally exercised enforcement discretion” in regulating IVDs. [27] It went on to admit that “none of the genetic tests now offered directly to consumers has undergone premarket review by the FDA to ensure that the test results being provided to patients are accurate, reliable, and clinically meaningful.” This is an area of active reform in the US; however, it seems unwise for Australia to blindly accept the standards of overseas regulators.

The second option is to impose sanctions on overseas DTC testing for Australian consumers. Many prescription medicines are subject to import controls when shipped into Australia; in theory, the same regulations could be applied to genetic test kits. However, it is not difficult to imagine ways around such a ban, e.g. simply posting an oral swab overseas and receiving the results online.

A third option is to open the market for Australian DTC testing companies, which could compete with overseas services while remaining under TGA surveillance. In other words, we could cultivate a domestic industry. However, it may not be possible for fledgling Australian companies to compete on price with the large-scale US operations. It would also be hard to justify the change in policy before conditions (i) to (iii) are fulfilled. That said, of the three options discussed, this appears to be the most viable in the long term.

Finally, condition (v) presents one of the fundamental flaws of DTC testing. If the health system were formally involved in the testing process, the medical practitioner would be responsible for identity verification. However, it is simply not possible to reliably verify identity in a mail-order system. The only way a DTC company could verify identity would be to have customers attend a facility in person and provide proof of identity when their DNA is collected. However, such a requirement would make it even more difficult for any Australian company to compete against online services.

Conclusion
In summary, it is very difficult to construct a practical model that addresses conditions (iv) and (v) in an Australian context. Hence, for the short term, DTC testing will likely remain a controversial, unregulated market run through overseas websites. It is the duty of the TGA to inform the public about the risks of these products, and the duty of the health system to support those who do choose to purchase a test.

For the longer term, it seems that the only sustainable solution is to move towards an Australian-based testing infrastructure linked into the healthcare system (for referrals and post-test counselling). There are many hurdles to overcome; however, one might envisage a situation, twenty years from now, where a genetic risk analysis is a standard medical procedure offered to all adults and subsidised by the health system, and where individuals particularly susceptible to certain conditions can maximise their quality of life by making educated lifestyle changes and choosing medications that best suit their genetic profiles. [28]

As a medical community, therefore, we should be wary of the current range of DTC tests, but also open-minded about the possibilities for a future partnership. If we get it right, the potential payoff for preventative medicine is huge.

Conflict of interest
None declared.

Correspondence
M Seneviratne: msen5354@uni.sydney.edu.au

References
[1] 23andMe. Genetic testing for health, disease and ancestry. Available from: www.23andme.com.
[2] Antonarakis SE. Diagnosis of genetic disorders at the DNA level. N Engl J Med. 1989;320(3):153-63.
[3] Trent R, Otlowski M, Ralston M, Lonsdale L, Young M-A, Suther G, et al. Medical Genetic Testing: Information for health professionals. Canberra: National Health and Medical Research Council, 2010.
[4] Perrone M. 23andMe’s DNA test seeks FDA approval. USA Today Business. 2012.
[5] Ramani D, Saviane C. Genetic tests: Between risks and opportunities. EMBO Rep. 2010;11:910-13.
[6] Miller N. Fine print hides risk of genetic test offer. The Age. 2010.
[7] Position statement on genetic testing – 2012. Australian Medical Association, 2012.
[8] Human Genetics Society of Australasia. Issue Paper: Direct to consumer genetic testing. 2007.
[9] Clinical Utility of Personalised Medicine. NHMRC. 2011.
[10] Knight C, Rodder S, Sturdy S. Response to Nuffield Bioethics Consultation Paper ESRC Genomics Policy and Research Forum. 2009.
[11] Hood C, Khaw KT, Liddel K, Mendus S. Medical profiling and online medicine: The ethics of personalised healthcare in a consumer age. Nuffield Council on Bioethics. 2010.
[12] Direct to Consumer Genetic Testing: An information resource for consumers. NHMRC. 2012.
[13] Green SK. Getting personal with DNA: From genome to me-ome. Virtual Mentor. 2009;11(9):714-20.
[14] Bloss CS, Schork NJ, Topol EJ. Effect of direct-to-consumer genomewide profiling to assess disease risk. N Engl J Med. 2011;364(6):524-34.
[15] Caulfield T, McGuire AL. Direct-to-consumer genetic testing: Perceptions, problems, and policy responses. Annu Rev Med. 2012;63(1):23-33.
[16] Powell K, Cogswell W, Christianson C, Dave G, Verma A, Eubanks S, et al. Primary care physicians’ awareness, experience and opinions of direct-to-consumer genetic testing. J Genet Couns. 1-14.
[17] Frueh FW, Greely HT, Green RC. The future of direct-to-consumer clinical genetic tests. Nat Rev Genet. 2011;12:511-15.
[18] Sandroff R. Direct-to-consumer genetic tests and the right to know. Hastings Center Report. 2010;40(5):24-5.
[19] Use and disclosure of genetic information to a patient’s genetic relatives under section 95AA of the Privacy Act 1988 (Cth). NHMRC / Office of the Privacy Commissioner, 2009.
[20] Barlow-Stewart K, Keays D. Genetic Discrimination in Australia. Journal of Law and Medicine. 2001;8:250-63.
[21] Otlowski M. Investigating genetic discrimination in the Australian life insurance sector: The use of genetic test results in underwriting, 1999-2003. Journal of Law and Medicine. 2007;14:367.
[22] Essentially Yours: The protection of human genetic information in Australia (ALRC Report 96). Australian Law Reform Commission, 2003.
[23] Taylor S, Treloar S, Barlow-Stewart K, Stranger M, Otlowski M. Investigating genetic discrimination in Australia: A large-scale survey of clinical genetics clients. Clinical Genetics. 2008;74(1):20-30.
[24] Barlow-Stewart K. Life Insurance products and genetic testing in Australia. Centre for Genetics Education, 2007.
[25] Davey JW, Hohenlohe PA, Etter PD, Boone JQ, Catchen JM, Blaxter ML. Genome-wide genetic marker discovery and genotyping using next-generation sequencing. Nat Rev Genet. 2011;12(7):499-510.
[26] Saunders CL, Chiodini BD, Sham P, Lewis CM, Abkevich V, Adeyemo AA, et al. Meta-Analysis of Genome-wide Linkage Studies in BMI and Obesity. Obesity. 2007;15(9):2263-75.
[27] Food and Drug Administration Center for Devices and Radiological Health. Direct-to-Consumer Genetic Testing and the Consequences to the Public. Subcommittee on Oversight and Investigations, Committee on Energy and Commerce, US House of Representatives; 2010.
[28] Mrazek DA, Lerman C. Facilitating Clinical Implementation of Pharmacogenomics. JAMA: The Journal of the American Medical Association. 2011;306(3):304-5.


Modelling human development and disease: The role of animals, stem cells, and future perspectives

Introduction

The ‘scientific method’ begins with a hypothesis, the critical keystone of a well-designed study. Asking the right questions to form that hypothesis is important, but it is equally important to be aware of the available tools to derive the answers.

Experimental models provide a crucial platform on which to interrogate cells, tissues, and even whole animals. They broadly serve two important purposes: investigation of biological mechanisms to understand diseases and the opportunity to perform preclinical trials of new therapies.

Here, an overview of the animal models commonly used in research is provided. Limitations that may impact clinical translation of findings from animal experiments are discussed, along with strategies to overcome them. Additionally, stem cells present a novel human-derived model with great potential from both scientific and clinical viewpoints. These perspectives should draw attention to the incredible value of model systems in biomedical research, and provide an exciting view of future directions.

Animal models – a palette of choices

Animal models provide a ‘whole organism’ context in studying biological mechanisms, and are crucial in testing and optimising delivery of new therapies before the commencement of human studies. They are commonly classified as invertebrates (flies, worms) and vertebrates (fish, rodents, swine, primates), or as small animals (fish, rodents) and large animals (swine, primates, sheep).

Whilst each organism has its own niche area of research, the most frequently used is the humble mouse. Its prominence is attested by the fact that it was only the second mammalian species, after humans, to have its genome sequenced – an effort which revealed that the two species share approximately 99% of their genes. [1] One reason for the popularity of mice is that they share many anatomical and physiological similarities with humans. They are also small, hardy, cheap to maintain and easy to breed, with a short lifespan (approximately three years), [2] allowing results to be gathered more quickly. Common human diseases such as diabetes, heart disease and cancer also affect mice, [3] hence complex pathophysiological mechanisms such as angiogenesis and metastasis can be readily demonstrated. [2] Above all, the extraordinary ease with which mice can be genetically manipulated has resulted in the widespread availability of inbred, mutant, knockout, transgenic or chimeric mice for almost every purpose conceivable. [3] By blocking or overexpressing specific genes, their role in developmental biology and disease can be identified, and even demonstrated in specific organs. [4]

Humanised mice go a step further in representing what happens in the human body, thereby increasing the clinical value of knowledge gained from experiments. They contain either human genes or human tissue, allowing the investigation of human mechanisms whilst maintaining an in vivo context within the animal. Similar approaches are available in other organisms such as rats, but these are often adapted from initial advances in mice, and hardly mirror the ease and diversity with which humanised mice are produced.

Aside from the mouse, invertebrates such as the vinegar fly Drosophila [5] and the worm Caenorhabditis elegans [6] are also widely used in genetics and developmental biology research. They are particularly easy to maintain and breed, so large stocks can be kept. Furthermore, there are fewer ethical dilemmas, and invertebrate genomes are simple enough to be investigated in their entirety without being cost-prohibitive or requiring an exhaustive set of experiments. Their anatomies are also distinct and simple, allowing developmental changes to be readily visualised.

Another alternative is the zebrafish, which shares many of the advantages offered by Drosophila and C. elegans. As a vertebrate, it possesses more advanced anatomical structures and offers greater scope for investigating complex diseases such as spinal cord injury and cancer. [7] Given its inherent capacity for cardiac regeneration, the zebrafish is also of interest in regenerative medicine as we seek to harness this mechanism for human therapy. [8]

Large animals tend to be prohibitively expensive, time-consuming to manage and difficult to manipulate for use in basic science research. Instead, they have earned their place in preclinical trials. Their relatively large size and physiological similarity to humans provide the opportunity to perform surgical procedures and other interventions on a scale similar to that used clinically. Disease models created in sheep or swine are representative of the complex biological interactions present in highly evolved mammals, and hence may be suitable for vaccine discovery. [9] Furthermore, transgenic manipulation is now possible in non-human primates, presenting an opportunity to develop humanised models. [10] Despite this, there are obvious limitations confining their use to specialised settings. Large animals need more space, are difficult to transport, require expert veterinary care, and their advanced psychosocial awareness raises ethical concerns. [9]

The clinical context of animal experimentation

A major issue directly relevant to clinicians is the predictive value of animal models. Put simply, how much of research using animals is actually clinically relevant? Although most medical therapies in use today were initially developed using animal models, it is also recognised that many animal experiments fail to reproduce their findings when translated into clinical trials. [11] The reasons for this are numerous, and require careful analysis.

The most obvious is that, despite some similarities, animals are still animals and humans are humans. Genetic similarities between species as seemingly disparate as humans and mice may lead to assumptions of conserved function that are not necessarily correct. Whilst comparing genomes can indicate similarities between two species, such studies cannot capture differences in the expression or function of a gene that may occur at a molecular level. [12]

The effectiveness and clinical relevance of experimental animal trials are further complicated by epigenetics: the modification of gene expression by environmental or other cues without any change in the underlying DNA sequence. [13] Such changes are now considered just as central to the pathogenesis of cancer and other conditions as genetic mutations.

It is also important to consider the multi-factorial nature of human diseases. Temporal patterns, such as asymptomatic or latent phases of disease, can further complicate matters. Patients have co-morbidities, risk factors and family histories, all of which contribute to disease in ways we may still not completely understand. With such complexity, animal models cannot encapsulate the overall pathophysiology of human disease. Animals may be too young, too healthy, or too streamlined in sex or genetics. [14] To obtain animals with specific traits, they are often inbred such that two animals in the same experiment have identical genetic make-up – like twins, hardly representative of the diversity present in nature. Understandably, it can be an extraordinary challenge to incorporate all these dimensions into one study, especially when the very principles of the scientific method dictate that all variables except the one under investigation be controlled as far as possible.

A second area of concern is the sub-optimal rigour and design of animal experiments. Scientists who conduct animal experiments and clinicians who conduct clinical trials often have different goals and perspectives. Due to ethical and cost concerns, the sample size of animal experiments is often kept to a minimum, and studies are prolonged no longer than necessary, often with arbitrarily determined end-points. [14] Randomisation, concealed allocation and blinded outcome assessment are poorly enforced, leading to concerns of experimental bias. [11] Additionally, scientific experiments are rarely repeated due to an emphasis on originality, whereas clinical trials are often repeated (sometimes as multi-centre trials) to assess the reproducibility of results. Furthermore, clinical trials are more likely to be published regardless of the nature of their results; in contrast, scientific experiments with negative findings or low statistical significance often go unreported. These gaps highlight that preclinical studies should be expected to adhere to the same standards and principles as clinical trials in order to improve the translatability of results between the two settings.

Although deficiencies in research conduct are a concern, the fundamental issue remains that even the best-designed preclinical study cannot overcome the inherent differences between animal models and ‘real’ human patients. However, it is reassuring that we are becoming better at manipulating animal models and enhancing their compatibility with their human counterparts. This drive towards increasingly sophisticated animal models will provide more detailed and clinically relevant answers. Additionally, with the recognition that a single animal model is inadequate on its own, experiments may be repeated in multiple models; each provides a different perspective and leads to a more comprehensive and balanced conclusion. A suggested structure is to begin with proof-of-principle experiments in small, relatively inexpensive and easily manipulated animals, and then scale up to larger animal models.

‘Human’ experimental models – the revolution of stem cells

Given the intrinsic differences between animals and humans, it is crucial to develop experimental systems that simulate human biology as closely as possible. Stem cells are ‘master cells’ with the potential to differentiate into more mature cells, and are involved in the development and maintenance of organs through all stages of life, from the embryo (embryonic stem cells) to the adult (tissue-specific stem cells). [15] With the discovery of human embryonic stem cells [16] and other tissue-specific stem cells, [17] it is now possible to study the developmental biology of human tissues and organs in the laboratory. Stem cells may be studied under various controlled conditions in a culture dish, or even implanted into an animal to recapitulate in vivo conditions. Furthermore, stem cell transplantation has been used in animal models of disease to replace lost or damaged tissue, and these methods are now entering high-profile clinical trials with both embryonic stem cells [18] and tissue-specific stem cells. [19] Although stem cells hold great potential, translating this into the clinical environment has been hindered by several obstacles. Chiefly, tissue-specific stem cells are rare and difficult to isolate, while embryonic stem cells can only be created by destroying an embryo. To generate personalised embryonic stem cells for cell therapy or disease modelling, they must be created via ‘therapeutic cloning.’ The considerable ethical quandary associated with this mired the field in controversy and political debate, bringing research almost to a standstill. Fortunately, stem cell research was rejuvenated in 2007 with the revolutionary discovery of induced pluripotent stem (iPS) cells – a discovery notable enough to be awarded the 2012 Nobel Prize in Physiology or Medicine.

Induced pluripotent stem (iPS) cells are created by reprogramming mature cells (such as skin fibroblasts) back into a pluripotent ‘stem cell’ state, from which they can re-differentiate into cells of any of the three germ layers, irrespective of their original lineage. [20] Cells from patients with various diseases can be reprogrammed into iPS cells, then examined and compared with cells from healthy individuals to understand disease mechanisms and identify therapeutic opportunities. Rather than relying on models created in animals, this approach represents a ‘complete’ model in which all genes contributing to a specific disease are present. Crucially, it enables the previously inconceivable notion of deriving patient-specific ‘disease in a dish’ models, which could be used to test therapeutic response. [21] It also provides unprecedented insight into conditions such as those affecting the heart [22] or brain, [23] which have been difficult to study owing to limited access to tissue specimens and the difficulty of conducting experiments in live patients.

However, a model system resting purely on stem cells would be relegated to in vitro analysis, without the whole-organism outlook that animal experiments afford us. Accordingly, by combining stem cells with rapidly evolving cell transplantation techniques it is possible to derive stem cell-based animal models. Although this field is flourishing at an exponential rate, it is still in its infancy. It remains to be seen how iPS technology will translate to the pharmaceutical industry, and whether personalised drug screening assays will be adopted clinically.

Conclusion

Experimental models provide insight into human biology in ways that are more detailed and innovative than ever before, with a dazzling array of choices now available. Although the limitations of animal models can be sobering, they remain highly relevant in biomedical research. Their contribution to clinical knowledge can be strengthened by refining models to mimic human biology as closely as possible, and by modifying research methods to include protocols similar to those used in clinical trials. Additionally, the emergence of stem cells has shifted current paradigms by introducing patient-specific models of human development and disease. Stem cell technology should not be seen as rendering animal models obsolete, however, but rather as a complementary methodology that should improve the predictive power of preclinical experiments as a whole.

Understanding and awareness of these advances are imperative to becoming an effective researcher. By applying these models and maximising their potential, medical students, clinicians and scientists alike will enter a new frontier of scientific discovery.

Conflict of interest

None declared.

Correspondence

k.yap@amsj.org


Eye protection in the operating theatre: Why prescription glasses don’t cut it

Introduction
Needle-stick injury represents a serious occupational hazard for medical professionals, and much time is spent educating students and practitioners on its prevention. Acquiring a blood-borne viral infection such as human immunodeficiency virus (HIV), hepatitis B or hepatitis C from a patient is a rare yet devastating event. While most often associated with ‘sharps’ injuries, viral transmission is possible across any mucocutaneous membrane, including the eye. Infection via the transconjunctival route is a particularly relevant occupational hazard for operating room personnel, who commonly encounter bodily fluids. Published cases of HIV seroconversion after ocular blood splash reinforce the importance of eye protection. [1]

Surgical operations carry an inherent risk of blood splash injury; masks with visors are provided in operating theatres for this reason. However, many surgeons and operating personnel rely solely upon prescription glasses for eye protection, despite evidence that spectacles are an ineffective safeguard against blood splash injury. [2]

Incidence of blood splash injury
Blood splash is understandably more common in some surgical specialties, such as orthopaedics, where power tools and other instruments increase the likelihood of blood spray. [3] Within these specialties the risk is acknowledged, and the use of more comprehensive eye protection is usually routine.

Laparoscopic and endoscopic procedures may be viewed as particularly low-risk, yet in one prospective study the rate of positive blood splash found on post-operative examination of eye protection approached 50%. [4] These results imply that even minimally invasive procedures must be treated with a high level of vigilance.

The prevalence of blood splash during general surgical operations is highlighted by a study that followed one surgeon over a 12 month period and recorded all bodily fluids evident on protective eyewear following each procedure. [5] Overall, 45% of surgeries performed resulted in blood splash and an even higher incidence (79%) was found in vascular procedures. In addition, half of the laparoscopic cases were associated with blood recorded on the protective eyewear postoperatively.

A similar prospective trial undertaken in Australia found that protective eye shields tested positive for blood in 44% of operations, yet the surgeon was aware of the incident in only 18% of these cases. [6] Much blood spray during surgery is not visually perceptible, and this study demonstrates that the incidence of blood splash during a procedure may be considerably higher than realised.

Despite blood splash occurring predominantly within the operating theatre, the risks of these injuries are not limited to surgeons and theatre staff – even minor surgery carries a considerable risk of blood splash. A review of 500 simple skin lesion excisions in a procedural dermatology unit revealed positive blood splash on the facemask or visor in 66% of cases, highlighting the need for protective eyewear in all surgical settings. [7]

Risk of blood splash injury
Although a rare occurrence, even a basic procedure such as venepuncture can result in ocular blood splash injury. Several cases of confirmed hepatitis C virus (HCV) transmission via the conjunctival route have been reported. [8-10]

Although the rates of blood-borne infectious disease are reasonably low within Australia, and the rate of seroconversion from a blood splash injury is likewise low at around 2%, [9] the consequences of contracting HIV, hepatitis B virus (HBV) or HCV from a seropositive patient are potentially serious and require strict adherence to post-exposure prophylaxis protocols. [11] Exposure to bodily fluids, particularly blood, is an unavoidable occupational risk for most health care professionals, but personal risk can be minimised by adopting appropriate universal precautions.

Among operating theatre personnel who wear prescription glasses, there is a common belief that no additional eye protection is necessary. The 2007 Waikato Eye Protection Study [2] surveyed 71 practising surgeons, of whom 45.1% required prescription glasses while operating. Of the respondents, 84.5% had experienced periorbital blood splash during their operating careers, and 2.8% had gone on to contract an illness from such an event. While nearly 80% of participants routinely used eye protection, 68% of those who wore prescription glasses used them as their sole eye protection.

A 2009 in vitro study of the effectiveness of various forms of eye protection in orthopaedic surgery [12] employed a simulation model in which a mannequin head was placed in a typical position in the operating field while femoral osteotomy was performed on a cadaveric thigh. The resulting blood splash on six different types of protective eyewear was measured, and prescription glasses were found to offer no benefit over control (no protection). While none of the eyewear tested offered complete protection, significantly lower rates of conjunctival contamination were recorded for recommended eyewear, including a facemask with eyeshield, hard plastic glasses and disposable plastic glasses.

Prevention and management of blood splash injury
Given that blood splash is an occupational hazard, the onus is on the hospital and clinical administration to ensure that there are adequate supplies of protective eye equipment available. Disposable surgical masks with full-face visors have been shown to offer the highest level of protection from blood splash injury [12] and ought to be readily accessible for all staff involved in procedures or settings where contact with bodily fluids is possible. The use of masks and visors should be standard practice for all theatre staff, including assistants, scrub nurses and observers, regardless of the use of prescription spectacles.

Should an incident occur, a procedure similar to that used for needle-stick injury may be followed to minimise the risk of infection. The eye should first be rinsed thoroughly to remove as much of the fluid as possible and serology should be ordered promptly to obtain a baseline for future comparisons. An HIV screen and acute hepatitis panel (HAV IgM, HB core IgM, HBsAg, HCV and HB surface antibody for immunised individuals) are indicated. Post-exposure prophylaxis (PEP) should be initiated as soon as practicable unless the patient is known to be HIV, HBV and HCV negative. [13]

Conclusion
Universal precautions are recommended in all instances where there is potential exposure to patient bodily fluids, with an emphasis on appropriate eye protection. Prescription glasses are unsuitable as the sole source of eye protection from blood splash injury. Given that a blood splash injury can occur without the wearer's knowledge, regular blood tests for health care workers involved in frequent procedural activity may allow early detection of, and intervention for, workplace-acquired infection.

Conflict of interest
None declared.

Correspondence
S Campbell: shaun.p.campbell@gmail.com

References
[1] Eberle J, Habermann J, Gurtler LG. HIV-1 infection transmitted by serum droplets into the eye: a case report. AIDS. 2000;14(2):206-7.
[2] Chong SJ, Smith C, Bialostocki A, McEwan CN. Do modern spectacles endanger surgeons? The Waikato Eye Protection Study. Ann Surg. 2007;245(3):495-501.
[3] Alani A, Modi C, Almedghio S, Mackie I. The risks of splash injury when using power tools during orthopaedic surgery: a prospective study. Acta Orthop Belg. 2008;74(5):678-82.
[4] Wines MP, Lamb A, Argyropoulos AN, Caviezel A, Gannicliffe C, Tolley D. Blood splash injury: an underestimated risk in endourology. J Endourol. 2008;22(6):1183-7.
[5] Davies CG, Khan MN, Ghauri AS, Ranaboldo CJ. Blood and body fluid splashes during surgery – the need for eye protection and masks. Ann R Coll Surg Engl. 2007;89(8):770-2.
[6] Marasco S, Woods S. The risk of eye splash injuries in surgery. Aust N Z J Surg. 1998;68(11):785-7.
[7] Holzmann RD, Liang M, Nadiminti H, McCarthy J, Gharia M, Jones J, et al. Blood exposure risk during procedural dermatology. J Am Acad Dermatol. 2008;58(5):817-25.
[8] Sartori M, La Terra G, Aglietta M, Manzin A, Navino C, Verzetti G. Transmission of hepatitis C via blood splash into conjunctiva. Scand J Infect Dis. 1993;25:270-1.
[9] Hosoglu S, Celen MK, Akalin S, Geyik MF, Soyoral Y, Kara IH. Transmission of hepatitis C by blood splash into conjunctiva in a nurse. Am J Infect Control. 2003;31(8):502-4.
[10] Rosen HR. Acquisition of hepatitis C by a conjunctival splash. Am J Infect Control. 1997;25:242-7.
[11] NSW Health Policy Directive, AIDS/Infectious Diseases Branch. HIV, Hepatitis B and Hepatitis C – Management of Health Care Workers Potentially Exposed. 2010; Circular No 2003/39, File No 98/1833.
[12] Mansour AA, Even JL, Phillips S, Halpern JL. Eye protection in orthopaedic surgery: an in vitro study of various forms of eye protection and their effectiveness. J Bone Joint Surg Am. 2009;91(5):1050-4.
[13] Klein SM, Foltin J, Gomella LG. Emergency Medicine on Call. New York: McGraw-Hill; 2003. p. 288.


Medical students in the clinical environment

Introduction

It is common for medical students to feel apprehension and uncertainty in the clinical environment. It can be a daunting setting, in which students can sometimes feel as if they are firmly rooted to the bottom of the pecking order. However, there are many ways medical students can contribute to their healthcare teams. Whilst students are not able to formally diagnose patients or prescribe medications, they remain an integral part of the healthcare landscape and culture. The transition from being ‘just’ a medical student to being a confident, capable medical professional is a big step, but an important one in our development from the textbook to the bedside. By being proactive and committed, students can be of great help and achieve improved outcomes in a clinical setting. Through this editorial we hope to illustrate several methods one can employ to ease this transition.

Concerns of medical students

When faced with the clinical environment, most medical students will have some form of reservation regarding various aspects of clinical practice. Concerns described in the literature include being accepted as part of the team, [1] fatigue, [2, 3] potential mental abuse, [4, 5] poor personal performance and lifestyle issues. [6, 7] These concerns can mostly be split into three parts: concerns regarding senior clinicians, concerns regarding the clinical environment, and concerns regarding patient interaction. [1] Practising clinicians hold the key to effective medical education, and their acceptance of medical students is often crucial to a memorable learning experience. [1] Given the hierarchical nature of most medical organisations, senior clinicians, as students' direct ‘superiors’, are given the responsibility of assessing them. Concerns regarding the clinical environment refer to the demands on students during clinical years, such as on-call shifts, long hours, early starts and the pressure to gain practical knowledge. Anecdotally, it is common to hear of medical students becoming consumed by their study of medicine and rarely having time to pursue other interests in life.

Patient-student interaction is another common source of anxiety, as medical students are often afraid of causing harm to real-life patients. Medical students are often encouraged to perform invasive practical skills (such as venipuncture, intravenous cannulation, catheterisation, suturing, invasive clinical examinations, nasogastric tube insertion, airway management and arterial blood gases) and to take sensitive histories. We have the capacity to hurt our patients physically or psychologically, and Rees et al. [8] have recently reported intimate examinations being performed by Australian medical students without valid consent. This must be balanced against our need to learn as students, so that we avoid making errors when we eventually enter clinical practice. These are all pertinent points that must be addressed to ensure that the average medical student can feel comfortable and contribute to the team in an ethical manner.

Attitudes towards medical students

Despite the concerns of medical students regarding the attitudes of clinicians, allied health professionals and patients towards them, most actually take a positive view of having students in the clinical environment. Most studies have shown that the majority of patients are receptive to medical students and have no issue with disclosing personal information or being examined. [9-11] In particular, patients who were older or had prior admissions tended to be more accepting of student participation in their care. [9, 12] These findings were consistent across a number of specialties, even those dealing with genitourinary issues. [13] On a cautionary note, students should bear in mind that a sizeable minority of patients prefer to avoid medical student participation, and under these circumstances it is important to respect patient autonomy and refrain from being involved in their care. [14] Graber et al. [14] have also reported that patients are quite apprehensive about having medical students perform procedures on them, particularly more invasive procedures such as central line placement or lumbar puncture. Interestingly, a sizeable minority (21%) preferred never to have medical students perform venipuncture, [14] a procedure often considered minor by medical professionals. This is a timely reminder that patient perspectives often differ from ours, and that we need to respect their opinions and choices.

Ways we can contribute

As aspiring medical professionals, our primary objective is to actively seek ways to learn from experienced colleagues and real-life patients about the various conditions they face. Being a proactive learner is a crucial aspect of being a student, and this in itself can benefit the clinical team through the sharing of new knowledge, the promotion of academic discussion, or by motivating senior clinicians. However, as medical students we can also actively contribute to the healthcare team in a variety of practical ways, including formulating differential diagnoses, assisting in data collection, preventing medical errors and attending to the emotional well-being of patients. These are simple yet effective ways of fulfilling one's role as a medical student, with potentially meaningful outcomes for patients.

Preventing medical errors

As medical students, we can play an important role in preventing patient harm and picking up medical errors. Medical errors can arise for a wide variety of reasons, ranging from miscommunication to lost documentation to a lack of physician time. [15-18] These are all situations in which medical students can be as capable as medical professionals of noticing errors. Seiden et al. [19] report four cases in which medical students prevented medical errors and ensured patient safety, ranging from ensuring sterile technique in surgery to correcting a medication error to respecting a do-not-resuscitate order. These are all cases within the circle of competence of most medical students, and anecdotally there are many more situations in which a medical student has helped to reduce medical errors. One study found that up to 76% of second-year medical students at the University of Missouri-Columbia had observed a medical error, [17] yet only 56% reported the error to the resident-in-charge. Various factors contribute to this relatively low percentage: inexperience, lack of confidence, hesitancy to voice opinions, being at the bottom of the medical hierarchy and fear of conflict. [17] Whilst medical students should not be relied upon as primary gatekeepers of patient safety, we should be more forthcoming in voicing our opinions and concerns. By being involved and attuned to the fact that medical errors are common, we can make a significant difference to a patient's well-being. In recognition of the need to educate medical students about the significance of medical errors, there have been efforts to integrate this formally into the medical curriculum. [20, 21]

Assistance with collecting data

Physicians in clinical environments are notoriously short of time. The average duration of a consultation may range from eight to nineteen minutes, [22-24] which is often insufficient to take a comprehensive history. A range of administrative duties also reduces patient interaction time, such as ordering investigations, filling out drug charts, arranging referrals or finding a hospital bed. [25,26] Mache et al. [25,26] reported that paediatricians and surgeons spent up to 27% and 21% of their time respectively on administrative duties and documentation. Medical students tend to have fewer administrative duties and are thus able to spend more time with individual patients. Medical students can be just as competent at taking medical histories or examining patients, [27,28] and they can uncover crucial pieces of information that had gone unnoticed, such as the presence of a ‘Do Not Resuscitate’ order in a seriously ill patient. [19] Students are also often encouraged to try their hand at practical skills such as venipuncture, history taking and clinical examination, all of which save physician time and contribute to the diagnostic process.

Emotional well-being of patients

Due to the unique nature of the hospital environment, patients often experience a range of negative emotions, from anxiety and apprehension to depression. [29-31] A patient's journey through the hospital can be an unnerving and disorientating experience, in which they are referred from unit to unit with several different caregivers at each stage of the process. The issue is compounded by the fact that clinicians do not always have sufficient patient contact time to soothe patients' fears and emotional turmoil; studies have shown that direct patient contact represents a small proportion of work time, as little as 4% in some cases. [25,26,32,33] Most patients feel comfortable with, and enjoy, their interactions with medical students, and some even feel that they benefit from having medical students in the healthcare team. [9,10,12,14,34] By being empathetic and understanding of our patients' conditions, we can often alleviate the isolating and disorientating nature of the hospital environment. [12,35]

International health

Most medical students, particularly earlier in the course, are motivated by idealistic notions of making a difference to the welfare of our patients. [36,37] This often extends to the less fortunate in developing countries, and students often have a strong interest in global health and overseas electives. [38,39] This can be a win-win situation for both parties: healthcare systems in developing countries stand to benefit from the additional help and expertise provided by students, while students gain educational benefits (recognising tropical conditions, public health, alternative medicine), enhanced skills (clinical examination, performing investigations), cultural exposure and the fostering of certain values (idealism, community service). [38] However, it is important to recognise our limits as medical students and learn how to turn down requests that are beyond our scope of knowledge, training and experience. This is an ethical dilemma that many students face whilst on electives in resource-poor areas, and there is often a fine line to tread between providing help to those in desperate need and inappropriately exploiting one's position. We have the potential to do more harm than good when exceeding our capabilities, and given the lack of clear guidelines it falls to the student to be aware of these ethical dilemmas and to draw the line between right and wrong in such situations. [40,41]

Student-run clinics and health promotion activities

In other countries, such as the United States, student-run medical clinics play a crucial role in the provision of affordable healthcare. [42-45] These clinics number over 120 across the country and record up to 36 000 visits nationwide. [43] In these clinics, students from a variety of disciplines (such as medicine, nursing, physiotherapy, dentistry, alternative medicine, social work, law and pharmacy) collaborate to manage patients from disadvantaged backgrounds. [46] Whilst this concept is still emerging in Australia (the first student-run clinic, the REACH clinic – Realising Education, Access and Collaborative Health – was initiated this year by Doutta Galla Community Health and the University of Melbourne), [47] there is a strong tradition of medical students being heavily involved in health promotion projects in their local communities. [48] It is not uncommon to hear of students actively participating in community health promotion clinics, blood donation drives or blood pressure screening, [49] all of which have practical implications for public health. By modifying our own health behaviours and actively participating in local communities, students can have a tangible impact and influence others to lead healthier lifestyles.

Note of caution

Whilst medical students should actively participate and be an integral part of the medical team, care must be taken not to overstep the professional boundaries of our role. It is always important to remember that our primary aim is to learn how to care for patients, not to be the principal team member responsible for patient care. Several ethical issues surrounding the behaviour of medical students in clinical settings have arisen in recent times. A prominent example is the lack of valid consent when observing or performing intimate examinations. The report by Rees et al. [8] generated widespread controversy and public outrage. [50] The study showed that most medical students complied with the instructions of more senior clinicians and performed sensitive examinations without explicit consent, sometimes whilst patients were under anaesthesia. A variety of reasons led to these actions, ranging from the lack of similar learning opportunities to presumed pressure from supervising doctors. This is not a new issue; an earlier study by Coldicott et al. [51] also highlighted it as a problem. As emerging medical professionals we must avoid getting carried away by the excitement of clinical practice and ignoring the vulnerability of our patients.

Conclusion

The clinical environment offers medical students limitless potential to develop their clinical acumen. As medical students we have the opportunity to participate in all stages of patient care, from helping formulate a diagnosis to proposing a management plan. Holistic care goes beyond the physical aspects of disease, and medical students can play an important role in ensuring that the psychosocial well-being of patients is not ignored. Our impact is not restricted to the hospital setting; we are limited only by our imagination and determination. By harnessing the idealism unique to medical students, we can create truly inspirational projects that influence local and overseas communities. Through experiencing a full range of clinical scenarios in different environments, we can develop a generation of doctors who are not only clinically astute, but also well-rounded individuals able to connect with patients from all backgrounds. As medical students we have the potential to contribute in a practical manner with tangible outcomes, and we should aspire to that as we make the fifth cup of coffee for the busy registrar on call.

Acknowledgements

Michael Thompson for his feedback and assistance in editing draft manuscripts.

Conflict of interest

None declared.

Correspondence

f.chao@amsj.org

 


The AMSJ leading the way in Australian medical student research

Welcome to Volume 3, Issue 2, another successful issue of the Australian Medical Student Journal. The current issue again provides our medical student and junior doctor readership with a broad range of intellectually intriguing topics.

Our medical student authors have continued to submit quality articles, demonstrating their important contribution to research. Publication of medical student research is an important part of the transition from being a medical student to a junior academic clinician and the AMSJ continues to provide this opportunity for Australian medical students.

Some key highlights from this issue include an editorial from our Associate Editor, Foong Yi Chao, who illustrates the important role that medical students have in clinical settings, in addition to their role in medical research. Continuing this theme are submissions that describe an impressive Australian medical student-led project aimed at reducing the incidence of malaria in a Ugandan community, as well as a student elective project in India. Other submissions range from an examination of the traditional white coat in clinical medicine, the history and evolution of abdominal aortic aneurysm repair surgery, case reports in paediatric surgery and infectious diseases, a student perspective on palliative care medicine, and the future role of direct-to-consumer genetic tests.

We also present a systematic review on “Predicting falls in the elderly” by Mina Sarofim, which has won the award for the best article of Volume 3, Issue 2. This research was identified by our editors and reviewers as being of particularly high quality, with robust and rigorous methodology. Sarofim’s article analyses an important problem and a cause of great morbidity in the elderly population.

The Hon. Jillian Skinner MP, New South Wales Minister for Health and Minister for Medical Research provides us with an insightful discussion on the role of e-health and telemedicine programs in improving healthcare. Advances in e-health will be of significant value to the Australian community, especially in rural and regional areas where a lack of appropriate specialist care has been a major problem.

The AMSJ continues to support initiatives to encourage student research. We are pleased to be publishing a supplementary online issue of research abstracts presented at the Australasian Students’ Surgical Conference (ASSC) in June this year.

Another new addition to our website will be a database of all peer reviewers who have contributed in 2012. We are always on the lookout for new peer reviewers, who are welcome to fill in their details via our website.

The AMSJ Blog is another initiative that we are excited to have been redeveloping and revitalising. From November 2012, readers can look forward to regular fortnightly articles from the AMSJ staff. Topics coming up include conference summaries, tips on professional networking and even a discussion on the coffee habits of medical students!

Since our inaugural issue in 2010, the AMSJ has continued to expand as a student publication. We received over 70 submissions for this issue and have been able to continue to publish approximately 30-50% of submissions. We have also recently completed a new Australia-wide recruitment of staff members. Our nationwide staff have continued to work together successfully through teleconference meetings and email.

The production of this journal is a major undertaking, with several staff members completing their final medical school exams during the process of compiling this issue. We would like to extend our thanks to all of our voluntary staff members, as well as to our external peer reviewers, whose invaluable assistance and donated time have ensured another successful issue of the AMSJ.

In addition, we would like to thank you, our valued readers and authors, for your continued support and for providing the body of work that has made this publication possible. We hope that you will enjoy the current issue as much as we have enjoyed compiling it.


The history of abdominal aortic repair: from Egypt to EVAR

Introduction

An arterial aneurysm is defined as a localised dilation of an artery to greater than 50% of its normal diameter. [1] Abdominal aortic aneurysm (AAA) is common, with an incidence five times greater in men than in women. [2] In Australia, the prevalence of AAAs is 4.8% in men aged 65-69 years, rising to 10.8% in those aged 80 years and over. [3] The mortality from ruptured AAA is very high, approximately 80%, [4] whilst the aneurysm-related mortality of surgically treated, asymptomatic AAA is around five percent. [5] In Australia, AAAs make up 2.4% of the burden of cardiovascular disease, contributing 14,375 disability-adjusted life years (DALYs), ahead of hypertension (14,324) and valvular heart disease (13,995). [6] Risk factors for AAA of greater than four centimetres include smoking (RR=3-5), family history (OR=1.94), coronary artery disease (OR=1.52), hypercholesterolaemia (OR=1.44) and cerebrovascular disease (OR=1.28). [7] Currently, the approach to AAA management involves active surveillance, risk factor reduction and surgical intervention. [8]

The surgical management of AAAs dates back over 3000 years and has evolved greatly since its conception. Over the course of surgical history arose three landmark developments in aortic surgery: crude ligation, open repair and endovascular AAA repair (EVAR). This paper aims to examine the development of surgical interventions for AAA, from its experimental beginnings in ancient Egypt to current evidence based practice defining EVAR therapy, and to pay homage to the surgical and anatomical masters who made significant advances in this field.

Early definition

The word aneurysm is derived from the Greek aneurysma, meaning ‘widening’. The first written evidence of AAA is recorded in the ‘Book of Hearts’, part of the Ebers Papyrus of ancient Egypt, dating back to 1550 BC. [9] It stated that “only magic can cure tumours of the arteries.” India’s Sushruta (800-600 BC) mentions aneurysm, or ‘Granthi’, in chapter 17 of his great medical text, the ‘Sushruta Samhita’. [10] Although undistinguished from painful varicose veins in his text, Sushruta shared a similar sentiment to the Egyptians when he wrote “[Granthi] can be cured only with the greatest difficulty”. Galen (126-c. 216 AD), a surgeon of ancient Rome, first formally described these ‘tumours’ as localised pulsatile swellings that disappear with pressure. [11] He was also the first to draw anatomical diagrams of the heart and great vessels. His work with wounded gladiators, and that of the Greek surgeon Antyllus in the same period, helped to define traumatic false aneurysms as morphologically rounded, distinct from true, cylindrical aneurysms caused by degenerative dilatation. [12] This work formed the basis of the modern definition.

Early ligation

Antyllus is also credited with performing the first recorded surgical interventions for the treatment of AAA. His method involved midline laparotomy, proximal and distal ligation of the aorta, central incision of the aneurysm sac and evacuation of thrombotic material. [13] Remarkably, a few patients treated without aseptic technique or anaesthetic managed to survive for some period. Antyllus’ method was further described in the seventh century by Aetius, whose detailed paper, ‘On the Dilation of Blood Vessels’, described the development and repair of AAA. [14] His approach involved stuffing the evacuated sac with incense and spices to promote pus formation, in the belief that this would aid wound healing. Although this belief would wane as knowledge of the process of wound healing improved, Antyllus’ method would remain largely unchanged until the late nineteenth century.

Anatomy

The Renaissance saw the birth of modern anatomy, and with it a proper understanding of aortic morphology. In 1543 Vesalius (1514-1564) produced the first true anatomical plates based on cadaveric dissection, in ‘De Humani Corporis Fabrica.’ [15] He later provided the first accurate diagnosis and illustrations of AAA pathology. In total, Vesalius corrected over 200 of Galen’s anatomical mistakes and is regarded as the father of modern anatomy. [16] His discoveries began almost 300 years of medical progress characterised by the ‘surgeon-anatomist’, paving the way for the anatomical greats of the sixteenth, seventeenth and eighteenth centuries. It was during this period that the great developments in the anatomical and pathological understanding of aneurysms took place.

Pathogenesis

Ambroise Paré (1510-1590) noted that aneurysms seemed to manifest following syphilis; however, he attributed the arterial disease to syphilis treatment rather than to the illness itself. [17] Stress on the arteries from hard work, shouting, trumpet playing and childbirth was considered another possible cause. Morgagni (1682-1771) described in detail the luetic pathology of ruptured saccular aortic aneurysms in syphilitic prostitutes, [18] whilst Monro (1697-1767) described the intima, media and adventitia of arterial walls. [19] These key advances in arterial pathology paved the way for the Hunter brothers of London (William Hunter [1718-1783] and John Hunter [1728-1793]) to develop the modern definitions of true, false and mixed aneurysms. Aneurysms were now accepted to be caused by ‘a disproportion between the force of the blood and the strength of the artery’, with syphilis as a risk factor rather than a sole aetiology. [12] As life expectancy rose dramatically in the twentieth century, it became clear that syphilis was not the only cause of arterial aneurysms, as the great vascular surgeon Rudolf Matas (1860-1957) stated: “The sins, vices, luxuries and worries of civilisation clog the arteries with the rust of premature senility, known as arteriosclerosis or atheroma, which is the chief factor in the production of aneurysm.” [20]

Modern ligation

The modern period of AAA surgery began in 1817, when Cooper first ligated the aortic bifurcation for a ruptured left external iliac aneurysm in a 38-year-old man. The patient died four hours later; however, this did not discourage others from attempting similar procedures. [21]

Ten further unsuccessful cases were recorded prior to the turn of the twentieth century. It was not until 1923, more than a century after Cooper’s attempt, that Matas performed the first successful complete ligation of the aorta for aneurysm; the patient survived seventeen months before dying of tuberculosis. [22] Described by Osler as the ‘modern father of vascular surgery’, Matas also developed the technique of endoaneurysmorrhaphy, which involved ligating the aneurysmal sac upon itself to restore normal luminal flow. This was the first recorded technique aiming to spare blood flow to the lower limbs, an early prelude to the homograft, synthetic graft and EVAR.

Early Alternatives to Ligation

Despite Matas’ landmark success, the majority of surgeons of the era shared Sushruta’s millennia-old fear of aortic surgery. The American Surgical Association wrote in 1940, “the results obtained by surgical intervention have been discouraging.” Such fear prompted a resurgence of techniques introducing foreign material into the aneurysmal lumen in the hope of promoting thrombosis. First attempted by Velpeau [23] with sewing needles in 1831, this technique was modified by Moore [24] in 1864 using 26 yards of iron wire. Failure of aneurysm thrombosis was blamed on ‘under packing’ the aneurysm. Corradi used a similar technique, passing electric current through the wire to induce thrombosis. This technique became known as fili-galvanopuncture or the ‘Moore-Corradi method’. Although it lost popularity for aortic procedures, it marked the beginning of electrothrombosis and coiling of intracranial aneurysms in the latter half of the twentieth century. [25]

Another alternative was wrapping the aneurysm with material in an attempt to induce fibrosis and contain the aneurysm sac. AAA wrapping with cellophane was investigated by Pearse in 1940 [26] and Harrison in 1943. [27] Most notably, Nissen, the pioneer of Nissen fundoplication for hiatus hernia, famously wrapped Albert Einstein’s AAA with cellophane in 1948. [28] The aneurysm finally ruptured in 1955, with Einstein refusing surgery: “I want to go when I want. It is tasteless to prolong life artificially.” [28]

Anastomosis

Many would argue that the true father of modern vascular techniques is Alexis Carrel (1873-1944), whose experimental work laid the foundations for landmark procedures such as the first saphenous vein bypass (1948), the first successful kidney transplant (1955) and the first human limb re-implantation (1962). [13,29] Friedman states that “there are few innovations in cardiac and vascular surgery today that do not have roots in his work.” [13] Perhaps of greatest note was Carrel’s development of the triangulation technique for vessel anastomosis.

This technique was utilised by Crafoord in Sweden in 1944, in the first correction of aortic coarctation, and by Shumacker [30] in 1947, to correct a four centimetre thoracic aortic aneurysm secondary to coarctation. Prior to this time, coarctation was treated in a similar fashion to AAA, with ligation proximal and distal to the defect. [31] These developments would prove to be great milestones in AAA surgery, representing the first successful aortic aneurysm resections with restoration of arterial continuity.

Biological grafts

Despite this success, restoration of arterial continuity was limited to the thoracic aorta. Abdominal aneurysms remained too large to be anastomosed directly and a different technique was needed. Carrel played a key role in the development of arterial grafting, used when end-to-end anastomosis was unfeasible. The original work was performed by Carrel and Guthrie (1880-1963) with experiments transplanting human and canine vessels. [32,33] Their 1907 paper ‘Heterotransplantation of blood vessels’ [34] began with:

“It has been shown that segments of blood vessels removed from animals may be caused to regain and indefinitely retain their function.”

This discovery led to the first replacement of a thrombosed aortic bifurcation by Jacques Oudot (1913-1953) with an arterial homograft in 1950. The patient recovered well, and Oudot went on to perform four similar procedures. The landmark first AAA resection with restoration of arterial continuity can be credited to Charles Dubost (1914-1991) in 1951. [35] His patient, a 51-year-old man, received the aorta of a young girl, harvested three weeks previously. This brief period of excitement quickly subsided when it was realised that the long-term patency of aortic homografts was poor. It did, however, lay the foundations for the age of synthetic aortic grafts.

Synthetic grafts

Arthur Voorhees (1921-1992) can be credited with the invention of synthetic arterial prosthetics. In 1948, during experimental mitral valve replacement in dogs, Voorhees noticed that a misplaced suture had later become enveloped in endocardium. He postulated that, “a cloth tube, acting as a lattice work of threads, might indeed serve as an arterial prosthesis.” [36] Voorhees went on to test a wide variety of materials as possible candidates for synthetic tube grafts, resulting in the use of vinyon-N, the material used in parachutes. [37] His work with animal models would lead to a list of essential structural properties of arterial prostheses. [38]

Vinyon-N proved robust, and was introduced by Voorhees, Jaretski and Blakemore. In 1952 Voorhees inserted the first synthetic graft into a ruptured AAA. Although the vinyon-N graft was successfully implanted, the patient died shortly afterwards from a myocardial infarction. [39] By 1954, Voorhees had successfully repaired 17 AAAs with similar grafts. Schumacker and Muhm would simultaneously conduct similar procedures with nylon grafts. [40] Vinyon-N and nylon were quickly supplanted by Orlon. Similar materials with improved tensile strength are used in open AAA repair today, including Teflon, Dacron and expanded polytetrafluoroethylene (PTFE). [41]

Modern open surgery

With the development of suitable graft material began the golden age of open AAA repair. The focus would now be largely on the Americans, particularly the surgeons DeBakey (1908-2008) and Cooley (b. 1920), leading the way in Houston, Texas. In the early 1950s, DeBakey and Cooley developed and refined an astounding number of aortic surgical techniques. DeBakey would also classify aortic dissections into different types depending on their site. In 1952, a year after Dubost’s first success in France, the pair performed the first repair of a thoracic aneurysm, [42] and a year later, the first aortic arch aneurysm repair. [43] It was around this time that the risks of spinal cord ischaemia during aortic surgery became apparent. Moderate hypothermia was used at first, supplemented in 1957 by Gerbode’s development of extracorporeal circulation, coined ‘left heart bypass’. In 1963, Gott expanded on this idea with a heparin-treated polyvinyl shunt from the ascending to the descending aorta. By 1970, centrifuge-powered left-heart bypass with selective visceral perfusion had been developed. [44] In 1973, Crawford simplified DeBakey and Cooley’s technique by introducing sequential clamping of the aorta. By moving clamps distally, Crawford allowed for reperfusion of segments following the anastomoses of what had now become increasingly complex grafts. [45] The work of DeBakey, Cooley and Crawford paved the way for the remarkable outcomes available to modern patients undergoing open AAA repair. Once feared by surgeons and patients alike, elective open AAA repair now carries a 30-day all-cause mortality of around five percent. [58]

Imaging

It must not be overlooked that significant advances in medical imaging have played a major role in reducing the incidence of ruptured AAAs and the morbidity and mortality associated with AAAs in general. The development of diagnostic ultrasound began in the late 1940s and 1950s, with simultaneous research by John Wild in the United States, Inge Edler and Carl Hertz in Sweden, and Ian Donald in Scotland. [46] It was the latter who published ‘Investigation of Abdominal Masses by Pulsed Ultrasound,’ regarded as one of the most important papers in diagnostic imaging. [47] By the 1960s, Doppler ultrasound would provide clinicians with both a structural and a functional view of vessels, with colour flow Doppler in the 1980s allowing images to represent the direction of blood flow. The Multicentre Aneurysm Screening Study showed that ultrasound screening resulted in a 42% reduction in mortality from ruptured AAAs over four years to 2002. [48] Ultrasound screening has resulted in an overall increase in hospital admissions for asymptomatic aneurysms; however, increases in recent years cannot be attributed to improved diagnosis alone, as it is known that the true incidence of AAA is also increasing in concordance with Western vascular pathology trends. [49]

In addition to the investigative power of ultrasound imaging, computed tomography (CT) scanners became available in the early 1970s. As faster, higher-resolution spiral CT scanners became more accessible in the 1980s, the diagnosis and management of AAAs became significantly more refined. [50] CT angiography has emerged as the gold standard for defining aneurysm morphology and planning surgical intervention. It is crucial in determining when emergent treatment is necessary, when calcification and soft tissue may be unstable, when the aortic wall is thickened or adhered to surrounding structures, and when rupture is imminent. [51] Overall operative mortality from ruptured AAA fell by 3.5% per decade from 1954-1997. [52] This was due to significant advances in surgical technique in combination with drastically improved imaging modalities.

EVAR

The advent of successful open surgical repair of AAAs using synthetic grafts in the 1950s proved to be the first definitive treatment for AAA. However, the procedure remained highly invasive and many patients were excluded due to medical and anatomical contraindications. [53] Juan Parodi’s work with Julio Palmaz and Héctor Barone in the late 1980s aimed to rectify this issue. Parodi developed the first catheter-based arterial approach to AAA intervention. The first successful EVAR operation was completed by Parodi in Argentina on 7 September 1990. [54] The aneurysm was approached intravascularly via a femoral cutdown. Restoration of normal luminal blood flow was achieved with the deployment of a Dacron graft mounted on a Palmaz stent. [55] There was no need for aortic cross-clamping or major abdominal surgery. Similar non-invasive strategies were explored independently and concurrently by Volodos, [56] Lazarus [57] and Balko. [58]

During this early period of development there was significant Australian involvement. The work of Michael Lawrence-Brown and David Hartley at the Royal Perth Hospital led to the manufacture of the Zenith endovascular graft in 1993, a key milestone in the development of modern-day endovascular aortic stent-grafts. [59] The first bifurcated graft was successfully implanted one year later. [60] Prof James May and his team at the Royal Prince Alfred Hospital in Sydney conducted further key research, investigating the causes of aortic stent failure and complications. [61] This group went on to pioneer the modular design of present day aortic prostheses. [62]

The FDA approved the first two AAA stent grafts for widespread use in 1999. Since then, technical improvements in device design have resulted in improved surgical outcomes and increased ability to treat patients with difficult aneurysmal morphology. Slimmer device profiles have allowed easier device insertion through tortuous iliac vessels. [63] Furthermore, fenestrated and branched grafts have made possible the stent-grafting of juxtarenal AAA, where suboptimal proximal neck anatomy once meant traditional stenting would lead to renal failure and mesenteric ischaemia. [64]

AAA intervention now and beyond

Today, surgical intervention is generally reserved for AAAs greater than 5.5cm in diameter and may be achieved by either open or endoluminal access. The UK Small Aneurysm Trial determined that there is no survival benefit to elective open repair of aneurysms of less than 5.5cm. [8] The EVAR-1 trial (2005) found EVAR to reduce aneurysm-related mortality by three percent at four years when compared to open repair; however, EVAR remains significantly more expensive and requires more re-interventions. Furthermore, it offers no advantage with respect to all-cause mortality or health-related quality of life. [5] These findings raised significant debate over the role of EVAR in patients fit for open repair. This controversy was furthered by the findings of the EVAR-2 trial (2005), which saw risk factor modification (fitness and lifestyle) as a better alternative to EVAR in patients unfit for open repair. [65] Many would argue that these figures are obsolete, with Criado stating, “it would not be unreasonable to postulate that endovascular experts today can achieve far better results than those produced by the EVAR-1 trial.” [53] It is undisputed that EVAR has dramatically changed the landscape of surgical intervention for AAA. By 2005, EVAR accounted for 56% of all non-ruptured AAA repairs but only 27% of operative mortality. Since 1993, deaths related to AAA have decreased dramatically, by 42%. [53] EVAR’s shortcomings of high long-term rates of complications and re-interventions, as well as questions of device performance beyond ten years, appear balanced by the procedure’s improved operative mortality and minimally invasive approach. [54]

Conclusion

The journey towards truly effective surgical intervention for AAA has been a long and experimental one. Once regarded as one of the most deadly pathologies, with little chance of a favourable surgical outcome, AAAs can now be successfully treated with minimally invasive procedures. Sushruta’s millennia-old fear of abdominal aortic surgery appears well and truly overcome.

Conflict of interest

None declared.

Correspondence

A Wilton: awil2853@uni.sydney.edu.au

References

[1] Kumar V, et al. Robbins and Cotran Pathologic Basis of Disease. 8th ed. Elsevier; 2010.
[2] Semmens J, Norman PE, Lawrence-Brown MMD, Holman CDJ. Influence of gender on outcome from ruptured abdominal aortic aneurysm. British Journal of Surgery. 2000;87:191-4.
[3] Jamrozik K, Norman PE, Spencer CA. et al. Screening for abdominal aortic aneurysms: lessons from a population-based study. Med. J. Aust. 2000;173:345-50.
[4] Semmens J, Lawrence-Brown MMD, Norman PE, Codde JP, Holman CDJ. The Quality of Surgical Care Project: benchmark standards of open resection for abdominal aortic aneurysm in Western Australia. Aust N Z J Surg. 1998;68:404-10.
[5] The EVAR trial participants. EVAR-1 (EndoVascular Aneurysm Repair): EVAR vs open repair in patients with abdominal aortic aneurysm. Lancet. 2005;365:2179-86.
[6] National Heart Foundation of Australia. The Shifting Burden of Cardiovascular Disease. 2005.
[7] Fleming C, Whitlock EP, Beil TL, Lederle FA. Screening for abdominal aortic aneurysm: a best-evidence systematic review for the U.S. Preventive Services Task Force. Ann Intern Med. 2005;142(3):203-11.
[8] United Kingdom Small Aneurysm Trial Participants. UK Small Aneurysm Trial. N Eng J Med. 2002;346:1445-52.
[9] Ghalioungui P. Magic and Medical Science in Ancient Egypt. Hodder and Stoughton Ltd. 1963.
[10] Bhishagratna KKL. An English Translation of The Sushruta Samhita. Calcutta: Self Published; 1916.
[11] Lytton DG, Resuhr LM. Galen on Abnormal Swellings. J Hist Med Allied Sci. 1978;33(4):531-49.
[12] Suy R. The Varying Morphology and Aetiology of the Arterial Aneurysm. A Historical Review. Acta Chir Belg. 2006;106:354-60.
[13] Friedman SG. A History of Vascular Surgery. New York: Futura Publishing Company 1989;74-89.
[14] Stehbens WE. History of Aneurysms. Med Hist 1958;2(4):274–80.
[15] Van Hee R. Andreas Vesalius and Surgery. Verh K Acad Geneeskd Belg. 1993;55(6):515-32.
[16] Kulkarni NV. Clinical Anatomy: A problem solving approach. New Delhi: Jaypee Brothers Medical Publishers. 2012;4.
[17] Paré A. Les OEuvres d’Ambroise Paré. Paris: Gabriel Buon; 1585.
[18] Morgagni GB. Librum quo agitur de morbis thoracis. Italy: Lovanni; 1767 p270-1.
[19] Monro DP. Remarks on the coats of arteries, their diseases, and particularly on the formation of aneurysm. Medical essays and Observations. Edinburgh, 1733.
[20] Matas R. Surgery of the Vascular System. AMA Arch Surg. 1956;72(1):1-19.
[21] Brock RC. The life and work of Sir Astley Cooper. Ann R Coll Surg Engl.1969; 44:1.
[22] Matas R. Aneurysm of the abdominal aorta at its bifurcation into the common iliac arteries. A pictorial supplement illustrating the history of Corrinne D, previously reported as the first recorded instance of cure of an aneurysm of the abdominal aorta by ligation. Ann Surg. 1940;122:909.
[23] Velpeau AA. Memoire sur la figure de l’acupuncture des arteres dans le traitement des anevrismes. Gaz Med. 1831;2:1.
[24] Moore CH, Murchison C. On a method of procuring the consolidation of fibrin in certain incurable aneurysms. With the report of a case in which an aneurysm of the ascending aorta was treated by the insertion of wire. Med Chir Trans. 1864;47:129.
[25] Siddique K, Alvernia J, Frazer K, Lanzino G, Treatment of aneurysms with wires and electricity: a historical overview. J Neurosurg. 2003;99:1102–7.
[26] Pearse HE. Experimental studies on the gradual occlusion of large arteries. Ann Surg. 1940;112:923.
[27] Harrison PW, Chandy J. A subclavian aneurysm cured by cellophane fibrosis. Ann Surg. 1943;118:478.
[28] Cohen JR, Graver LM. The ruptured abdominal aortic aneurysm of Albert Einstein. Surg Gynecol Obstet. 1990;170:455-8.
[29] Edwards WS, Edwards PD. Alexis Carrel: Visionary surgeon. Springfield, IL: Charles C Thomas Publisher, Ltd 1974;64–83.
[30] Shumacker HB Jr. Coarctation and aneurysm of the aorta. Report of a case treated by excision and end-to-end suture of aorta. Ann Surg. 1948;127:655.
[31] Alexander J, Byron FX. Aortectomy for thoracic aneurysm. JAMA 1944;126:1139.
[32] Carrel A. Ultimate results of aortic transplantation, J Exp Med. 1912;15:389–92.
[33] Carrel A. Heterotransplantation of blood vessels preserved in cold storage, J Exp Med. 1907;9:226–8.
[34] Guthrie CC. Heterotransplantation of blood vessels, Am J Physiol 1907;19:482–7.
[35] Dubost C. First successful resection of an aneurysm of an abdominal aorta with restoration of the continuity by human arterial graft. World J Surg. 1982;6:256.
[36] Voorhees AB. The origin of the permeable arterial prosthesis: a personal reminiscence. Surg Rounds. 1988;2:79-84.
[37] Voorhees AB. The development of arterial prostheses: a personal view. Arch Surg. 1985;120:289-95.
[38] Voorhees AB. How it all began. In: Sawyer PN, Kaplitt MJ, eds. Vascular Grafts. New York: Appleton-Century-Crofts 1978;3-4.
[39] Blakemore AH, Voorhees AB Jr. The use of tubes constructed from vinyon “N” cloth in bridging arterial defects – experimental and clinical. Ann Surg. 1954;140:324.
[40] Schumacker HB, Muhm HY. Arterial suture techniques and grafts: past, present, and future. Surgery. 1969;66:419-33.
[41] Lidman H, Faibisoff B, Daniel RK. Expanded Polytetrafluoroethene as a microvascular stent graft: An experimental study. Journal of Microsurgery. 1980;1:447-56.
[42] Cooley DA, DeBakey ME. Surgical considerations of intrathoracic aneurysms of the aorta and great vessels. Ann Surg. 1952;135:660–80.
[43] DeBakey ME. Successful resection of aneurysm of distal aortic arch and replacement by graft. J Am Med Assoc. 1954;155:1398–403.
[44] Argenteri A. The recent history of aortic surgery from 1950 to the present. In: Chiesa R, Melissano G, Coselli JS et al. Aortic surgery and anaesthesia “How to do it” 3rd Ed. Milan: Editrice San Raffaele 2008;200-25.
[45] Green Sy, LeMaire SA, Coselli JS. History of aortic surgery in Houston. In: Chiesa R, Melissano G, Coselli JS et al. Aortic surgery and anaesthesia “How to do it” 3rd Ed. Milan: Editrice San Raffaele. 2008;39-73.
[46] Edler I, Hertz CH. The use of ultrasonic reflectoscope for the continuous recording of the movements of heart walls. Clin Physiol Funct Imaging. 2004;24:118-36.
[47] Donald I. The investigation of abdominal masses by pulsed ultrasound. Lancet. 1958;271(7032):1188-95.
[48] Thompson SG, Ashton HA, Gao L, Scott RAP. Screening men for abdominal aortic aneurysm: 10 year mortality and cost effectiveness results from the randomised Multicentre Aneurysm Screening Study. BMJ. 2009;338:2307.
[49] Filipovic M, Goldacre MJ, Roberts SE, Yeates D, Duncan ME, Cook-Mozaffari P. Trends in mortality and hospital admission rates for abdominal aortic aneurysm in England and Wales, 1979-1999. Br J Surg. 2005;92(8):968-75.
[50] Kevles BH. Naked to the Bone: Medical Imaging in the Twentieth Century. New Brunswick, NJ: Rutgers University Press 1997;242-3.
[51] Ascher E, Veith FJ, Gloviczki P, Kent KC, Lawrence PF, Calligaro KD et al. Haimovici’s vascular surgery. 6th ed. Blackwell Publishing Ltd. 2012;86-92.
[52] Bown MJ, Sutton AJ, Bell PRF, Sayers RD. A meta-analysis of 50 years of ruptured abdominal aortic aneurysm repair. British Journal of Surgery. 2002;89(6):714-30.
[53] Criado FJ. The EVAR Landscape in 2011: A status report on AAA therapy. Endovascular Today. 2011;3:40-58.
[54] Criado FJ. EVAR at 20: The unfolding of a revolutionary new technique that changed everything. J Endovasc Ther. 2010;17:789-96.
[55] Hendriks JM, van Dijk LC, van Sambeek MRHM. Indications for endovascular abdominal aortic aneurysms treatment. Interventional Cardiology. 2006;1(1):63-64.
[56] Volodos NL, Shekhanin VE, Karpovich IP, et al. A self-fixing synthetic blood vessel endoprosthesis (in Russian). Vestn Khir Im I I Grek. 1986;137:123-5.
[57] Lazarus HM. Intraluminal graft device, system and method. US patent 4,787,899 1988.
[58] Balko A, Piasecki GJ, Shah DM, et al. Transluminal placement of intraluminal polyurethane prosthesis for abdominal aortic aneurysm. J Surg Res. 1986;40:305-9.
[59] Lawrence-Brown M, Hartley D, MacSweeney ST et al. The Perth endoluminal bifurcated graft system—development and early experience. Cardiovasc Surg. 1996;4:706–12.
[60] White GH, Yu W, May J, Stephen MS, Waugh RC. A new nonstented balloon-expandable graft for straight or bifurcated endoluminal bypass. J Endovasc Surg. 1994;1:16-24.
[61] May J, White GH, Yu W, Waugh RC, McGahan T, Stephen MS, Harris JP. Endoluminal grafting of abdominal aortic aneurysms: cause of failure and their prevention. J Endovasc Surg. 1994;1:44-52.
[62] May J, White GH, Yu W, Ly CN, Waugh R, Stephen MS, Arulchelvam M, Harris JP. Concurrent comparison of endoluminal versus open repair in the treatment of abdominal aortic aneurysms: analysis of 303 patients by life table method. J Vasc Surg. 1998;27(2):213-20.
[63] Omran R. Abul-Khouodud Intervention for Peripheral Vascular Disease Endovascular AAA Repair: Conduit Challenges. J Invasive Cardiol. 2000;12(4).
[64] West CA, Noel AA, Bower TC, et al. Factors affecting outcomes of open surgical repair of pararenal aortic aneurysms: A 10-year experience. J Vasc Surg. 2006;43:921–7.
[65] The EVAR trial participants. EVAR-2 (EndoVascular Aneurysm Repair): EVAR in patients unfit for open repair. Lancet. 2005;365:2187-92.

Categories
Book Reviews Articles

Harrison’s: Friend or Foe?

Longo DL, Harrison TR. Harrison’s Principles of Internal Medicine, Eighteenth Edition. London: McGraw-Hill; 2012.

RRP: $199

A review of this text has been done before, though not of Harrison’s Principles of Internal Medicine (Harrison’s) in isolation, but as a comparison with William Osler’s The Principles and Practice of Medicine. [1] The latest edition of Harrison’s has been available since July 2011, and I had been an avid user of the online version (via AccessMedicine™ through the University’s library website). The book comes in two tomes, a whopping 4012 pages in total. I have a thing for being able to physically hold a book and read it, hence not relying on the online edition, which has previously been compared with the print version; [2] my internet connection is also very erratic, and the University has a concurrent-users policy.

Alas, it was a decision that I do regret (to some extent), as I have since found myself referring to Harrison’s to find an answer to a problem, whether electronically via the DVD supplied with the book or by flicking through the book itself, and neglecting some of the other general medicine and specialised texts that I own. This speaks volumes about Harrison’s comprehensive nature, but also about my enjoyment of the text.

So what do I like about the book? It is detailed. This may speak more about me than about the text, but I think many medical students appreciate this level of detail, if only for interest rather than for what is actually required. I mean, do you know of any other books with 395 chapters and another 51 chapters available electronically? I love the detailed explanations of concepts such as “Insulin biosynthesis, secretion and action”, which would normally be found in a more specialised text such as Lehninger’s Principles of Biochemistry™, and of the pathophysiology of common diseases such as asthma, COPD and myocardial infarction. [3]

The “yellow sections” in the chapters, which cover the treatment of particular conditions, are a great reference for medical students and physicians alike. The diagrams are great, as are the flowcharts, which explain key concepts such as the development of a condition (for example, ischaemic stroke) or treatment and diagnostic algorithms, such as those for tuberculosis or HIV/AIDS. The layout of the parts, sections and chapters of the text is very logical and (if you were keen enough) could be read in order, for example:

“Part 10: Disorders of the cardiovascular system, Section 1: Introduction to cardiovascular disorders, Chapter 224: Basic Biology of the Cardiovascular system, Chapter 225: Epidemiology of Cardiovascular disease … Section 2: Diagnosis of Cardiovascular disorders, Chapter 227: Physical examination of the cardiovascular system, Chapter 228: Electrocardiography … Section 3: Disorders of rhythm … Section 4: Disorders of the heart … Section 5: Vascular disease”

It is easy to see how logically the book is organised, starting from the basics of the given system or group of conditions and then working through epidemiology, diagnosis and finally the conditions themselves. Given that Part 10 alone spans pages 1797–2082 (yes, 285 pages), you can gain an appreciation of the detail of the text. Another great feature is the “further readings” list at the end of each chapter, citing original and review publications from peer-reviewed journals so that (if interested) you can read more about the topic.

What don’t I like about the book? Having two volumes can sometimes be a little tedious when you pick up one and then find that the topic you want is in the other (although you have to remember page numbers this way, it is still preferable to one enormous tome with a tiny typeface). The organisation of the text is a double-edged sword, as it can get frustrating: searching for a condition such as polycystic ovarian syndrome (PCOS) brings up entries in sections on menstrual disorders, the biology of obesity, amenorrhoea, the metabolic syndrome, hirsutism and virilisation, and diabetes mellitus, yet there is no definitive section on PCOS itself as there is for a condition such as phaeochromocytoma. Sometimes you open a page and the amount of text overwhelms you, with no figures to break it up, which can make finding one specific passage or sentence quite intimidating for a medical student. This isn’t too large a problem in my opinion, but I have known students to be put off by books of such a nature.

References

[1] Hogan DB. Did Osler suffer from “paranoia antitherapeuticum baltimorensis”? A comparative content analysis of The Principles and Practice of Medicine and Harrison’s Principles of Internal Medicine, 11th edition. CMAJ. 1999 Oct;161(7):842-5.

[2] DeZee KJ, Durning S, Denton GD. Effect of electronic versus print format and different reading resources on knowledge acquisition in the third-year medicine clerkship. Teach Learn Med. 2005;17(4):349-54.

[3] Powers AC. Insulin biosynthesis, secretion and action. In: Longo DL, Fauci AS, Kasper DL, Hauser SL, Jameson JL, Loscalzo J, editors. Harrison’s Principles of Internal Medicine. 18th ed. Columbus: McGraw-Hill Medical; 2011.

Categories
Articles Book Reviews

The only medical science textbook you need to buy?

Wilkins R, Cross S, Megson I, Meredith D. Oxford Handbook of Medical Sciences, Second Edition. Oxford: Oxford University Press; 2011.

RRP: $47.95

A complete guide to the medical sciences that fits in your pocket? Including anatomy? It sounds like something you’d find on the bookshop shelf between Reflexology at Your Fingertips and Sex Explained. But The Oxford Handbook of Medical Sciences (OHMS) is probably one of the few serious books that handles this enormous topic and can still be picked up with one hand. The first edition was published in 2006, and it has been a fairly constant companion since I started graduate medicine at Sydney University. The dense but well-written text often feels more conducive to medical school than authoritative textbooks: if you’re asked to explain a concept in a tutorial, the 30-second answer is better than the five-minute dissertation. Compiling principles and systems in one volume also means you can flip from, say, anatomy to immunology without piling up your desk with resources. Unfortunately, the more I used the first edition, the more niggling errors I came across. Granted, most are just typos, but others were more frustrating, including a colour DNA sequencing output that seems more CSI prop than medical text, at least to someone with a molecular biology background. And an error like labelling the muscles of mastication as supplied by cranial nerve VIII (instead of V3, the mandibular division; presumably a typesetting error) is inexcusable. So OHMS1e was a great book in serious need of a revision, but could the second edition be the last medical science book you ever buy?

The OHMS second edition was published in September 2011, from $35 in online bookshops. On first impression it has not transformed into a full-colour extravaganza like the latest Oxford Handbooks of Clinical Medicine/Specialties. It is 40 pages longer than the original, 962 in total, and still small enough for a big pocket. Much of the first edition worked well, and it is good to see that the layout remains the same, with each topic generally covered in two pages or less and plenty of room for annotation. The first three chapters cover the essentials: cells, molecules and biochemistry, with some good-looking new figures. The ten systems-based chapters are now followed by a chapter on medicine and society. The final chapter, on the techniques of medical sciences, has had a timely rewrite; it won’t make you a lab scientist, but at least you’ll be able to have an intelligent conversation with someone who is. The best addition, in my opinion, is the blue boxes succinctly summarising relevant treatments and drug therapies across all the sections.

The cross-referencing to the most recent clinical Oxford Handbooks is a welcome update (in spite of a couple of entries that refer to OHCM8p.000). I would have liked to see a more thorough reworking of the anatomy section; the diagram of the muscles of the hand remains duplicated a few pages apart. The molecular biology chapter, the one I feel semi-qualified to comment on, is my major complaint. There is no mention of new sequencing technologies or of the non-coding RNAs that we are frequently told are the future of the field. Instead, Maxam-Gilbert sequencing, a technique probably last performed in the 1980s, is still covered. Furthermore, ‘junk DNA’, a term surely killed off by the ENCODE project, makes a vampire-like appearance here. [1]

In summary, if you’ve already built a reasonable understanding of the medical sciences and are looking for a one-stop book for reference or revision on the run, then this book is a good option. For its convenience and conciseness, OHMS2e is hard to beat. The USMLE crammers, like First Aid, offer analogous coverage at an equivalent price, but carrying one in your pocket isn’t an option. But beware: as far as OHMS2e is concerned, the muscles of mastication are still innervated by CN VIII. Now where is my anatomy book?

References

[1] Myers RM, Stamatoyannopoulos J, Snyder M, Dunham I, Hardison RC, Bernstein BE, et al. A user’s guide to the encyclopedia of DNA elements (ENCODE). PLoS Biol 2011 Apr;9(4):e1001046.

Categories
Feature Articles Articles

Is there a role for end-of-life care pathways for patients in the home setting who are supported with community palliative care services?

The concept of a “good death” has developed immensely over the past few decades and we now recognise the important role of palliative care services in healthcare for the dying, our most vulnerable population. [1-3] In palliative care, end-of-life care pathways have been developed to transfer the gold standard hospice model of care for the dying to other settings, addressing the physical, psychosocial and practical issues surrounding death. [1,4] Currently, these frameworks are used in hospitals and residential aged-care facilities across Australia. [1] However, there is great potential for these pathways to be introduced into the home setting with support from community palliative care services. This could help facilitate a good death for these patients in the comfort of their own home, and also support their families through the grieving process.

Although there is no one definition of a “good death”, many studies have examined factors considered important at the end-of-life by patients and their families. Current literature acknowledges that terminally ill patients highly value adequate pain and symptom management, avoidance of prolongation of death, preparation for end-of-life, relieving the burden imposed on their loved ones, spirituality, and strengthening relationships with health professionals through acknowledgement of imminent death. [2] Interestingly, the Steinhauser study noted a substantial disparity in views on spirituality between physicians and patients. [3] Physicians were found to rank good symptom control as most important, whilst patients considered spiritual issues to hold equal significance. These studies highlight the individual nature of end-of-life care, which reflects why the holistic approach of palliative care can improve the quality of care provided.

It is recognised that patients with life-limiting illnesses have complex needs that often require a multidisciplinary approach with multiple care providers. [1] However, an increased number of team members also creates its own challenges, and despite the best intentions, care can often become fragmented due to poor interdisciplinary communication. [5] This can lead to substandard end-of-life care with patients suffering prolonged and painful deaths, and receiving unwanted, expensive and invasive care, as demonstrated by the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). [6] Temel et al. also demonstrated that palliative care can improve the documentation of advanced care directives. [7] For terminally ill patients, this is essential in clarifying and enabling patients’ wishes regarding end-of-life to be respected.

In 2010, Temel et al. conducted a randomised controlled trial in patients with newly diagnosed metastatic non-small-cell lung cancer, comparing palliative care plus standard oncologic therapy with standard oncologic therapy alone. [7] Results demonstrated that the palliative care intervention improved quality of life and reduced rates of depression, consistent with existing literature. [7] Furthermore, despite receiving less aggressive end-of-life care, patients with additional early involvement of palliative care services had a significant prolongation of life, averaging 2.7 months (p = 0.02). [7] This 30% survival benefit is equivalent to that achieved with a response to standard chemotherapy regimens, which has profound significance for patients with metastatic disease. [7] This study thereby validates the benefits of early palliative care intervention in oncology patients. In addition, early palliative intervention encourages advance care planning, allowing treating teams to elicit and acknowledge patient preferences regarding end-of-life care.

Physicians often find it difficult to discuss poor prognoses with patients, potentially leaving patients and their families unaware of their terminal condition, despite death being anticipated by the treating team. [1,4] Many health care professionals are uncomfortable discussing death and dying, citing lack of training and fear of upsetting the patient. [8] Regardless, patients are entitled to be informed and supported through this difficult time. In addition, terminal patients and their caregivers are often neglected in decisions about their care, [9] despite their fundamental legal and ethical right to be involved, and studies indicate that they often want to be included in such discussions. [1,10,11] With the multitude of patient values and preferences for care, it can often be difficult to standardise the care provided. End-of-life care pathways encourage discussion of prognosis, facilitating communication that allows patients’ needs to be identified and addressed systematically and collaboratively. [1]

End-of-life care pathways provide a systematic approach and a standardised level of care for patients in the terminal phase of their illness. [1] This framework includes documentation of discussion with the patient and carers of the multi-disciplinary consensus that death is now imminent and life-prolonging treatment is futile, and also provides management strategies to address the individual needs of the dying. There is limited evidence to support the use of end-of-life care pathways; however, we cannot discount the substantial anecdotal benefits. [1,12] The lack of high-quality studies indicates a need for further research. [1,12] When used in conjunction with clinical judgment, these pathways can lead to benefits such as: improved symptom control, earlier acknowledgement of terminal prognosis by the patient and family, prescription of medications for end-of-life, and aiding the grieving process for relatives. [1,12,13] As such, end-of-life care pathways are highly regarded in palliative care, transferring the benchmarked hospice model of care of the dying into other settings, [14] and have been widely implemented nationally and internationally. [1]

The most recognised and commonly used end-of-life care pathway is the Liverpool Care Pathway (LCP), which was developed in the United Kingdom to transfer the hospice model of care for the dying to other care settings. [13,15] It has been implemented into hospices, hospitals and aged care facilities, and addresses the physical, psychosocial and spiritual needs of these patients. [1,13,15] In 2008, Verbeek et al. examined the effect of the LCP pre- and post-implementation on patients from hospital, aged care and home settings. [13] Results demonstrated improved documentation and reduced symptom burden as assessed by nurses and relatives, in comparison with the baseline period. [13] Although increased documentation does not necessarily equate to better care, high-quality medical records are essential to facilitate communication between team members and ensure quality care is provided. In this study, staff also reported that they felt the LCP provided a structure to patient care, assisted the anticipation of problems, and promoted proactive management of patient comfort. [13] The LCP has significantly increased the awareness of good terminal care, and has provided a model for the end-of-life care pathways currently in use in hospitals and institutions throughout Australia. [1,4]

Community palliative care services support terminally ill patients at home in order to retain a high quality of life. Recognising the holistic principles of palliative care, these multidisciplinary teams provide medical and nursing care, counselling, spiritual support and welfare supports. In the Brumley trial, which evaluated an in-home palliative care intervention with a multidisciplinary team for homebound terminally ill patients, results demonstrated that the intervention group had greater satisfaction with care, were less likely to visit the emergency department, and were more likely to die in the comfort of their own home. [16] These results suggest that the community palliative care team provided a high standard of care in which symptoms were well managed and did not require more aggressive intervention. This prevented unnecessary emergency presentations and potential distress for the patient and family, and allowed better use of resources. This study demonstrates that community palliative care services can significantly improve the quality of care for patients living at home with life-limiting illnesses; however, there is still scope for improvement in the current healthcare system.

End-of-life care pathways are regarded as best practice in guiding care for patients where death is imminent. [1] In Australia, a number of these frameworks have been implemented in hospitals and aged-care facilities, demonstrating an improvement in the quality of care in these settings. However, there are also many terminally ill patients who choose to reside in the comfort of their own home, supported by community palliative care services. End-of-life care pathways support a high standard of care, which should be available to all patients, irrespective of where they choose to die. As such, there may be a role for end-of-life care pathways in the home setting, supported by community palliative care services. Introducing an already-implemented local end-of-life care pathway into the community has great potential to reap similar benefits. Initially, these frameworks would be implemented by the community palliative care team; however, caregivers could be educated and empowered to participate in the ongoing care. This could be a useful means to facilitate communication between treating team members and family, and also empower the patient and family to become more involved in their care.

The potential benefits of implementing end-of-life care pathways into community palliative care services include those currently demonstrated in the hospital and aged-care settings; however, there are potentially further positive effects. By introducing these frameworks into the homes of terminally ill patients, caregivers can also be encouraged to take a more active role in the care of their loved ones. This indirect education for the patient and family can provide a sense of empowerment, and assist them to make informed decisions. Additional potential benefits of these pathways include a reduction in the number of hospital admissions and emergency department presentations, which would reduce the pressures on our already overburdened acute care services. Empowered family and carers could also assist with monitoring, providing regular updates to the community palliative care team, which could potentially lead to earlier detection of when more specialised care is required. The documentation within the pathways could also allow for a smoother transition to hospices if required, and prevent unnecessary prolongation of death. This may translate to prevention of significant emotional distress for the patient and family in an already difficult time, and promote more effective use of limited hospital resources. Integrating end-of-life care pathways into community palliative care services has many potential benefits for patients at home with terminal illnesses, and should be considered as an option to improve the delivery of care.

Palliative care can significantly improve the quality of care provided to patients in the terminal phase, and this care can be guided by end-of-life care pathways. Evidence suggests that these pathways encourage a multidisciplinary change in practice that facilitates a “good death”, and supports the family through the bereavement period. In the community, this framework has the potential to empower patients and their caregivers, and assist them to make informed decisions regarding their end-of-life care, thereby preventing unwanted aggressive intervention and unnecessary prolongation of death. However, there is a need for further high-quality studies to validate the anecdotal benefits of these pathways, with potential for a randomised controlled trial investigating the use of end-of-life care pathways in the home setting in Australia. In conclusion, the introduction of end-of-life care pathways into community palliative care services has great potential, particularly if supported and used in conjunction with specialist palliative care teams.

Acknowledgements

I would like to acknowledge Dr Leeroy William from McCulloch House, Monash Medical Centre for his support in developing this proposal, and Andrew Yap for his editorial assistance.

Conflicts of interest

None declared.

Correspondence

A Vo: amanda.vo@southernhealth.org.au

Categories
Feature Articles Articles

Immunology beyond a textbook: Psychoneuroimmunology and its clinical relevance for psychological stress and depression

Our medical studies encompass many areas of medical science, and immunology is just one example. Traditionally, we have been taught that our immune system exists to protect us from pathogens; however, in recent years this romantic view of the immune system has been challenged, and it is now well recognised that it is also involved in whole-body homeostasis and cross-talks with other regulatory systems of the body. This is the notion of psychoneuroimmunology (PNI). This text will briefly review the current understanding of PNI and how it features prominently in clinical practice as part of the ‘whole person’ model of patient care, especially in terms of stress and depression. With this in mind, PNI is an emerging medical discipline that warrants integration and consideration in future medical care and practice.

Introduction

At first glance, immunology may be viewed by some as an esoteric medical science that simply provides us with the molecular and cellular mechanisms of disease and immunity. It is a subject that all medical students have to face and no doubt can find quite challenging as well. Yet, in recent times, its role in helping us understand mental health and why individuals behave in certain ways has become increasingly appreciated. [1,2] The novel area of study that attempts to explain this intricate and convoluted relationship between the mind, behaviour, nervous system, endocrine system and finally the immune system is, quite appropriately, termed psychoneuroimmunology (PNI) or sometimes psychoendoneuroimmunology. [3] This was probably something that was never mentioned during our studies because it is quite radical and somewhat ambiguous. So what, then, is PNI all about and why is it important?

Many of us may have come across patients who epitomise the association between mental disturbances and physical manifestations of disease. Indeed, it is this biopsychosocial model that is well documented and instilled into the minds of medical students. [4-7] The mechanism behind this, although perhaps best left to the scientists, is nonetheless interesting for medical students to know and appreciate. This is PNI.

The basic science of psychoneuroimmunology

History

The notion that behaviour and the manifestation of disease were linked was probably first raised by Galen (129-199 AD), who noticed that melancholic women were more likely to develop breast cancer than sanguine women. [8] The modern push for PNI probably began in the 1920s to 1930s, when Metal’nikov and colleagues conducted several preliminary experiments in various animals showing that the immune system can be classically conditioned. [9] New interest in this area was established by Solomon et al. who, in 1964, coined the term ‘psychoimmunology’ [10]; however, the concept of PNI was firmly established by the American behavioural scientist Dr Robert Ader in his revolutionary 1981 book, ‘Psychoneuroimmunology.’ This book described the dynamic molecular and clinical manifestations of PNI through various early experiments. [11,12] In one initial experiment, Ader and fellow researchers successfully demonstrated, similarly to Metal’nikov, that the immune system can be conditioned. After pairing saccharin with the immunosuppressive agent cyclophosphamide and administering this to some rats, they found that saccharin administration alone, at a later date, was able to induce an immunosuppressive state, marked by reduced titres of haemagglutinating antibodies to injected sheep erythrocytes. [13]

The authors postulated that non-specific stress associated with the conditioning process would have elicited such a result. By extension and based on earlier research, [14] the authors believed psychological, emotional or physical stress probably act through hypothalamic pathways to induce immunomodulation which manifests itself in various ways. [13]

Stress, depression and PNI

A prominent aspect of PNI focuses on the bi-directional relationship between the immune system and stress and depression, where one affects the other. [4,15] The precise mechanisms are complicated but are ultimately characterised by the stress-induced dysregulation (either activation or depression) of the hypothalamic-pituitary-adrenal (HPA) and sympathetic-adrenal-medullary (SAM) axes. [16] Because of the pleiotropic effects of these hormones, they can induce a dysfunctional immune system, partly through modulating the concentration of certain cytokines in the blood. [15] Endocrine and autonomic pathways upregulate pro-inflammatory cytokines (such as interleukin (IL)-1β, IL-6 and tumour necrosis factor-α (TNF-α)) that can exert their effects on the brain through direct (i.e., circumventricular organs) and indirect access ports (via afferent nerve fibres). [17,18] These pro-inflammatory cytokines stimulate and activate the HPA axis, leading to the rapid production of corticotropin-releasing hormone. [19-21] Eventually, cortisol is produced which, in turn, suppresses the pro-inflammatory cytokines. Interestingly, receptors for these cytokines have also been found on the pituitary and adrenal glands, allowing neuroendocrine signals to be integrated at all three levels of the HPA axis. [21,22] Cortisol also has significant effects on mood, behaviour and cognition. On a short-term basis it may be beneficial, making an animal more alert and responsive; however, prolonged elevation may give rise to impaired cognition, fatigue and apathy. [23]

In the brain, an active role is played by the once-thought insignificant glial cells, which participate at the so-called tripartite synapse (a glial cell plus the pre- and post-synaptic neurons). [24] It is this unit that is fundamental to much of the central nervous system activity of the PNI system. Pro-inflammatory cytokines like interferon (IFN)-α and IL-1β, released from peripheral and central (microglia and astrocytes) sources, can alter dopaminergic signalling, basal ganglial circuitry, hippocampal functioning and so on, inducing behavioural changes such as anhedonia and memory impairment. [18,25] Since IFN-α receptors have been found on microglia in the brain, [26] IFN-α likely also causes further local inflammation and further disruption of dopaminergic signals. Microglia excessively activated by a range of inflammatory cytokines can therefore cause direct neurotoxicity and neuropathology. [27] Additionally, these cytokines can induce activity of the enzyme indoleamine 2,3-dioxygenase (found in astrocytes and microglia), which metabolises tryptophan, the precursor of serotonin. The result is a reduction in serotonin and the production of various products, including quinolinic acid, an NMDA (N-methyl-D-aspartate) receptor agonist, leading to excess glutamate and neurodegeneration. These mechanisms are postulated to contribute to the pathogenesis of depression, although the precise pathways are yet to be fully elucidated. [28-30]

Recent research into behavioural epigenetics has also provided an additional interesting link whereby stressors to the psychosocial environment can modulate gene expression within the neuroimmune, physiological and behavioural internal environments. This may account for the long-term aforementioned changes in immune function. [31]

Depression has also been shown to activate the HPA and SAM axes through inflammatory processes, [28,32] which in turn exacerbates any pre-existing depressive behaviours. [33] This inflammatory theory of depression sheds light on the complicated pathophysiology of depression, adding to the already well-characterised theory of serotonergic neurotransmission deficiency. [28,33] Interestingly, pro-inflammatory cytokines have been shown to modulate serotonergic activity in the brain as well, [34,35] which provides further insight into this complex disorder. There is a question as to whether this may have its roots in evolution, whereby the body diverts energy resources away from other areas to the immune system to promote anti-pathogenic activity during stress and depression. [17] For instance, with the threat of an injury or wound in an acute situation (the stressor), cortisol (a natural immunosuppressant) would be released via the HPA axis. This aids energy conservation while, somewhat paradoxically, attempting to minimise the unhelpful effect of immunosuppression at times of infection risk. [17] Depressive behaviour such as lethargy has also been said to have stemmed from the need to conserve energy to promote fever and inflammation. [2] Ultimately, the evolutionary aspects of PNI are under current speculation and investigation to elicit the precise links and relationships. [36]

The alterations of the immune system in stress and depression have implications for other areas of medicine as well. Though conclusive clinical experiments are lacking, it has been strongly hypothesised that this imbalanced immune state can contribute to a plethora of medical ailments. Depression, characterised by a general pro-inflammatory state with oxidative and nitrosative stress, [33,37] can contribute to poor wound healing and exacerbate chronic infections and pain. [38,39] Stress similarly entails a dysregulated immune system and may contribute to the aforementioned conditions as well as cardiovascular disease and minor infectious diseases such as the common cold. [40-44] The link with cancer is somewhat more controversial, but both stress and depression may, in some way, predispose to its development through numerous mechanisms such as reduced surveillance by immune cells (cytotoxic T cells and natural killer cells), general inflammation and genomic instability. [45,46]

Highlighting the bidirectionality of the PNI paradigm, secondary inflammation caused by a myriad of neurological diseases (e.g., Huntington’s disease, Alzheimer’s disease) and local and systemic disorders (e.g., systemic lupus erythematosus, stroke, cardiovascular disease and diabetes mellitus) may very well contribute to the pathogenesis of co-existing depression. [47] This may account for the close association of depression with such diseases. Underlying neurochemical changes have been observed in many of these diseases (especially the neurological examples), and it has been suggested that vulnerability to depression is proportional to how well one can ‘adapt’ to these neurochemical imbalances. [48,49]

From an immunophysiological point of view, these links certainly make sense; but it is important to note that there could be other confounding factors, such as increased alcohol consumption and other behaviours that accompany stress and depression, which can contribute to pathology. [50] The question therefore remains as to how large a role the mind plays in the pathogenesis of physical ailments. Figure 1 summarises the general PNI model as it relates to stress and depression.

Implications

Having explored the discipline of PNI, what is its importance for clinical practice? Given the links between stress and depression, altered immunity, behaviour and other ill-effects, [3,12] it seems fitting that if we can address a patient’s underlying stress or depression, we may be able to improve the course of their illness or prevent, to a certain extent, the onset of certain diseases by correcting immune system dysregulation. [43]

Simply acknowledging the role of stress and depression in the pathogenesis, maintenance and susceptibility of disease is certainly not enough; healthcare professionals should consider the mental state of every patient who presents before them. It is fortunate, then, that a myriad of simple stress-management strategies can be employed to improve patients’ mental welfare, depending on their individual circumstances. Such strategies include various relaxation techniques, meditation, tai chi, hypnosis and mindfulness practice. These have, importantly, proven cost-effective and promote self-care and self-efficacy. [51,52]

As an example, mindfulness has received considerable attention for its role in alleviating stress and depression. [52] Defined as increased awareness of, and attention to, present, moment-to-moment thoughts and experiences, mindfulness therapy has shown remarkable efficacy in the promotion of positive mental states and quality of life. [52-54] This is particularly important in this age of chronic diseases and their associated unwelcome psychological consequences. [54] Furthermore, and in light of the discussion above on PNI, there is evidence that mindfulness practice induces physiological responses in brain and immune function. [55,56] This suggests that its benefits are mediated, at least in part, through such positive immunological alterations that modulate disease processes.

With the growing understanding of the cellular and molecular mechanisms behind stress, depression and other similar psychiatric disorders, a host of novel pharmacological interventions targeting the previously discussed biological pathways are actively being researched. Most notable is the proposed role of anti-inflammatories in ameliorating such conditions where patients present in an increased inflammatory state. This is largely based on experimental work in which antagonists to pro-inflammatory cytokines and/or their receptors improve sickness behaviours in animals. [17] As an example, the cholesterol-lowering statins have been found to have intrinsic anti-inflammatory and antioxidant properties. In a study of patients taking statins for cardiovascular disease, statins were found to have a substantial protective effect on the risk of developing depression. This suggests that the drug acts, at least in part, by decreasing the systemic inflammatory and oxidative processes that characterise depression. [57] Other drugs being researched aim to tackle additional pathways, such as those involving neurotransmitters and their receptors.

Within the neuroendocrine arm of PNI, current research is looking at ways to reverse HPA axis activation. [20] Some drugs that act on specific parts of the HPA axis show promise; however, a major problem is tailoring the correct drug to the correct patient, for not all patients present with the same neuroendocrine profile. [58,59] Neuroendocrine manipulation can also be used to treat, or act as an adjunct in, other non-HPA axis-mediated diseases. For example, administration of melatonin and IL-2 was able to increase survival time in patients with certain solid tumours. [60] Needless to say, a great deal of further research is warranted to test and understand possible pharmaceutical agents.

Discussion and Conclusion

The exciting and revolutionary field of PNI has provided us with the internal links between all the major regulating systems of the human body. The complex interactions that take place are, indeed, a tribute to the complexity of our design, and provide a basis for understanding how our mind and behaviour can influence our physical health. As a result, serious stressors, be they emotional, mental or physical, can wreak havoc on our delicate internal environment and predispose to physical ailments, which can further exacerbate the inciting stressors and our mental state. It therefore seems appropriate that if healthcare professionals can ameliorate the severity of psychological stress or depression, they may be able to further improve the physical health of an individual. How much so is a matter of debate and further investigation. Conversely, as demonstrated by the bidirectionality of the PNI model, addressing or ‘fixing’ the organic pathology may be conducive to patients’ mental state.

Whilst clinical approaches have been sharply juxtaposed with a largely theoretical and scientific review of PNI, this has been done deliberately to demonstrate how mind-body therapies can exert their physical benefits. Accordingly, valued mind-body therapies deserve as much attention as the scientific study of molecular pharmacology. It is also important to note that these two approaches (pharmacology and mind-body therapies) are almost certainly the tip of the iceberg; there is a vast amount more to be explored in our therapeutic approach to medical conditions. For example, how does the practitioner-patient relationship fit into this grand scheme, and how much of a role does it play? No doubt a considerable one. Furthermore, whilst the PNI framework provides good foundations with which to explain, at a basic level, the mechanisms behind the development of stress, depression and associated ailments, further insight is needed into their biological basis. For example, a symphony of intricate factors (such as the up-regulation of inflammation-induced enzymes, neurotransmitter changes, dysfunction of intracellular signalling, induced autoimmune activity, neurodegeneration and decreased serum levels of antioxidants and zinc) underlies the signs and symptoms of depression. [61,62] Thus, the complex pathogenesis of psychological stress and depression begs further clinical and scientific research to unravel its mysteries. Nevertheless, with a sound basis behind mindfulness, other similar mind-body therapies and novel pharmacological approaches, it seems suitable for these to be further integrated into primary care [54] and other areas of medicine as adjuvants to current treatments. If we can achieve this, then medicine will undoubtedly have more potent tools in its armamentarium of strategies to address and alleviate the growing burden of chronic disease.

Acknowledgements

My thanks go to Dr E Warnecke and Prof S Pridmore for their support.

Conflicts of interest

None declared.

Correspondence

A Lee: adrian.lee@utas.edu.au