
Paediatric regional anaesthesia: comparing caudal anaesthesia and ilioinguinal block for paediatric inguinal herniotomy

Caudal anaesthesia and ilioinguinal block are effective, safe anaesthetic techniques for paediatric inguinal herniotomy. This review article aims to educate medical students about these techniques by examining their safety and efficacy in paediatric surgery, as well as discussing the relevant anatomy and pharmacology. The roles of general anaesthesia in combination with regional anaesthesia, and that of awake regional anaesthesia, are discussed, as is the administration of caudal adjuvants and concomitant intravenous opioid analgesia.

Introduction

Inguinal hernia is a common paediatric condition, occurring in approximately 2% of infant males, with a slightly lower incidence in females, [1] and an incidence as high as 9-11% in premature infants. [2] Inguinal herniotomy, the reparative operation, is most commonly performed under general anaesthesia combined with regional anaesthesia; however, some experts in caudal anaesthesia perform the procedure under awake regional anaesthesia alone. Regional anaesthesia can be provided via the epidural (usually caudal) or spinal routes, or by blocking peripheral nerves with local anaesthetic agents. The relevant techniques and anatomy are discussed, as are side effects, safety considerations and the pharmacology of the most commonly used local anaesthetics. The roles of general anaesthesia, awake regional anaesthesia and adjuvants in regional anaesthesia are also discussed, with particular focus on future developments in these fields.

Anatomy and technique

The surgical field for inguinal herniotomy is supplied by the ilioinguinal and iliohypogastric nerves, arising from the first lumbar spinal root, as well as by the lower intercostal nerves, arising from T11 and T12. [3] Caudal anaesthesia is provided by placing local anaesthetic agents into the epidural space via the caudal route. The agent then diffuses across the dura to anaesthetise the ventral rami, which carry the sensory (and motor) supply; the level of anaesthesia therefore needs to reach the lower thoracic region to be effective. The caudal block is usually performed after the induction of general anaesthesia. With the patient lying in the left lateral position, the thumb and middle finger of the anaesthetist’s left hand are placed on the two posterior superior iliac spines, and the index finger palpates the spinous process of the S4 vertebra. [4] Using sterile technique, a needle is inserted through the sacral hiatus to pierce the sacrococcygeal ligament, which is continuous with the ligamentum flavum (Figure 1). Correct placement of the needle can be confirmed by the “feel” of the needle passing through the ligament, the ease of injection and, if used, the ease of passing a catheter through the needle. The absence of spontaneous reflux, or aspiration, of cerebrospinal fluid or blood should be confirmed before drugs are injected into the sacral canal, which is continuous with the lumbar epidural space. [5]

Ilioinguinal block is achieved by using sterile technique to insert a needle inferomedial to the anterior superior iliac spine and injecting local anaesthetic between the external oblique and internal oblique muscles, and between the internal oblique and the transversus abdominis. [6] These injections cover the ilioinguinal, iliohypogastric and lower intercostal nerves, anaesthetising the operative field, including the inguinal sac. [3] Commonly, these nerves are blocked by the surgeon intraoperatively, when local anaesthetic can be applied directly to the nerves. Ultrasound guidance has enabled more accurate placement of injections, allowing lower doses to be used [7] and improving success rates, [8] leading to something of a resurgence of the technique. [4]

Pharmacological aspects

Considerable discussion has arisen regarding which local anaesthetic agent is the best choice for caudal anaesthesia: bupivacaine or the newer pure left-isomers, levobupivacaine and ropivacaine. A review by Casati and Putzu examined the evidence regarding the toxicology and potency of these newer agents in both animal and human studies. Despite conflicting results in the literature, the review ultimately suggested that there is only a very small difference in potency between the agents: bupivacaine is slightly more potent than levobupivacaine, which is slightly more potent than ropivacaine. [9] Breschan et al. suggested that a caudal dose of 1 mL/kg of 0.2% levobupivacaine or ropivacaine produced less post-operative motor blockade than 1 mL/kg of 0.2% bupivacaine. [10] This result could be consistent with mild underdosing of the former two agents in light of their lesser potency, rather than with intrinsic differences in motor effect. Doses for ilioinguinal nerve block are variable, given the blind technique commonly employed and the need to obtain adequate analgesia. Despite this, the maximum recommended single-shot dose is the same for all three agents: neonates should not exceed 2 mg/kg, and children should not exceed 2.5 mg/kg. [11] Despite multiple studies showing minimal yet statistically significant differences, all three agents are comparably effective local anaesthetics. [9]

When examining the toxicity of the three agents discussed above, Casati and Putzu reported that the newer agents (ropivacaine and levobupivacaine) are less toxic than bupivacaine: higher plasma concentrations are required before signs of CNS toxicity appear, and less cardiovascular toxicity occurs at the levels that induce CNS toxicity. [9] Bozkurt et al. determined that a caudal dose of 0.5 mL/kg of 0.25% (effectively 1.25 mg/kg) bupivacaine or ropivacaine resulted in peak plasma concentrations of 46.8 ± 17.1 ng/mL and 61.2 ± 8.2 ng/mL, respectively. These are well below the levels at which toxic effects appear for bupivacaine and ropivacaine, at 250 ng/mL and 150-600 ng/mL, respectively. [12] The larger doses required for epidural anaesthesia and peripheral nerve blocks carry an increased risk of systemic toxicity, so the lesser toxic potential of levobupivacaine and ropivacaine justifies their use over bupivacaine. [9,13] However, partly due to cost, bupivacaine remains in wide use today. [14]
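The “effectively 1.25 mg/kg” figure quoted above is simply the product of the injected volume and the drug concentration, since a 0.25% solution contains 2.5 mg/mL:

$$0.5\ \text{mL/kg} \times 2.5\ \text{mg/mL} = 1.25\ \text{mg/kg}$$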

Caudal anaesthesia requires consideration of two aspects of dose: concentration and volume. The volume of the injection controls the level to which anaesthesia occurs, as described by Armitage:

  • 0.5 mL/kg will cover sacral dermatomes, suitable for circumcision
  • 0.75 mL/kg will cover inguinal dermatomes, suitable for inguinal herniotomy
  • 1 mL/kg will cover up to T10, suitable for orchidopexy or umbilical herniotomy
  • 1.25 mL/kg will cover up to mid-thoracic dermatomes. [15]

It is important to ensure both an adequate amount of local anaesthetic (mg/kg) and an adequate volume for injection (mL/kg) are used.
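To make the interplay between injected volume (mL/kg), concentration and total dose (mg/kg) explicit, the sketch below restates the Armitage volumes and the maximum single-shot doses quoted above as a simple calculation. It is an illustration only, not a clinical dosing tool, and the worked example (a 10 kg child receiving a 0.25% solution) is hypothetical.

```python
# Illustrative sketch only - not a clinical dosing tool.
# Restates the Armitage volume regimen and the quoted maximum single-shot doses;
# the worked example (10 kg child, 0.25% solution) is hypothetical.

ARMITAGE_VOLUME_ML_PER_KG = {
    "sacral (circumcision)": 0.5,
    "inguinal (herniotomy)": 0.75,
    "T10 (orchidopexy, umbilical herniotomy)": 1.0,
    "mid-thoracic": 1.25,
}

MAX_SINGLE_SHOT_MG_PER_KG = {"neonate": 2.0, "child": 2.5}  # same for all three agents


def caudal_dose(weight_kg, block_level, concentration_pct, age_group):
    """Return injected volume (mL), total dose (mg) and whether the quoted maximum is respected."""
    volume_ml = ARMITAGE_VOLUME_ML_PER_KG[block_level] * weight_kg
    mg_per_ml = concentration_pct * 10  # a 0.25% solution contains 2.5 mg/mL
    dose_mg = volume_ml * mg_per_ml
    within_max = dose_mg <= MAX_SINGLE_SHOT_MG_PER_KG[age_group] * weight_kg
    return volume_ml, dose_mg, within_max


# Hypothetical example: 10 kg child, inguinal herniotomy, 0.25% solution
print(caudal_dose(10, "inguinal (herniotomy)", 0.25, "child"))
# -> (7.5, 18.75, True): 7.5 mL delivers 18.75 mg (1.875 mg/kg), below the 2.5 mg/kg maximum
```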

Efficacy of caudal and ilioinguinal blocks

Ilioinguinal block and caudal anaesthesia both provide excellent analgesia in the intraoperative and postoperative phases. Some authors suggest that ultrasound guidance in ilioinguinal block can increase the accuracy of needle placement, allowing a smaller dose of local anaesthetic to be used. [16] Thong et al. reviewed 82 cases of ilioinguinal block performed without ultrasonography and found success rates similar to those of other regional techniques; [17] however, this was a small study. Markham et al. used cardiovascular response as a surrogate marker for intraoperative pain and found no difference between the two techniques. [18] Other studies have shown that both techniques provide similarly effective analgesic profiles in terms of post-operative pain scores, [19] duration or quality of post-operative analgesia, [20] and post-operative morphine requirements. [21] Caudal anaesthesia has a success rate of up to 96%, [22] albeit with 25% of patients requiring more than one attempt. In contrast, blind ilioinguinal block has a success rate of approximately 72%. [23] Willschke et al. quoted success rates of 70-80%, which improved with ultrasound guidance. [16] In a small study combining the two techniques, Jagannathan et al. explored the role of ultrasound-guided ilioinguinal block after inguinal herniotomy performed under general anaesthesia with caudal block. With groups randomised to receive injections of normal saline or of bupivacaine with adrenaline, they found that the addition of a guided nerve block at the end of surgery significantly decreased post-operative pain scores in the bupivacaine with adrenaline group. [24] This suggests that the two techniques can be combined for post-operative analgesia. Ilioinguinal block is not suitable as the sole method of anaesthesia, as its success rate is highly variable and the block is not sufficient for surgical anaesthesia, whereas caudal block can be used as an awake regional anaesthetic technique. Both techniques are suitable for analgesia in the paediatric inguinal herniotomy setting.

Complications and side effects

Complications of caudal anaesthesia are rare, occurring in approximately 0.7 per 1000 cases. [5] However, some of these complications are serious and potentially fatal:

  • accidental dural puncture, leading to high spinal block
  • intravascular injection
  • infection and epidural abscess formation
  • epidural haematoma. [4,13]

A comprehensive review of 2,088 caudal anaesthesia cases identified 101 (4.8%) cases in which the dura was punctured, significant bleeding occurred or a blood vessel was penetrated; upon detection of any of these complications, the procedure was abandoned. [25] This is a relatively high incidence; however, these were situations in which potentially serious complications were identified before harm was done by injecting the local anaesthetic. The actual risk of harm is unknown, but is considered to be much lower than the incidence of these events. Polaner et al. reviewed 6,011 single-shot caudal blocks and identified 172 (2.9%) adverse events, including 18 positive test doses, 5 dural punctures, 38 vascular punctures, 71 abandoned blocks and 26 failed blocks. No serious complications were encountered, as each of these adverse events was detected early and managed. [26] Methods of minimising the risk of these complications include administering a test dose under ECG monitoring to detect inadvertent vascular injection (tachycardia will be seen) and monitoring for the rapid onset of anaesthesia that signals subarachnoid injection. [13] Ilioinguinal blocks, as with all peripheral nerve blocks, are inherently less risky than central blockade. Potential complications include:

  • infection and abscess formation
  • mechanical damage to the nerves.

More serious complications identified at case-report level include cases of:

  • retroperitoneal haematoma [27]
  • small bowel perforation [28]
  • large bowel perforation. [29]

Polaner et al. reviewed 737 ilioinguinal-iliohypogastric blocks, and found one adverse event (positive blood aspiration). [26] This low morbidity rate was attributed to the widespread use of ultrasound guidance. [26] A number of studies have examined the side effect profiles of both techniques:

  • Time to first micturition has conflicting evidence: Markham et al. suggest delayed first micturition with caudal anaesthesia compared to ilioinguinal block, [18] but others found no difference. [19,20]
  • Post-operative time to ambulation is similar. [18,19]
  • Post-operative vomiting has similar incidence, [18-20] and has been shown to be affected more by the accompanying method of general anaesthetic than the type of regional anaesthesia, with sevoflurane inhalation resulting in more post-operative vomiting than intravenous ketamine and propofol. [30]
  • Time in recovery bay post-herniotomy was 45 ± 15 minutes for caudal, and 40 ± 9 minutes (p<0.02) for ilioinguinal; [19] however, this statistically significant result has little effect on clinical practice.
  • Time to discharge (day surgery) was 176 ± 33 minutes for caudal block, and shorter for ilioinguinal block at 166 ± 26 minutes (p<0.02). [19] Again, these times are so similar as to have little practical effect.

These studies suggest that the two techniques have similar side effect incidences and postoperative recovery profiles, and that where differences exist, they are statistically but not clinically significant.
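To illustrate why such differences are statistically but not clinically meaningful, a rough standardised effect size can be computed from the recovery-bay times reported above (45 ± 15 versus 40 ± 9 minutes). [19] The calculation below is a sketch that assumes equal group sizes, which are not stated here.

```python
from math import sqrt

# Recovery-bay times (mean ± SD, minutes) reported above for the two techniques [19]
caudal_mean, caudal_sd = 45, 15
ilio_mean, ilio_sd = 40, 9

# Cohen's d using a pooled SD; equal group sizes are assumed for illustration only.
pooled_sd = sqrt((caudal_sd ** 2 + ilio_sd ** 2) / 2)
cohens_d = (caudal_mean - ilio_mean) / pooled_sd

print(f"pooled SD = {pooled_sd:.1f} min, Cohen's d = {cohens_d:.2f}")
# pooled SD = 12.4 min, Cohen's d = 0.40: a ~5 minute difference, small in practical terms
```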

Use of general anaesthesia in combination with caudal anaesthesia or ilioinguinal block

A topic of special interest is whether awake regional anaesthesia, rather than general anaesthesia, should be used. Although the great majority of inguinal herniotomies are performed under combined general and regional anaesthesia, the increased risk of post-operative apnoea in neonates after general anaesthesia (particularly in ex-low birth weight and preterm neonates) is often cited as a reason to avoid it, and awake regional anaesthesia is therefore touted as a safer alternative. As described above, ilioinguinal block is unsuitable for use as an awake technique, but awake caudal anaesthesia has been successfully described and practised. Geze et al. reported performing awake caudal anaesthesia in low birth weight neonates and found the technique to be safe; [31] however, this study examined only fifteen cases, and conclusions regarding safety drawn from such a small study are therefore limited. Other work in the area has also been limited by cohort size. [32-35] Lacrosse et al. noted the theoretical benefits of awake caudal anaesthesia for postoperative apnoea but recognised that additional sedation is often necessary; in a study of 98 patients, they found that caudal block with light general anaesthesia using sevoflurane was comparable in safety to caudal anaesthesia alone and had the benefit of offering better surgical conditions. [36] Additionally, ongoing concerns about the neurotoxicity of general anaesthetic agents to the developing brain need further evaluation before recommendations can be made. [37] More research is needed to fully explore the role and safety of awake caudal anaesthesia, [38] and it currently remains a highly specialised area of practice, limited mainly to high-risk infants. [39]

Adjuncts to local anaesthetics

There are many potential adjuncts to caudal anaesthesia, but ongoing concerns about their safety continue to limit their use. The effect of systemic opioid administration on the quality of caudal anaesthesia has been discussed in the literature. Somri et al. studied general anaesthesia and caudal block with and without intravenous fentanyl, measuring plasma adrenaline and noradrenaline at induction, at the end of surgery and in recovery as surrogate markers for pain and stress. They found that adding intravenous fentanyl resulted in no difference in plasma noradrenaline, and significantly lower plasma adrenaline only in recovery. [40] Somri et al. questioned the practical significance of the adrenaline result, noting no clinical difference in blood pressure, heart rate or end-tidal CO2; they therefore suggested that general anaesthesia with caudal anaesthesia adequately blocks the stress response and that there is no need for intraoperative fentanyl. [40] Interestingly, they also found no difference in post-operative analgesia requirements between the two groups. [40] Other authors noted no difference in analgesia for caudal anaesthesia with or without intravenous fentanyl, and found a significant increase in post-operative nausea and vomiting with fentanyl. [41] Khosravi et al. found that pre-induction tramadol with general anaesthesia was slightly superior to general anaesthesia with ilioinguinal block for post-operative pain relief after herniotomy, but suggested that the increased risk of nausea and vomiting outweighed the potential benefits. [42] Opioids have a limited role in caudal injection owing to side effects, including respiratory depression, nausea, vomiting and urinary retention. [43] Both ilioinguinal block and caudal block are effective on their own, and the routine inclusion of systemic opioids alongside regional techniques in inguinal herniotomy is unnecessary and potentially harmful. Adding opioids to the caudal injection carries risks that outweigh the potential benefits. [44]

Ketamine, particularly the S-enantiomer, which is more potent and has a lower incidence of agitation and hallucinations than racemic ketamine, [44] has been studied as an adjuvant for caudal anaesthesia. Mossetti et al. reviewed multiple studies and found that ketamine increased the efficacy of caudal anaesthesia when combined with a local anaesthetic, compared with local anaesthetic alone. [44] Similar results were found for clonidine. [44] This is consistent with other work comparing caudal ropivacaine with either clonidine or fentanyl as an adjuvant, which found that clonidine has a superior side effect profile. [45] However, the use of caudal adjuvants has been limited by concerns about potential neurotoxicity (reviewed by Jöhr and Berger). [4]

Adrenaline has been added to local anaesthetics to decrease the systemic absorption of short-acting agents and thus prolong the duration of blockade. Its sympathomimetic effects are also useful for identifying inadvertent intravascular injection, which results in increased heart rate and systolic blood pressure. The advent of longer-acting local anaesthetics has led to a decline in the use of adrenaline as an adjuvant, [44] and the validity of adrenaline test doses has been called into doubt. [46]

Summary and Conclusion

Both caudal and ilioinguinal blocks are effective, safe techniques for inguinal herniotomy (Table 1). With these techniques there is no need for routine intravenous opioid analgesia, reducing the incidence of opioid-related problems in the postoperative period. The role of ultrasound guidance will continue to evolve, bringing new levels of safety and efficacy to ilioinguinal blocks. Light general anaesthesia with regional blockade is considered the first-choice approach, with awake regional anaesthesia for herniotomy remaining a highly specialised field reserved for a select group of patients. However, ongoing concerns about neurotoxicity to the developing infant brain may fundamentally alter the neonatal anaesthesia landscape in the future.

Conflict of interest

None declared.

Acknowledgements

Associate Professor Rob McDougall, Deputy Director Anaesthesia and Pain Management, Royal Children’s Hospital Melbourne, for providing the initial inspiration for this review.

Correspondence

R Paul: r.paul@student.unimelb.edu.au

References

[1] King S, Beasley S. Surgical conditions in older children. In: South M, Isaacs D, editors. Practical Paediatrics. 7 ed. Australia: Churchill Livingstone Elsevier; 2012. p. 268-9.
[2] Dalens B, Veyckemans F. Anesthésie pédiatrique. Montpellier: Sauramps Médical; 2006.
[3] Brown K. The application of basic science to practical paediatric anaesthesia. Update in Anaesthesia. 2000(11).
[4] Jöhr M, Berger TM. Caudal blocks. Paediatr Anaesth. 2012;22(1):44-50.
[5] Raux O, Dadure C, Carr J, Rochette A, Capdevila X. Paediatric caudal anaesthesia. Update in Anaesthesia. 2010;26:32-6.
[6] Kundra P, Sivashanmugam T, Ravishankar M. Effect of needle insertion site on ilioinguinaliliohypogastric nerve block in children. Acta Anaesthesiol Scand. 2006;50(5):622-6.
[7] Willschke H, Bosenberg A, Marhofer P, Johnston S, Kettner S, Eichenberger U, et al. Ultrasonographic-guided ilioinguinal/iliohypogastric nerve block in pediatric anesthesia: What is the optimal volume? Anesth Analg. 2006;102(6):1680-4.
[8] Willschke H, Marhofer P, Machata AM, Lönnqvist PA. Current trends in paediatric regional anaesthesia. Anaesthesia Supplement. 2010;65:97-104.
[9] Casati A, Putzu M. Bupivacaine, levobupivacaine and ropivacaine: are they clinically different? Best Pract Res, Clin Anaesthesiol. 2005;19(2):247-68.
[10] Breschan C, Jost R, Krumpholz R, Schaumberger F, Stettner H, Marhofer P, et al. A prospective study comparing the analgesic efficacy of levobupivacaine, ropivacaine and bupivacaine in pediatric patients undergoing caudal blockade. Paediatr Anaesth. 2005;15(4):301-6.
[11] Howard R, Carter B, Curry J, Morton N, Rivett K, Rose M, et al. Analgesia review. Paediatr Anaesth. 2008;18:64-78.
[12] Bozkurt P, Arslan I, Bakan M, Cansever MS. Free plasma levels of bupivacaine and ropivacaine when used for caudal block in children. Eur J Anaesthesiol. 2005;22(8):640-1.
[13] Patel D. Epidural analgesia for children. Contin Educ Anaesth Crit Care Pain. 2006;6(2):63-6.
[14] Menzies R, Congreve K, Herodes V, Berg S, Mason DG. A survey of pediatric caudal extradural anesthesia practice. Paediatr Anaesth. 2009;19(9):829-36.
[15] Armitage EN. Local anaesthetic techniques for prevention of postoperative pain. Br J Anaesth. 1986;58(7):790-800.
[16] Willschke H, Marhofer P, Bösenberg A, Johnston S, Wanzel O, Cox SG, et al. Ultrasonography for ilioinguinal/iliohypogastric nerve blocks in children. Br J Anaesth. 2005;95(2):226.
[17] Thong SY, Lim SL, Ng ASB. Retrospective review of ilioinguinal-iliohypogastric nerve block with general anesthesia for herniotomy in ex-premature neonates. Paediatr Anaesth. 2011;21(11):1109-13.
[18] Markham SJ, Tomlinson J, Hain WR. Ilioinguinal nerve block in children. A comparison with caudal block for intra and postoperative analgesia. Anaesthesia. 1986;41(11):1098-103.
[19] Splinter WM, Bass J, Komocar L. Regional anaesthesia for hernia repair in children: local vs caudal anaesthesia. Can J Anaesth. 1995;42(3):197-200.
[20] Cross GD, Barrett RF. Comparison of two regional techniques for postoperative analgesia in children following herniotomy and orchidopexy. Anaesthesia. 1987;42(8):845-9.
[21] Scott AD, Phillips A, White JB, Stow PJ. Analgesia following inguinal herniotomy or orchidopexy in children: a comparison of caudal and regional blockade. J R Coll Surg Edinb. 1989;34(3):143-5.
[22] Dalens B, Hasnaoui A. Caudal anesthesia in pediatric surgery: success rate and adverse effects in 750 consecutive patients. Anesth Analg. 1989;68(2):83-9.
[23] Lim S, Ng Sb A, Tan G. Ilioinguinal and iliohypogastric nerve block revisited: single shot versus double shot technique for hernia repair in children. Paediatr Anaesth. 2002;12(3):255.
[24] Jagannathan N, Sohn L, Sawardekar A, Ambrosy A, Hagerty J, Chin A, et al. Unilateral groin surgery in children: will the addition of an ultrasound-guided ilioinguinal nerve block enhance the duration of analgesia of a single-shot caudal block? Paediatr Anaesth. 2009;19(9):892-8.
[25] Beyaz S, Tokgöz O, Tüfek A. Caudal epidural block in children and infants: retrospective analysis of 2088 cases. Ann Saudi Med. 2011;31(5):494-7.
[26] Polaner DM, Taenzer AH, Walker BJ, Bosenberg A, Krane EJ, Suresh S, et al. Pediatric regional anesthesia network (PRAN): a multi-Institutional study of the use and incidence of complications of pediatric regional anesthesia. Anesth Analg. 2012;115(6):1353-64.
[27] Parvaiz MA, Korwar V, McArthur D, Claxton A, Dyer J, Isgar B. Large retroperitoneal haematoma: an unexpected complication of ilioinguinal nerve block for inguinal hernia repair. Anaesthesia. 2012;67(1):80-1.
[28] Amory C, Mariscal A, Guyot E, Chauvet P, Leon A, Poli-Merol ML. Is ilioinguinal/iliohypogastric nerve block always totally safe in children? Paediatr Anaesth. 2003;13(2):164-6.
[29] Jöhr M, Sossai R. Colonic puncture during ilioinguinal nerve block in a child. Anesth Analg. 1999;88(5):1051-2.

[30] Sarti A, Busoni P, Dellfoste C, Bussolin L. Incidence of vomiting in susceptible children under regional analgesia with two different anaesthetic techniques. Paediatr Anaesth. 2004;14(3):251-5.
[31] Geze S, Imamoglu M, Cekic B. Awake caudal anesthesia for inguinal hernia operations. Successful use in low birth weight neonates. Anaesthesist. 2011;60(9):841-4.
[32] Krane E, Haberkern C, Jacobson L. Postoperative apnea, bradycardia, and oxygen desaturation in formerly premature infants: prospective comparison of spinal and general anesthesia. Anesth Analg 1995;80:7-13.
[33] Somri M, Gaitini L, Vaida S, Collins G, Sabo E, Mogilner G. Postoperative outcome in high risk infants undergoing herniorrhaphy: comparison between spinal and general anaesthesia. Anaesthesia. 1998;53:762-6.
[34] Welborn L, Rice L, Hannallah R, Broadman L, Ruttiman U, Fink R. Postoperative apnea in former preterm infants: prospective comparison of spinal and general anesthesia. Anesthesiology. 1990;72:838-42.
[35] Williams J, Stoddart P, Williams S, Wolf A. Post-operative recovery after inguinal herniotomy in ex-premature infants: comparison between sevoflurane and spinal anaesthesia. Br J Anaesth. 2001;86:366-71.
[36] Lacrosse D, Pirotte T, Veyckemans F. Bloc caudal associé à une anesthésie au masque facial (sévoflurane) chez le nourrisson à haut risque d’apnée : étude observationnelle. Ann Fr Anesth Reanim. 2012;31(1):29-33.
[37] Davidson AJ. Anesthesia and neurotoxicity to the developing brain: the clinical relevance. Paediatr Anaesth. 2011;21(7):716-21.
[38] Craven PD, Badawi N, Henderson-Smart DJ, O’Brien M. Regional (spinal, epidural, caudal) versus general anaesthesia in preterm infants undergoing inguinal herniorrhaphy in early infancy. Cochrane Database of Systematic Reviews. 2003(3).
[39] Bouchut JC, Dubois R, Foussat C, Moussa M, Diot N, Delafosse C, et al. Evaluation of caudal anaesthesia performed in conscious ex-premature infants for inguinal herniotomies. Paediatr Anaesth. 2001;11(1):55-8.
[40] Somri M, Tome R, Teszler CB, Vaida SJ, Mogilner J, Shneeifi A, et al. Does adding intravenous fentanyl to caudal block in children enhance the efficacy of multimodal analgesia as reflected in the plasma level of catecholamines? Eur J Anaesthesiol. 2007;24(5):408-13.
[41] Kokinsky E, Nilsson K, Larsson L. Increased incidence of postoperative nausea and vomiting without additional analgesic effects when a low dose of intravenous fentanyl is combined with a caudal block. Paediatr Anaesth. 2003;13:334-8.
[42] Khosravi MB, Khezri S, Azemati S. Tramadol for pain relief in children undergoing herniotomy: a comparison with ilioinguinal and iliohypogastric blocks. Paediatr Anaesth. 2006;16(1):54-8.
[43] Lloyd-Thomas A, Howard R. A pain service for children. Paediatr Anaesth. 1994;4:3-15.
[44] Mossetti V, Vicchio N, Ivani G. Local anesthetics and adjuvants in pediatric regional anesthesia. Curr Drug Targets. 2012;13(7):952-60.
[45] Shukla U, Prabhakar T, Malhotra K. Postoperative analgesia in children when using clonidine or fentanyl with ropivacaine given caudally. J Anaesthesiol, Clin Pharmacol. 2011;27(2):205-10.
[46] Tobias JD. Caudal epidural block: a review of test dosing and recognition of systemic injection in children. Anesth Analg. 2001;93(5):1156-61.


The benefits associated with male HPV vaccination in Australia

Background: Human papillomavirus (HPV) is a family of highly contagious, sexually transmitted viruses associated with the development of genital warts and certain HPV-related cancers in males and females. After conducting a cost-effectiveness analysis, the Australian Government has decided to expand the school-based, female-only HPV vaccination program to include males, commencing in 2013. Methods: A search of Ovid MEDLINE, The Cochrane Library, Google Scholar, BMJ Journals, and JSTOR was undertaken. Discussion: HPV vaccination has a high safety profile with sustained efficacy. Male vaccination will not only offer immunity to its recipients but also provide indirect protection to both sexes and to high-risk groups through herd immunity. The high-risk HPV types 16 and 18 included in the vaccine are associated with more than 70% of cervical cancers, 80% of anal cancers, 25% of penile cancers and 31% of oropharyngeal cancers worldwide. The quadrivalent vaccine also covers HPV 6 and 11, which are responsible for 90% of genital warts. Conclusion: Robust monitoring and surveillance systems are in place that will enable Australia to quantify the impact of HPV vaccination in the future. Models show that, if vaccination rates for boys reach the levels attained by girls in 2011, rates of HPV infection will fall by an additional 24% by 2050 compared with female vaccination alone. This will result in a significant decrease in the clinical burden of HPV-related diseases, the associated costs of treatment, and the psychological trauma which often accompanies the diagnosis of an HPV-related condition.

Introduction

Human papillomavirus (HPV) is a highly contagious family of viruses with over 150 distinct genotypes. [1] The virus infects the squamous epithelium in both males and females, with over 40 genotypes affecting the anogenital region. [2-4] HPV is usually a transient, asymptomatic infection which is transmitted through skin-to-skin contact associated with sexual activity, and the risk of infection increases with a greater number of sexual partners. [2-5] HPV is also the most common sexually transmitted infection (STI), [6] with up to 80% of people being infected with at least one type of genital HPV in their lifetime. [3,7,8]

There is a proven association between persistent HPV infection and the development of pre-cancerous lesions (cervical intraepithelial neoplasia, CIN) and cancerous lesions in females. [7] Australia was the first of many countries to create a National HPV Vaccination Program for females, and has provided school-based HPV vaccination to 12-13 year old girls since 2007. [9,10] Males are expected to join their female counterparts from February 2013. [11,12]

Australia provides this vaccination in the form of the quadrivalent Gardasil® vaccine, which covers four types of HPV (6, 11, 16 and 18). [8] In women, although there are many ‘high risk’ types, HPV 16 and 18 alone are associated with 70% of cervical cancers [2,3,13] and 32% of vaginal cancers worldwide. [14] In men and women, those two types also contribute to over 80% of anal cancers, 24% of oral cancers, and 31% of oropharyngeal cancers. [6,14] Furthermore, HPV 16 and 18 account for 90% of all HPV-attributable male cancers. [5]

The other two HPV types covered by the quadrivalent vaccine, HPV 6 and 11, are associated with 90% of genital warts and 100% of cases of juvenile-onset recurrent respiratory papillomatosis (RRP), a severe respiratory condition. [14] Recent studies also suggest that more than 4% of all cancers worldwide may be caused by HPV. [15,16]

On the back of such evidence, the Australian Government has announced the introduction of quadrivalent HPV vaccination for males in the 12-13 year age group, with a catch-up program for males aged 14-15 years at school. [11,12] Early data show that 73% of females in the 12-13 year age group received the full course of three doses (Figure 1). This level of coverage is significantly higher than that achieved in the catch-up programs, where the lowest level is 30% in the 20-26 year age group. Introducing an immunisation program for boys is therefore a significant move towards preventing the many HPV-attributable cancers and genital warts, by accelerating coverage and the level of herd immunity against HPV.

The aim of this article is therefore to examine the global evidence supporting HPV vaccination and to identify any additional benefits routine male vaccination may provide. The article also considers high-risk population groups, the cost effectiveness of widespread HPV vaccination and the long term monitoring goals of the Australian vaccination program.

Methods

The review of the literature was undertaken through a search of Ovid MEDLINE, The Cochrane Library, Google Scholar, BMJ Journals, and JSTOR. The search aimed to find original research articles, reviews, case studies, and opinion pieces that related to HPV vaccinations and the spread of sexually transmitted infections. The terms used in our search ensured we reviewed a broad range of relevant studies. These terms were: ‘human papilloma virus’, ‘males’, ‘quadrivalent’, ‘vaccine’, ‘sexually transmitted disease’, ‘cervical cancer’, ‘penile carcinoma’, ‘herd immunity’, ‘genital warts’, ‘cost effectiveness’ and ‘pap smear’. We also sought to review the ‘grey literature’, and therefore searched a broad range of internet sources, including government websites. These were accessed for up-to-date information on the HPV vaccination program, the cervical screening program, and relevant legislation. The studies were limited to those published in the English language after the year 2000.

Using the methodology described above, 63 articles and documents found during the search were selected for consideration. After individual analysis of all the identified documents, 24 were excluded as not relevant to the Australian program, leaving 39 publications for inclusion in the final review, with preference given to more recent publications and to those with data applicable to the Australian program. Of the included publications, 16 were original research articles, 15 were review articles, 6 were Australian Government reports or legislation, 1 was a professional communication, and 1 was a media release.

Discussion

Evidence for HPV vaccination in men

Worldwide experience with HPV vaccination has revealed no major safety concerns, [5] and recent clinical trial data show that the safety profile in males is the same as in females. [18] The most commonly reported side effects have been mild and include fever, nausea and localised injection site pain. [19] Furthermore, there have been no reported deaths directly attributable to the vaccine. [5,20]

Currently, only the quadrivalent vaccine has demonstrated protective effects in males in clinical trials. [18] Boys vaccinated with the quadrivalent vaccine have the same seroconversion rates as girls, as high as 99%. [21] In addition, the current HPV vaccination program for girls in Australia does not achieve full coverage. [8] Vaccinating males will provide indirect protection, through herd immunity, to those females targeted by the school HPV vaccination program who were not fully vaccinated. [8] This protection is vital because there is good evidence that vaccines which include HPV 16 and 18 prevent persistent HPV infection and precancerous cervical, vulvar and vaginal lesions in females. [14]

Therefore, the inclusion of males in the HPV vaccination program will provide them, and possibly their unvaccinated sexual partners, with protection from HPV. [14] This will also result in higher levels of herd immunity, which refers to the protective effect offered to the unvaccinated and susceptible population by high rates of acquired immunity in the vaccinated population. [22] This phenomenon limits transmission and reduces the reservoirs of disease. One example of herd immunity is the widespread vaccination of males against rubella, even though the virus is of little clinical significance in males. This vaccination program in Australia has led to a significant reduction in the transmission of rubella to susceptible pregnant females and in the subsequent development of congenital rubella syndrome. [6,23]
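As a rough, textbook-level guide (an approximation under standard assumptions, not a figure drawn from the studies cited here), the proportion of a population that must be immune to interrupt sustained transmission depends on the basic reproduction number $R_0$ of the infection:

$$p_c \approx 1 - \frac{1}{R_0}$$

so the more transmissible the infection, the higher the vaccination coverage required before unvaccinated groups gain meaningful indirect protection.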

Male vaccination not only provides direct protection to its recipients; it also further reduces rates of transmission [5] and provides indirect population benefits to both sexes through herd immunity. [24] A seminal retrospective study across Australia compared rates of genital warts before and after the introduction of female vaccination over the 2004-2009 period. The results demonstrated a 59% decrease in genital warts in age-matched females who were eligible for free vaccination and a corresponding decrease of 28% in heterosexual males in the same age bracket who were ineligible for free vaccination. [25] These trends were supported by another Melbourne study, which reported the near disappearance of genital warts in heterosexual females and males under 21 years of age. [26] These studies provide early evidence that the herd immunity conferred by vaccination has reduced the clinical burden of genital warts, the high costs of treatment, [27] and the psychological impact associated with the condition. [28,29]

However, the impact of genital warts in the Australian community can be further reduced. One model of HPV transmission suggests that if vaccination rates for boys reached the same 73% level attained by girls in 2011, then by 2050 the vaccination of boys would have prevented an additional 24% of new HPV infections. [5] Other mathematical models suggest that while vaccination of 12 year old girls alone would reduce the incidence of genital warts by 83% and cervical cancer by 78%, including boys in the vaccination program would reduce the incidence of genital warts by 97% and cervical cancer by 91%. [30]
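Expressed in terms of the residual burden (a simple rearrangement of the modelled percentages quoted above, not an additional model result), girl-only vaccination would leave 17% of the baseline genital wart incidence, whereas adding boys would leave 3%:

$$\frac{0.17 - 0.03}{0.17} \approx 0.82$$

that is, roughly four-fifths of the genital wart burden remaining under girl-only vaccination would be removed by also vaccinating boys, according to these models.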

The vaccination of males would not only help the female population, but would also reduce the disease burden in males. This was demonstrated in a study of 4,065 healthy boys which showed a clear reduction in the development of external genital lesions. [18] One month after the boys received their third and final vaccination, seroconversion for all four HPV types had occurred in 97.4% of boys, with an additional 1.5% of the cohort seroconverting for three of the four types. [18] Vaccination was shown to reduce the incidence of external genital lesions due to infection with HPV types 6, 11, 16 and 18 by 90.4% in the per-protocol population. [18]

Nonetheless, the lack of long term data means there is currently no clinical evidence demonstrating a reduction in HPV-related male cancers after vaccination. However, two of the quadrivalent vaccine types, HPV 16 and 18, are responsible for 90% of all HPV-attributable cancers in men. [5] Therefore, since the quadrivalent vaccine has demonstrated a reduction in high grade cervical lesions in women, [8] there is an expectation that vaccination will have the same effect on cancers in men. [8,31] Worldwide, HPV types 16 and 18 are associated with over 80% of anal cancers, 25% of penile cancers [6,14,15] and 31% of oropharyngeal cancers, [14] so the potential benefit is significant.

In addition, with the reduced rates of smoking, HPV is becoming an increasingly significant cause of oropharyngeal cancer. [32] Most of the oropharyngeal cancers in non-smokers are caused by HPV infections, and the majority of patients are men. [32] Vaccinating women alone is less effective in reducing the rates of infection and both males and females need to be vaccinated for maximal benefit. [22] Male HPV vaccination is expected to lead to a reduction in the oncogenic HPV prevalence in the community and together with female HPV vaccination, it may reduce the incidence of HPV related oropharyngeal cancers in non-smokers. [32]

However, the lack of long term data also means that it is uncertain how long immunity will last before a booster is necessary. Current follow-up studies suggest that the vaccine remains effective up to 8.5 years after vaccination. [8] Further follow-up is necessary to ensure that the vaccine continues to be effective over longer periods of time.

Populations at risk

There is poor uptake of the National Cervical Screening Program among women of Aboriginal and Torres Strait Islander (ATSI) background. [7] Among other factors, this poor uptake is one of the reasons why ATSI women have twice the risk of developing cervical cancer and a mortality rate five times higher than that of the general population. [7]

Including boys in the vaccination program has been modelled to decrease the rates of genital warts and cervical cancer beyond what would be attained by female vaccination alone. [30] However, it has been argued that, with sufficient uptake among girls, most males would eventually be protected through female vaccination alone. [22] This argument has merit if vaccination rates among girls are extremely high, but it assumes transmission occurs only through heterosexual relationships. One of the populations at highest risk of HPV infection is men who have sex with men (MSM). [5] This population gains little benefit from the current HPV vaccination program, [5] and HPV infection would be expected to persist in this population even if all females were immunised. MSM are at 30 times the risk of anal cancer compared with other men. [5] As 90% of anal cancers are associated with HPV, [6] the vaccine has the potential to provide significant benefits for this high-risk population. However, it would be difficult on many levels to target the MSM population for immunisation: a targeted program would need to reach MSM at an early stage of sexual activity, when many may be reluctant to disclose their sexual orientation for fear of stigma. [5] A program of routine male vaccination therefore obviates the need to target this group specifically, by immunising all young boys prior to sexual debut.

Another population which is at higher risk of HPV infection is men and women with impaired immunity such as organ transplant recipients. [6] Although heterosexual males with impaired immunity may have some protection from the HPV vaccination program for girls, [5,30] heterosexual females and MSM with impaired immunity would not receive the same degree of protection. Immunosuppressed populations are more likely to develop persistent infections which progress to dysplasia and cancer. [6] Wide vaccine coverage would ensure high levels of immunity in the community that should lower the risk of HPV transmission to all high risk groups.

Cost effectiveness

The immediate costs of implementing and monitoring the female-only HPV program were reported in 2007 to be AU$103.5 million over five years. [33] The addition of males to the program added AU$21.1 million over four years in 2012. [12] Although the Australian Government has approved the addition of males to the HPV vaccination program, the cost effectiveness of such a move is still debated in Australia and worldwide. [5,14,34,35]

Some studies have reported that the vaccination of males is not cost effective when compared to female vaccination alone. [5,14,34,35] These reports used the common assumption that an incremental cost-effectiveness ratio (ICER) of greater than US$50,000 per quality-adjusted life-year (QALY) is not cost-effective. [5] However, other studies have shown that the equation becomes much more favourable when protection against all HPV-related diseases affecting men and women is included, which drops the ICER to US$25,664 per QALY. [36]
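For clarity, the ICER referred to in these studies is the standard ratio of incremental cost to incremental health gain when male vaccination is added to a female-only program (this is the general definition, not a figure taken from the cited analyses):

$$\text{ICER} = \frac{C_{\text{males and females}} - C_{\text{females only}}}{\text{QALY}_{\text{males and females}} - \text{QALY}_{\text{females only}}}$$

A strategy is judged cost-effective when this ratio falls below the chosen willingness-to-pay threshold, such as the US$50,000 per QALY figure used above.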

Although the Australian Government has not released its analysis of the cost effectiveness of including males in the HPV vaccination program, past experience suggests that an ICER below approximately AU$60,000 per QALY is generally accepted. [5]

The cost models can only provide an estimate of the impact of HPV vaccination, as the full benefits will not be apparent for some time. This is due to the time interval between HPV infection and the development of cancer. [3,36] However, the rates of genital warts, which are more prevalent and develop more quickly, are already decreasing. [25,26]

The cost per person of vaccination may seem high initially, but when the cumulative effects of herd immunity are taken into account the equation becomes more favourable. [24] In addition, the benefits of HPV vaccination are many, and cost effectiveness studies should take into account the psychosocial benefit and the reduction in the clinical burden of disease, as well as the reduced costs of treating the various presentations of HPV-related cancers and genital warts. For example, the treatment of genital warts alone is estimated to cost AU$14 million annually in Australia. [27]

Future research and monitoring

Monitoring the efficacy, safety and impact of HPV vaccination is an important step in measuring the effectiveness of the vaccination program and in guiding future policy. There are challenges in vaccine program monitoring due to the long interval between HPV infection and the development of HPV-related cancers, as well as the asymptomatic and transient nature of infection. [3,37] However, the establishment of the National HPV Vaccine Program Register (NHVPR) is a key step towards collecting vaccine coverage and dose status data for the target population, as well as basic demographic data on recipients across Australia. [33] This information is collected only with prior consent and enables administrators to match data collected from different registers to individuals, allowing them to run follow-up programs that send reminders for missed doses, or for boosters if these are required in the future. These data, combined with the information collected by state-based cervical cytology registers and the Australasian Association of Cancer Registries, provide a powerful tool to quantify the long term impact of the vaccination program on the incidence of cervical and other HPV-related cancers.

Information regarding the safety of the vaccine and any associated adverse effects is collected by the Medicines Safety Monitoring office of the Therapeutic Goods Administration. [20] However, there are currently no nationally funded programs monitoring HPV genotypes in the general population or in the vaccinated group; such monitoring could be used to track HPV prevalence in the future or to screen for HPV-related cancers. [7] The impact of vaccination on targeted groups such as MSM and ATSI Australians should also be monitored to evaluate the effect of the prophylactic vaccine on these high-risk groups.

Summary

The aim of the Australian immunisation program is to use a prophylactic HPV vaccine to establish immunity against the included HPV types before the commencement of sexual activity. Through this program, males and females in the pre-adolescent age group are immunised before their sexual debut, which is usually followed by a peak in the incidence of HPV infection. [38]

Although the use of barrier contraception such as condoms, and male circumcision, may offer some protection, any skin-to-skin contact during sexual activity can result in the transmission of HPV. [3] Currently, HPV vaccination is the only reliable and realistic method of primary prevention of HPV infection. It has proven to be safe, with high efficacy and minimal side effects. [20,21] Vaccination has the potential to significantly reduce the clinical burden of HPV-related disease, the associated high costs of treatment, and the adverse psychological impact of a diagnosis of an HPV-related disease. [28,29]

Male vaccination not only provides benefits to its recipients but also provides indirect benefits to females and the wider population. This will result in accelerated herd immunity and increased protection for susceptible and high-risk groups such as unvaccinated females, MSM, immunocompromised individuals, and members of the ATSI community.

Furthermore, the introduction of HPV vaccination for all young males and females will further Australia’s contribution to the prevention of HPV-associated diseases worldwide and provide invaluable data describing the long term effects of HPV vaccination. For a population-based primary prevention program to be successful, there must be strict and persistent surveillance and monitoring of its implementation. Australia currently has no national program for the surveillance of HPV or genital warts, although it has set up the NHVPR, which monitors population vaccination coverage. In collaboration with the Pap test and cancer registries, the information collected through this register should provide invaluable data on the impact of HPV vaccination in females. This monitoring will be extended in 2014 to include males, providing a robust data set for measuring the impact of HPV vaccination on the incidence of HPV-related cancers in the coming years.

Conflict of interest

None declared.

Acknowledgements

We would like to thank Dr. Richard Mayes and Dr. Catherine Foley for their assistance and support.

Correspondence

M Boulat: mbou13@student.monash.edu
A Hatwal: ahat5@student.monash.edu

References

[1] Gottschling M, Goker M, Stamatakis A, Bininda-Emonds ORP, Nindl I, Bravo IG. Quantifying the phylodynamic forces driving papillomavirus evolution. Molecular Biology & Evolution. 2011 July; 28(7): p. 2101-13.
[2] Trottier H, Burchell AN. Epidemiology of Mucosal Human Papillomavirus Infection and Associated Diseases. Public Health Genomics. 2009 August 11; 12(5): p. 291-307.
[3] Stanley M. Pathology and epidemiology of HPV infection in females. Gynecologic Oncology. 2010 January; 117(2): p. 5-10.
[4] Stevens MP, Garland SM, Tan JH, Quinn MA, Petersen RW, Tabrizi SN. HPV Genotype Prevalence in Women With Abnormal Pap Smears in Melbourne, Australia. Journal of Medical Virology. 2009 July; 81(7): p. 1283–1291.
[5] Georgousakis M, Jayasinghe S, Brotherton J, Gilroy N, Chiu C, Macartney K. Populationwide vaccination against human papillomavirus in adolescent boys: Australia as a case study. The Lancet Infectious Diseases. 2012 August; 12(8): p. 627-34.
[6] Barroso LF, Wilkin T. Human Papillomavirus Vaccination in Males: The State of the Science. Current Infectious Disease Reports. 2011 April; 13(2): p. 175-81.
[7] Australian Institute of Health and Welfare. Cervical screening in Australia 2009-2010. Canberra: Australian Government Department of Health and Ageing; 2012.
[8] Immunise Australia Program. Fact Sheet: National Immunisation Program – HPV Vaccination for Boys. Canberra: Australian Government Department of Health and Ageing; 2012.
[9] Garland SM, Skinner SR, Brotherton JML. Adolescent and young adult HPV vaccination in Australia: Achievements and Challenges. Preventative Medicine. 2011 October; 53(1): p. 29-35.
[10] The National HPV Vaccination Program. Protecting your daughter from cervical cancer. Immunise Australia Program; 2007 March.
[11] Kirby T. Australia to be first country to vaccinate boys against HPV. The Lancet. 2012 August; 13(8): p. 333.
[12] Plibersek T. Minister for Health. [Online]. Canberra; 2012 [cited 2012 Oct 30]. Available from: http://www.health.gov.au/internet/ministers/publishing.nsf/Content/mr-yr12-tptp059.htm
[13] Koutsky L. The Epidemiology behind the HPV Vaccine Discovery. Annals of Epidemiology. 2009 April; 19(4): p. 239-44.
[14] Kim JJ, Goldie SJ. Cost effectiveness analysis of including boys in a human papillomavirus vaccination programme in the United States. British Medical Journal. 2009 October; 339:b3884.
[15] de Martel C, Ferlay J, Franceschi S, Vignat J, Bray F, Forman D, et al. Global burden of cancers attributable to infections in 2008: a review and synthetic analysis. The Lancet Oncology. 2012 June; 13(6): p. 607-615.
[16] Parkin M, Bray F. Chapter 2: The burden of HPV-related cancers. Vaccine. 2006 August; 24(3): p. 11-25.
[17] National HPV Vaccination Program Register. Immunise Australia Program. [Online]; 2011 [cited 2012 October 22]. Available from: http://www.immunise.health.gov.au/internet/immunise/publishing.nsf/Content/immunise-hpv
[18] Giuliano AR, Palefsky JM, Goldstone S, Moreira ED, Penny ME, Aranda C, et al. Efficacy of Quadrivalent HPV Vaccine against HPV Infection and Disease in Males. The New England Journal of Medicine. 2011 February; 364(5): p. 401-11.
[19] Joura EA, Leodolter S, Hernandez-Avila M, Wheeler CM, Perez G, Koutsky LA, et al. Efficacy of a quadrivalent prophylactic human papillomavirus (types 6, 11, 16, and 18) L1 virus-like-particle vaccine against high-grade vulval and vaginal lesions: a combined analysis of three randomised clinical trials. The Lancet. 2007 May; 369(9574): p. 1693-702.
[20] Therapeutic Goods Administration. Gardasil (human papillomavirus vaccine). [Online]; 2010 [cited 2012 October 12]. Available from: http://www.tga.gov.au/safety/alerts-medicine-gardasil-070624.htm
[21] Block SL, Nolan T, Sattler C, Barr E, Giacoletti KE, Marchant CD, et al. Comparison of the immunogenicity and reactogenicity of a prophylactic quadrivalent human papillomavirus (types 6, 11, 16, and 18) L1 virus-like particle vaccine in male and female adolescents and young adult women. Pediatrics. 2006 November; 118(5): p. 2135-45.
[22] Garnett GP. Role of Herd Immunity in Determining the Effect of Vaccines against Sexually Transmitted Disease. The Journal of Infectious Diseases. 2005 February; 191(1): p. 97-106.
[23] Song N, Gao Z, Wood JG, Hueston L, Gilbert GL, MacIntyre CR, et al. Current epidemiology of rubella and congenital rubella syndrome in Australia: Progress towards elimination. Vaccine. 2012 May; 30(27): p. 4073-8.
[24] Clemens J, Shin S, Ali M. New approaches to the assessment of vaccine herd protection in clinical trials. The Lancet Infectious Diseases. 2011 June; 11(6): p. 482-7.
[25] Donovan B, Franklin N, Guy R, Grulich AE, Regan DG, Ali H, et al. Quadrivalent human papillomavirus vaccination and trends in genital warts in Australia: analysis of national sentinel surveillance data. The Lancet Infectious Diseases. 2011 January; 11(1): p. 39-44.
[26] Read TR, Hocking JS, Chen MY, Donovan B, Bradshaw CS, Fairley CK. The near disappearance of genital warts in young women 4 years after commencing a national human papillomavirus (HPV) vaccination programme. Sexually Transmitted Infections. 2011 December; 87(7): p. 544-7.
[27] Pirotta M, Stein AN, Conway EL, Harrison C, Britt H, Garland S. Genital warts incidence and healthcare resource utilisation in Australia. Sexually Transmitted Infections. 2010 June; 86(3): p. 181-6.
[28] Pirotta M, Ung L, Stein A, Conway EL, Mast TC, Fairley CK, et al. The psychosocial burden of human papillomavirus related disease and screening interventions. Sexually Transmitted Infections. 2009 December; 85(7): p. 508-13.
[29] Woodhall S, Ramsey T, Cai C, Crouch S, Jit M, Birks Y, et al. Estimation of the impact of genital warts on health-related quality of life. Sexually Transmitted Infections. 2008 June; 84(3): p. 161-6.
[30] Garland SM. Prevention strategies against human papillomavirus in males. Gynecologic Oncology. 2010 May; 117(2): p. 20-5.
[31] Miralles-Guri C, Bruni L, Cubilla AL, Castellsagué X, Bosch FX, de Sanjosé S. Human papillomavirus prevalence and type distribution in penile carcinoma. Journal of Clinical Pathology. 2009 October; 62(10): p. 870-8.
[32] Sturgis EM, Cinciripini PM. Trends in head and neck cancer incidence in relation to smoking prevalence: an emerging epidemic of human papillomavirus-associated cancers? Cancer. 2007 October; 110(7): p. 1429-35.
[33] Abbott T. National Health Amendment (National HPV Vaccination Program Register) Bill 2007. Canberra: The Parliament of the Commonwealth of Australia, House of Representatives; 2007.
[34] Taira AV, Neukermans CP, Sanders GD. Evaluating human papillomavirus vaccination programs. Emerging Infectious Diseases. 2004 November; 10(11): p. 1915-23.
[35] Jit M, Choi YH, Edmunds WJ. Economic evaluation of human papillomavirus vaccination in the United Kingdom. British Medical Journal. 2008 July; 337:a769.
[36] Elbasha EH, Dasbach EJ. Impact of vaccinating boys and men against HPV in the United States. Vaccine. 2010 October; 28(42): p. 6858-67.
[37] Brotherton JM, Kaldor JM, Garland SM. Monitoring the control of human papillomavirus (HPV) infection and related diseases in Australia: towards a national HPV surveillance strategy. Sexual Health. 2010 September; 7(3): p. 309-10.
[38] Gertig DM, Brotherton JM, Saville M. Measuring human papillomavirus (HPV) vaccination coverage and the role of the National HPV Vaccination Program Register, Australia. Sexual Health. 2011 June; 8(2): p. 171-8.
[39] de Villiers EM, Fauquet C, Broker TR, Bernard HU, Hausena Hz. Classification of papillomaviruses. Virology. 2004 June 20; 324(1): p. 17-27.


Treatment of persistent diabetic macular oedema – intravitreal bevacizumab versus laser photocoagulation: A critical appraisal of BOLT Study for an evidence based medicine clinical practice guideline

Laser photocoagulation has remained the standard treatment for diabetic macular oedema (DME) for the past three decades. However, it has been shown to be of limited benefit in chronic diffuse DME. Intravitreal bevacizumab (ivB) has been proposed as an alternative, effective treatment for DME. This review evaluates the evidence comparing bevacizumab with laser photocoagulation in the treatment of persistent DME. A structured systematic search of the literature, with critical appraisal of the retrieved trials, was performed. Four randomised controlled trials (RCTs) supported beneficial effects of ivB over laser photocoagulation. Only one RCT, the BOLT study, compared laser with ivB in persistent DME. The results of that study showed significant improvement in mean best corrected visual acuity (BCVA) and a greater reduction in mean central macular thickness (CMT) in the ivB group, with no significant difference in safety outcome measures.

Introduction

Diabetic macular oedema is a frequent manifestation of diabetic retinopathy and one of the leading causes of blindness and visual acuity loss worldwide. [1] The presence of DME varies in direct proportion to the duration and stage of diabetic retinopathy, with a prevalence of three percent in mild non-proliferative retinopathy, 38% in moderate-to-severe non-proliferative retinopathy and 71% in proliferative retinopathy. [2]

Diabetic macular oedema (DME) is a consequence of micro-vascular changes in the retina that lead to fluid/plasma constituent accumulation in the intra-retinal layers of the macula, thereby increasing macular thickness. Clinically significant macular oedema (CSME) is present when there is thickening within or close to the central macula, with hard exudates within 500 μm of the centre of the macula and retinal thickening of at least one disc area in size. [3,4] As measured on optical coherence tomography, central macular thickness (CMT) corresponds approximately to retinal thickness at the foveal region and can quantitatively reflect the amount of CSME a patient has. [5] Two different types of DME exist: focal DME (due to fluid accumulation from leaking micro-aneurysms) and diffuse DME (due to capillary incompetence and inner-retinal barrier breakdown).

The pathogenesis of diabetic macular oedema is multi-factorial, influenced by diabetes duration, insulin dependence, HbA1c levels and hypertension. [6] Macular laser photocoagulation has remained the standard treatment for both focal and diffuse DME since 1985, based on the recommendations of the Early Treatment Diabetic Retinopathy Study (ETDRS). This study showed that the risk of CSME decreases by approximately 50% (from 24% to 12%) at three years with the use of macular laser photocoagulation. However, the improvement in visual acuity is modest, observed in less than three percent of patients. [3]

Recent research indicates that macular laser therapy is not always beneficial and has limited results, especially for chronic diffuse DME, [3,7] with visual acuity improving in only 14.5% of patients. [8] Following laser treatment, scars may develop and reduce the likelihood of vision improvement; [3] hence alternative treatments for DME, such as intravitreal triamcinolone (ivT), have been investigated. Intravitreal triamcinolone works via a number of mechanisms, including reducing vascular permeability and down-regulating vascular endothelial growth factor (VEGF). Anti-VEGF therapies have been the focus of recent research, and these modalities have been shown to potently suppress angiogenesis and decrease vascular permeability in ocular diseases such as DME, leading to improvement in visual acuity. [9] The results of treating DME with anti-VEGF agents are controversial and in need of larger prospective RCTs. [10]

Currently used anti-VEGF agents include bevacizumab, ranibizumab and pegaptanib. Ranibizumab has been shown to be superior to laser therapy in treating DME, in both safety and efficacy, in several studies, including the RESTORE, RESOLVE, RISE and RIDE trials. [11-13] It has recently been approved by the Food and Drug Administration (FDA) for treating DME in the United States of America. [14] Bevacizumab (Avastin®) is a full-length monoclonal antibody against VEGF, binding to all subtypes of VEGF. [10] In addition to treating metastatic colon cancer, bevacizumab is also used extensively off-label for many ocular conditions, including age-related macular degeneration (AMD), DME, retinopathy of prematurity and macular oedema secondary to retinal vein occlusion. [15] Documented adverse effects of ivB include transiently elevated intraocular pressure (IOP) and endophthalmitis. [16] Systemic effects associated with ivB injection include a rise in blood pressure, thrombo-embolic events, myocardial infarction (MI), transient ischaemic attack and stroke. [16,17] Other significant adverse events of bevacizumab when given systemically include delayed wound healing, impaired female fertility, gastrointestinal perforations, haemorrhage, proteinuria, congestive heart failure and hypersensitivity reactions. [17] Although not currently approved for this indication, a 1.25-2.5 mg intravitreal dose of ivB is used for treating DME without significant ocular or systemic toxicity. [15]

The DRCR.net study (2007) showed that ivB can reduce DME. [18] In addition, several studies carried out in diabetic retinopathy patients with CSME, evaluating the efficacy of ivB ± ivT versus laser, demonstrated better visual outcomes as measured by BCVA. [6,19-21] A meta-analysis of those studies indicated ivB to be an effective short-term treatment for DME, with efficacy waning after six weeks. [6] This review evaluates the evidence on the effect of ivB, compared with laser, in treating DME that persists despite standard treatment.

Clinical question

Our clinical question for this focused evidence-based medicine article has been constructed to address the four elements of problem, intervention, comparison and outcome, as recommended by Strauss et al. (2005). [22] “In diabetic patients with persistent clinically significant macular oedema (CSME), is intravitreal bevacizumab (Avastin®) injection better than focal/grid laser photocoagulation in preserving best-corrected visual acuity (BCVA)?”

Methodology

Comprehensive electronic searches of the British Medical Journal, the Medical Journal of Australia, the Cochrane Central Register of Controlled Trials, MEDLINE and PubMed were performed for relevant literature, using the search terms diabetic retinopathy, CSME, CMT, bevacizumab and laser photocoagulation. Additional information from the online search engine Google was also incorporated. Reference lists of retrieved studies were then hand-searched for further relevant studies and trials.

Selection

Results were restricted to systematic reviews, meta-analyses and randomised clinical trials (RCTs). Overall, six RCTs were identified that evaluated the efficacy of ivB compared with laser in treating DME. [18-21,23,24] There was also one meta-analysis comparing ivB with non-drug control treatment (laser or sham) in DME. [7] One study was published as pilot results of the main trial, so the final version was selected for consideration to avoid duplication of results. [20,23] One study was excluded because it did not include patients with focal DME. [19] The DRCR study (2007) was excluded because it was not designed to evaluate whether treatment with ivB was beneficial in DME patients. [18] A meta-analysis by Goyal et al. was also excluded because it compared bevacizumab with sham treatment rather than laser therapy. [7]

Thus, three relevant RCTs were narrowed down for analysis (Table 1) in this evidence based medicine review. [20,21,24] However, only the BOLT study (2012) evaluated the above treatment modalities in persistent CSME. The other two RCTs evaluated the treatment efficacies in patients with no prior laser therapies for CSME/diabetic retinopathy. Hence, only the BOLT study (2012) has been critically appraised in this report. The study characteristics of the other relevant RCTs evaluating ivB versus lasers are represented in Table 1, and where possible will be included in the discussion.

Outcomes

The primary outcomes of interest are changes in BCVA and CMT, when treated with ivB or lasers for DME, whilst the secondary outcomes are any associated adverse events. All three studies were prospective RCTs with NHMRC level-II evidence. Table 1 summarises the overall characteristics of the studies.

Critical appraisal

The BOLT Study (2010) is a twelve-month report of a two-year, single-centre, two-arm, randomised, controlled, masked clinical trial from the United Kingdom (UK). As such, it qualifies as NHMRC [25] level-II evidence. It is the only RCT to compare the efficacy of ivB with laser in patients with persistent CSME (both diffuse and focal DME) who had undergone at least one prior laser therapy for CSME. A comparison of the study characteristics of the three chosen RCTs is presented in Table 2.

Major strengths of the BOLT Study compared with the Soheilian et al. and Faghihi et al. studies include its duration and the increased frequency of review of patients in the ivB group. The BOLT Study ran for two years, whereas the other two studies were limited to less than a year (Table 1). Because of its longer duration, the BOLT Study was able to evaluate the safety profile of ivB, unlike the other two studies.

Research has indicated that the effects of ivB may last two to six weeks, [6] and the effects of laser three to six months. [3] In BOLT, the ivB group was assessed every six weeks and re-treated with ivB as required, while the laser group was followed up every four months, ensuring that each treatment's efficacy was maintained and reflected in the results. In contrast, in Soheilian et al., [20] follow-up visits were scheduled every twelve weeks after the first visit, and in Faghihi et al., [21] follow-up was at six and sixteen weeks. There may therefore have been a bias against the efficacy of ivB in those studies, given the insufficient frequency of follow-up and re-treatment. Apart from the follow-up and therapy modalities, the groups were treated equally in BOLT, protecting the analysis against treatment bias.

Weaknesses of the BOLT Study [24] include the limited number of patients: 80 eyes in total, with 42 patients allocated to ivB and 38 to laser therapy. In the ivB group, six patients discontinued the intervention; only 37 patients were included in the analysis at 24 months and five were excluded because data were not available. Similarly, of the 38 patients allocated to the laser group, 13 discontinued the intervention; 28 patients were analysed overall and ten were excluded from the analysis. However, the BOLT Study performed an intention-to-treat analysis, minimising dropout effects. Given this, we consider that the BOLT Study fulfils the criteria for a valid RCT and has significant strengths.

Magnitude and precision of treatment effect from BOLT Study

Best corrected visual acuity outcomes

A significant difference existed between the mean ETDRS BCVA at 24 months in the ivB group (64.4 ± 13.3) and the laser group (54.8 ± 12.6), with p = 0.005 (a p-value < 0.05 indicates a statistically significant difference between the groups under comparison). Furthermore, the study reported that the ivB group gained a median of 9 ETDRS letters, whereas the laser group gained a median of 2.5 letters (p = 0.005). Since there was a significant difference in the duration of CSME between the two groups, the authors performed an analysis adjusting for this variable. They also adjusted for baseline BCVA and for patients who had cataract surgery during the study. The mean BCVA remained significantly higher in the ivB group than in the laser group.
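The study's own analysis was adjusted for baseline covariates, but the size of this difference can be loosely sanity-checked with a simple unadjusted comparison. The sketch below applies Welch's two-sample t-test to the reported means and standard deviations, assuming the 37 and 28 analysed patients mentioned above; it is an illustrative approximation, not the statistical model used in BOLT.

```python
from scipy import stats

# Reported 24-month ETDRS BCVA (mean +/- SD); group sizes of 37 (ivB) and
# 28 (laser) are assumed from the numbers analysed, as noted in the text.
mean_ivb, sd_ivb, n_ivb = 64.4, 13.3, 37
mean_laser, sd_laser, n_laser = 54.8, 12.6, 28

# Welch's t-test from summary statistics (unadjusted, for illustration only)
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=mean_ivb, std1=sd_ivb, nobs1=n_ivb,
    mean2=mean_laser, std2=sd_laser, nobs2=n_laser,
    equal_var=False,
)

print(f"difference in means = {mean_ivb - mean_laser:.1f} letters")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # close to the reported p = 0.005
```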

A marked difference was also shown in the proportion of patients who gained or lost vision between the two treatment groups. Approximately 49% of patients in the ivB group gained ten or more ETDRS letters, compared with seven percent of patients in the laser group (p = 0.01). Similarly, none of the patients in the ivB group lost 15 or more ETDRS letters, whereas only 86% of the laser group avoided a loss of this magnitude (p = 0.002). In addition, the study implied that BCVA and CMT gains can be maintained over six to twelve months with a reduced injection frequency. However, the authors also suggest that increasing the frequency of injections to every four weeks (rather than the six-week interval used in the study) may provide better visual acuity gains, as reported in the RISE and RIDE studies. [13]

Central macular thickness outcomes

The mean change in CMT over the 24-month period was -146 ± 171 μm in the ivB group compared with -118 ± 112 μm in the laser group (p = 0.62), showing no statistically significant difference between ivB and laser in reducing CMT. This differed from the twelve-month report of the same study, which indicated greater improvement in CMT in the ivB group than in the laser group.

Retinopathy

Results of the BOLT Study indicated a trend towards reduced retinopathy severity in the ivB group, while the laser group showed stable grading. However, the Mann-Whitney test indicated no significant difference between the groups (p = 0.13). [24]

For further analysis, we summarised the results of the authors' analysis of the step-wise changes in retinopathy grading into three categories: deteriorating, stable and improving (Table 3). As shown in the table, we calculated p-values for each category using the chi-square test.

We attempted to further quantify the magnitude of the effect of ivB, compared with laser, on retinopathy severity by calculating the number needed to treat (NNT) using the data in Table 3. The results showed an absolute risk reduction of nine percent, with an NNT of 10.9 (95% CI spanning from harm in one of every 21.6 patients treated to benefit in one of every 3.6). Because the confidence interval spans both benefit and harm, this trial does not provide sufficient information to inform clinical decision making regarding changes in retinopathy severity with ivB treatment.
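Table 3 is not reproduced here, so the sketch below uses hypothetical counts of patients whose retinopathy grading improved, chosen only to give an absolute risk reduction of roughly nine percent. It illustrates the chi-square, ARR and NNT arithmetic described above; because the counts are invented, the confidence interval will not match the article's exactly, but it shows why an interval whose risk difference crosses zero runs from a number needed to harm to a number needed to treat.

```python
import math
from scipy.stats import chi2_contingency

# Hypothetical counts (Table 3 is not reproduced here): patients whose
# retinopathy grading improved, out of those analysed in each group.
improved_ivb, n_ivb = 10, 37     # ~27% improving on ivB
improved_laser, n_laser = 5, 28  # ~18% improving on laser

p1, p2 = improved_ivb / n_ivb, improved_laser / n_laser
arr = p1 - p2                    # absolute risk reduction
nnt = 1 / arr                    # number needed to treat

# Wald 95% CI for the risk difference, later inverted onto the NNT scale
se = math.sqrt(p1 * (1 - p1) / n_ivb + p2 * (1 - p2) / n_laser)
lo, hi = arr - 1.96 * se, arr + 1.96 * se

# Chi-square test on the 2x2 table (the article itself used three grading categories)
chi2, p_chi2, dof, _ = chi2_contingency([
    [improved_ivb, n_ivb - improved_ivb],
    [improved_laser, n_laser - improved_laser],
])

print(f"ARR = {arr:.3f}, NNT = {nnt:.1f}, chi-square p = {p_chi2:.2f}")
print(f"95% CI for risk difference: ({lo:.3f}, {hi:.3f})")
if lo < 0 < hi:
    # Interval crosses zero: it spans harm (NNH = 1/|lower|) to benefit (NNT = 1/upper)
    print(f"NNT interval: harm in 1 per {1 / abs(lo):.1f} to benefit in 1 per {1 / hi:.1f}")
```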

Safety outcome measures

As mentioned, one of the strengths of the BOLT Study is its evaluation of the safety profile of ivB over its two-year duration. The study analysed the safety outcomes of macular perfusion and retinal nerve fibre layer (RNFL) thickness in detail. The results indicated no significant difference between the laser and ivB groups in the mean greatest linear diameter of the foveal avascular zone, either from baseline or in the worsening of severity grades. Similarly, no significant differences in median RNFL thickness were reported between the ivB and laser groups.

At 24 months, the number of observed adverse events, ocular and systemic, was low. We calculated odds ratios (Table 4) from the results published in the study. Statistically significantly higher odds of eye pain and irritation during or after intervention (approximately eighteen times greater), of subconjunctival haemorrhage and of red eye (approximately eighteen times greater) were found in the ivB group compared with the laser group. As can be further inferred from the table, no significant differences between the two groups were found in other non-ocular adverse events, ocular serious adverse events or non-ocular serious adverse events, including stroke, MI and other thrombo-embolic events.
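Table 4 is likewise not reproduced here, so the 2×2 counts in the sketch below are hypothetical, chosen only to yield an odds ratio of roughly the eighteen-fold magnitude quoted for eye pain and irritation; the sketch simply shows how an odds ratio and its Woolf 95% confidence interval are derived from event counts.

```python
import math

# Hypothetical 2x2 table for a single adverse event (illustration only):
#                with event   without event
# ivB group          a              b
# laser group        c              d
a, b = 15, 22   # ivB patients with / without the event
c, d = 1, 27    # laser patients with / without the event

odds_ratio = (a * d) / (b * c)

# Woolf (log) method for the 95% confidence interval
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.1f} to {ci_high:.1f})")
# A CI that excludes 1 would be read as a statistically significant difference.
```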

Clinical applicability of results

The BOLT Study participants were recruited from Moorfields Eye Hospital (UK), a setting with demographics and healthcare standards comparable to Australia. Both patient-oriented outcomes (BCVA, changes in retinopathy severity, adverse events) and disease-oriented outcomes (CMT) were considered, making the study both theoretically and practically relevant to clinicians and researchers. Given this, clinical applicability of the results to the Australian population appears reasonable. The personnel involved in the study (outcome assessors) and the imaging technology are also available locally, making the treatment feasible in our setting.

In Australia, the overall prevalence of diabetic retinopathy is 24.5%, [6] and this figure rises every year as the obesity and diabetes epidemic progresses. Bevacizumab is currently approved under the Pharmaceutical Benefits Scheme for metastatic colon cancer.

It is used successfully ‘off-label’ for the treatment of ocular conditions including age-related macular degeneration and diabetic macular oedema. It costs about one-fortieth as much as ranibizumab, another anti-VEGF drug that is currently approved for AMD treatment in Australia and FDA-approved for DME treatment in the United States. [26] Since recent studies indicate that ranibizumab is not superior to bevacizumab in safety or efficacy in preserving visual acuity, [27,28] and since recent NICE guidance also recommends against ranibizumab for diabetic macular oedema because of the high costs involved in administering that drug, [29] bevacizumab should be further considered and evaluated for cost-effectiveness in routine clinical practice.

Given the benefits of ivB (improved BCVA, no significant adverse events and no risk of permanent laser scarring of the retina) and the discussion above, the use of ivB in the treatment of persistent DME appears to be an evidence-based and relatively safe practice.

Conclusion

The BOLT Study assessed the safety and efficacy of ivB in DME persisting despite previous laser therapy. The study had a power of 0.8 to detect BCVA differences between the two groups. In line with many previous studies evaluating the efficacy of ivB, the results indicate significant improvement in mean ETDRS BCVA and no significant differences in severe systemic or ocular adverse events compared with the laser group. This study supports the use of ivB in patients with CSME, with adequate precision. However, the magnitude of the effect on diabetic retinopathy severity, on CMT and on other adverse events needs to be evaluated further through large prospective RCTs.
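The parameters behind the quoted power of 0.8 are not given in this appraisal, but the figure can be loosely cross-checked with a standard two-sample calculation. The sketch below assumes a clinically meaningful difference of 8 ETDRS letters, a between-patient SD of about 13 letters and the randomised group sizes; these are illustrative assumptions, not the BOLT protocol's own power calculation.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs (illustrative): 8-letter difference, SD of ~13 letters
effect_size = 8 / 13          # Cohen's d of roughly 0.62

power = TTestIndPower().solve_power(
    effect_size=effect_size,
    nobs1=42,                 # patients randomised to ivB
    ratio=38 / 42,            # relative size of the laser group
    alpha=0.05,
    alternative='two-sided',
)
print(f"approximate power = {power:.2f}")  # comes out near 0.8 under these assumptions
```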

Conflict of interest

None declared.

Correspondence

pavani.kurra@gmail.com

Categories
Articles Review Articles

Seasonal influenza vaccination in antenatal women: Views of health care workers and barriers in the delivery of the vaccine

Background: Pregnant women are at an increased risk of developing influenza. The National Health and Medical Research Council recommends seasonal influenza vaccination for all pregnant women who will be in their second or third trimester during the influenza season. The aim of this review is to explore the views of health care workers regarding seasonal influenza vaccination in antenatal women and describe the barriers in the delivery of the vaccine. Methods: A literature search was conducted using MEDLINE for the terms: “influenza,” “pregnancy,” “antenatal,” “vaccinations,” “recommendations,” “attitudes,” “knowledge” and “opinions”. The review describes findings of publications concerning the inactivated influenza vaccination only, which has been proven safe and is widely recommended. Results: No studies have addressed the knowledge and attitudes of Australian primary health care providers towards influenza vaccination, despite their essential role in immunisation in Australia. Overseas studies indicate that the factors contributing to low vaccination rates are: 1) a lack of general knowledge of influenza and its prevention amongst health care workers (HCWs); 2) variable opinions and attitudes regarding the vaccine; 3) a lack of awareness of national guidelines; and 4) a lack of discussion of the vaccine by HCWs. Lack of maternal knowledge regarding the safety of the vaccine and the cost burden of the vaccine are significant barriers to the uptake of vaccination. Conclusion: Insufficient attention has been given to the topic of influenza vaccination in pregnancy. Significant efforts are required in Australia to obtain data about the rates of influenza vaccination of pregnant women.

Introduction

Seasonal influenza results in annual epidemics of respiratory disease. Influenza epidemics and pandemics increase hospitalisation rates and mortality, particularly among the elderly and high-risk patients with underlying conditions. [1-3] All pregnant women are at an increased risk of developing influenza due to progressive suppression of Th1-cell-mediated immunity and other physiological changes, with morbidity peaking towards the end of pregnancy. [4-7]

Annual influenza vaccination is the most effective method for preventing influenza virus infection and its complications. [8] Trivalent inactivated influenza vaccine (TIV) has been proven safe and is recommended for persons aged ≥6 months, including those with high-risk conditions such as pregnancy. [8-10] A randomised controlled study in Bangladesh demonstrated that TIV administered in the third trimester of pregnancy resulted in reduced maternal respiratory illness and reduced infant influenza infection. [11,12] Another randomised controlled trial showed that influenza immunisation of pregnant women reduced influenza-like illness by more than 30% in both the mothers and the infants, and reduced laboratory-proven influenza infections in 0- to 6-month-old infants by 63%. [13]

The current Australian immunisation guidelines recommend routine administration of influenza vaccination for all pregnant women who will be in the second or third trimester during the influenza season, including those in the first trimester at the time of vaccination. [4,14,15] The seasonal influenza vaccination has been available free to all pregnant women in Australia since 2010. [4] However, the Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) statement on ‘Pre-pregnancy Counselling and Routine Antenatal Assessment in the Absence of Pregnancy Complications’ does not explicitly mention routine delivery of influenza vaccination to healthy pregnant women. [16] RANZCOG recently published a college statement on swine flu vaccination during pregnancy, advising that pregnant women without complications or a recent travel history must weigh the risk-benefit ratio before deciding whether to take up the H1N1 influenza immunisation. [17] It is therefore evident that there is conflicting advice in Australia about the routine delivery of influenza vaccination to healthy pregnant women. In contrast, a firm recommendation for routine influenza vaccination of pregnant women was established in 2007 by the National Advisory Committee on Immunization (NACI) in Canada, with minimal conflict from the Society of Obstetricians and Gynaecologists of Canada (SOGC). [6] Following the 1957 influenza pandemic, the rate of influenza immunisation increased significantly, with more than 100,000 women receiving the vaccination annually between 1959 and 1965 in the United States. [8] Since 2004, the American Advisory Committee on Immunization Practices (ACIP) has recommended influenza vaccination for all pregnant women, at any stage of gestation. [9] This is supported by the American College of Obstetricians and Gynecologists’ Committee on Obstetric Practice. [18]

A recent literature review performed by Skowronski et al. (2009) found that TIV is warranted to protect women against influenza- related hospitalisation during the second half of normal pregnancy, but evidence is otherwise insufficient to recommend routine TIV as the standard of practice for all healthy women beginning in early pregnancy. [6] Similarly, another review looked at the evidence for the risks of influenza and the risks and benefits of seasonal influenza vaccination in pregnancy and concluded that data on influenza vaccine safety in pregnancy is inadequate. [19] However, based on the available literature, there was no evidence of serious side effects in women or their infants, including no indication of harm from vaccination in the first trimester. [19]

We aim to review the literature published on the delivery and uptake of influenza vaccination during pregnancy and identify the reasons for low adherence to guidelines. The review will increase our understanding of how the use of the influenza vaccination is perceived by health care providers and the pregnant women.

Evidence of health care providers’ attitudes, knowledge and opinions

Several published studies have revealed deficits in the knowledge of health care providers regarding the significance of the vaccine and the national guidelines, suggesting a low rate of vaccine recommendation and uptake by pregnant women. [20] A 2006 research project performed a cross-sectional study of knowledge and attitudes towards influenza vaccination in pregnancy amongst all levels of health care workers (HCWs) at the Department for Health of Women and Children at the University of Milan, Italy. [20] A strength of this study was that it included 740 HCWs: 48.4% working in obstetrics/gynaecology, 17.6% in neonatology and 34% in paediatrics, of whom 282 (38.1%) were physicians, 319 (43.1%) nurses and 139 (18.8%) paramedics (health aides/healthcare assistants). The respondents were given a pilot-tested questionnaire about their perception of the seriousness of influenza, their general knowledge of influenza recommendations and preventive measures, and their personal use of influenza vaccination, to be self-completed in 20 minutes in an isolated room. Descriptive analysis of the 707 (95.6%) HCWs who completed the questionnaire revealed that the majority (83.6%) of HCWs in obstetrics/gynaecology never recommended the influenza vaccination to healthy pregnant women. Esposito et al. (2007) highlighted that only a small number of nurses and paramedics from each speciality regarded influenza as serious, in comparison to the physicians. [20] Another study investigating the practices of midwives found that only 37% believed the influenza vaccine to be effective and 22% believed that the vaccine posed a greater risk than influenza itself. [21] The results from these studies clearly indicate deficiencies in the general knowledge of influenza and its prevention amongst health care staff.

In contrast, a study by Wu et al. (2006) suggested an unusually high rate of vaccination uptake among fellows of the American College of Obstetricians and Gynecologists (ACOG) living and practising in Nashville, Tennessee. [22] The survey focussed on physician knowledge, practices and opinions regarding influenza vaccination of pregnant women. Results revealed that 89% of practitioners responded that they routinely recommend the vaccine to pregnant women, and 73% actually administered the vaccination to pregnant and postpartum women. [21] Sixty-two percent responded that the earliest administration of the vaccine should be the second trimester, while 32% reported that it should be offered in the first trimester. Interestingly, 6% believed that it should not be delivered at all during pregnancy. Despite the national recommendation to administer the vaccination routinely to all pregnant women, [4] more than half of the obstetricians preferred to withhold it until the second trimester, owing to concerns regarding vaccine safety, a possible association with spontaneous abortion and the possibility of disrupted embryogenesis. [22] Despite the high uptake rate reported by the respondents, there are a few major limitations in this study. First, the researchers excluded family physicians and midwives practising obstetrics from their survey, which prevents a true representation of the sample population. Second, the vaccination rates were self-reported by the practitioners and not validated, which increases the likelihood of reporting bias.

It is evident that HCWs attending to pregnant women and children have limited and frequently incorrect beliefs concerning influenza and its prevention. [20,23] A recent study by Tong et al. (2008) demonstrated that only 40% of the health care providers at the three Toronto hospitals studied were aware of the high-risk status of pregnant women, and only 65% were aware of the NACI recommendations. [23] Furthermore, obstetricians were less likely than family physicians to indicate that it was their responsibility to discuss, recommend or provide influenza vaccination. [23] Tong et al. (2008) also demonstrated that high levels of provider knowledge about influenza and maternal vaccination, positive attitudes towards influenza vaccination, increased age, being a family physician and having been vaccinated against influenza were associated with recommending the influenza vaccine to pregnant women. [23] These data are also supported by Wu et al. and Esposito et al.

Silverman et al. (2001) concluded that physicians were more likely to recommend the vaccine if they were aware of the current Centers for Disease Control and Prevention guidelines, gave vaccinations in their offices and had been vaccinated against influenza themselves. [24] Similarly, Lee et al. (2005) showed that midwives who had received the immunisation themselves and firmly believed in its benefits were more likely to offer it to pregnant women. [21] Wallis et al. (2006) conducted a multisite interventional study involving educational sessions with physicians and the use of “Think Flu Vaccine” notes on active obstetric charts, demonstrating a fifteen-fold increase in the rate of influenza vaccination in pregnancy. [25] This study also demonstrated that the increase in uptake was greater in family practices than in obstetric practices, and greater in small practices than in large practices.

Overall, the literature here is derived mostly from American and Canadian studies, as no data are available for Australia. Existing data suggest that there is a significant lack of understanding of influenza vaccine safety, benefits and recommendations amongst HCWs. [20-27] These factors may lead to incorrect assumptions and infrequent vaccine delivery.

Barriers in delivering the influenza vaccinations to pregnant women

Aside from the gaps in health care providers’ understanding of vaccine safety and national guidelines, several other barriers to delivering the influenza vaccine to pregnant women have been identified. A study published in 2009, based on CDC analysis of data from the Pregnancy Risk Assessment and Monitoring System in Georgia and Rhode Island over the period 2004-2007, showed that the most common reasons for not receiving the vaccination were “I don’t normally get the flu vaccination” (69.4%) and “my physician did not mention anything about a flu vaccine during my pregnancy” (44.5%). [28] Lack of maternal knowledge about the benefits of the influenza vaccination has also been demonstrated by Yudin et al. (2009), who conducted a cross-sectional in-hospital survey of 100 postpartum women during the influenza season in downtown Toronto. [29] This study concluded that 90% of women incorrectly believed that pregnant women have the same risk of complications as non-pregnant women, and 80% incorrectly believed that the vaccine may cause birth defects. [29] Another study highlighted that 48% of physicians listed patient refusal as a barrier to administering the vaccine. [22] These results were supported by Wallis et al. (2006), which focused on using simple interventions such as chart reminders to surmount the gaps in women’s knowledge. [25] ‘Missed opportunities’ by obstetricians and family physicians to offer the vaccination have been suggested as a major obstacle in the delivery of the influenza vaccination during pregnancy. [14,23,25,28]

During influenza season, hospitalised pregnant women with respiratory illness had significantly longer lengths of stay and higher odds of delivery complications than hospitalised pregnant women without respiratory illness. [5] In some countries, the cost burden of the vaccine to women is another major barrier that contributes to lower vaccination rates among pregnant women. [22] This is not an issue in Australia, where the vaccination is free for all pregnant women. Provision of free vaccination to all pregnant women is likely to be a significant advantage when considering the cost burden of influenza on the health care sector. However, the cost burden on the patient can also be viewed as a lack of access: Shavell et al. (2012) reported that patients who lacked insurance and transportation were less likely to receive the vaccine. [30]

This is supported by several studies showing that the vaccine is comparatively cost-effective when considering the financial burden of influenza-related morbidity. [31] A 2006 study based on decision-analysis modelling revealed that a vaccination rate of 100% in pregnant women would save approximately 50 dollars per woman, resulting in a net gain of approximately 45 quality-adjusted hours relative to providing supportive care alone in the pregnant population. [32] Beigi et al. (2009) demonstrated that maternal influenza vaccination using either the single- or two-dose strategy is a cost-effective approach when influenza prevalence is 7.5% and influenza-attributable mortality is 1.05%. [32] As the prevalence of influenza and/or the severity of an outbreak increases, the incremental value of vaccination also increases. [32] Moreover, a study in 2006 demonstrated the cost-effectiveness to the health sector of single-dose influenza vaccination for influenza-like illness. [31] Therefore, patient education about the relative cost-effectiveness of the vaccine and adequate reimbursement by the government are required to alleviate this barrier in other nations, though not in Australia, where the vaccination is free for all pregnant women.
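The decision-analysis models cited above rest on expected-cost arithmetic whose inputs are not reproduced in this review. The sketch below shows the basic structure of such a comparison using entirely hypothetical parameter values (attack rate, vaccine effectiveness and costs), so its output illustrates the method rather than the published savings.

```python
# Hypothetical inputs for illustration only (not taken from the cited models)
p_influenza = 0.075          # seasonal attack rate among unvaccinated pregnant women
vaccine_effectiveness = 0.6  # relative reduction in influenza risk after vaccination
cost_vaccine = 20.0          # vaccine plus administration, per woman ($)
cost_illness = 1000.0        # average cost of an influenza episode ($)

# Expected cost per woman under each strategy
cost_no_vaccination = p_influenza * cost_illness
cost_vaccination = cost_vaccine + p_influenza * (1 - vaccine_effectiveness) * cost_illness

net_saving = cost_no_vaccination - cost_vaccination
print(f"expected cost without vaccination: ${cost_no_vaccination:.2f}")
print(f"expected cost with vaccination:    ${cost_vaccination:.2f}")
print(f"net saving per woman vaccinated:   ${net_saving:.2f}")
```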

Lack of vaccine storage facilities in physicians’ offices is an important barrier preventing the recommendation and uptake of the vaccine by pregnant women. [23,33] A recent study monitoring immunisation practices amongst practising obstetricians found that fewer than 30% store the influenza vaccine in their office. [18] One study reported an acceptance rate of 71% among 448 eligible pregnant women who were offered the influenza vaccine at a routine prenatal visit, made possible by the availability of storage facilities at the practice, suggesting that the uptake of vaccination can be increased by simply overcoming logistical and organisational barriers such as vaccine storage, inadequate reimbursement and limited patient education. [34]

Conclusion

From the limited data available, it is clear that there is a variable level of knowledge of influenza and its prevention amongst HCWs, as well as a general lack of awareness of the national guidelines in their countries. However, there is no Australian literature to compare with other nations. There is some debate regarding the trimester in which the vaccine should be administered. There is a further lack of clarity about who is responsible for the discussion and delivery of the vaccine: the general practitioner or the obstetrician. These factors contribute to a lack of discussion of vaccine use and amplify the number of ‘missed opportunities.’

Lack of maternal knowledge about the safety of the vaccine and its benefits is also a barrier that must be overcome by the HCW by facilitating an effective discussion about the vaccine. Since the vaccine is free in Australia, cost should not prevent vaccination. Regular supply and storage of vaccines, especially in remote towns of Australia, is likely to be a logistical challenge.

There is limited Australian literature exploring the uptake of influenza vaccine in pregnancy and the contributing factors such as the knowledge, attitude and opinion of HCWs, maternal knowledge of the vaccine and logistical barriers. A reasonable first step would be to determine the rates of uptake and prevalence of influenza vaccination in antenatal women in Australia.

Conflict of interest

None declared.

Correspondence

S Khosla: surabhi.khosla@my.jcu.edu.au

 

Categories
Review Articles Articles

Spontaneous regression of cancer: A therapeutic role for pyrogenic infections?

Spontaneous regression of cancer is a phenomenon that is not well understood. While the mechanisms are unclear, it has been hypothesised that infections, fever and cancer are linked. Studies have shown that infections and fever may be involved in tumour regression and are associated with improved clinical outcomes. This article will examine the history, evidence and future prospects of pyrogenic infections towards explaining spontaneous regression and how they may be applied to future cancer treatments.

Introduction

Spontaneous regression of cancer is a phenomenon that has been observed since antiquity. [1] It can be defined as a reversal or reduction of tumour growth in instances where treatment has been lacking or ineffectual. [2] Little is known about its mechanism, but two observations in cancer patients are of particular interest: first, infections have been shown to halt tumour progression and, second, the development of fever has been associated with improved prognosis.

Until recently, fever and infections have been regarded as detrimental states that should be minimized or prevented. However, in the era preceding the use of antibiotics and antipyretics, the prior observations were prevalent and were used as the basis of crude yet stunningly effective immunological-based treatments. The promise of translating that success to modern cancer treatment is a tempting one and should be examined further.

History: Spontaneous Regression & Coley’s Toxins

Spontaneous regression of cancers was noted as early as the 13th century. The Italian Peregrine Laziosi was afflicted with painful leg ulcers which later developed into a massive cancerous growth. [3] The growth broke through the skin and became badly infected. Miraculously, the infection induced a complete regression of the tumour and surgery was no longer required. He later became the patron saint of cancer sufferers.

Reports associating infections with tumour regression continued to accumulate. In the 18th century, Trnka and Le Dran reported cases of breast cancer regression which occurred after infection at the tumour site. [4,5] These cases were often accompanied by signs of inflammation, and fever and gangrene were common. [3]

In the 19th century, such observations became the basis of early clinical trials by physicians such as Tanchou and Cruveilhier. Although highly risky, they attempted to replicate the same conditions artificially by applying a septic dressing to the wound or injecting patients with pathogens such as the malaria parasite. [1] The results were often spectacular and, suddenly, this rudimentary form of ‘immunotherapy’ seemed to offer a genuine alternative to surgery.

Until then, the only option for cancer was surgery and outcomes were at times very disappointing. Dr. William Coley (a 19th century New York surgeon) related his anguish after his patient died despite radical surgery to remove a sarcoma of the right hand. [3] Frustrated by the limitations of surgery, he sought an alternative form of treatment and came across the work of the medical pioneers Busch and Fehleisen. They had earlier experimented with erysipelas, injecting or physically applying the causative pathogen, Streptococcus pyogenes, onto the tumour site. [6] This was often followed by a high fever, which correlated with a concomitant decrease in tumour size in a number of patients. [3] Coley realized that using live pathogens was very risky and he eventually modified the approach, using a mixture of killed S. pyogenes and Serratia marcescens. [7] The latter potentiated the effects of S. pyogenes such that a febrile response could be induced safely without an ‘infection’, and this mixture became known as Coley’s toxins. [1]

A retrospective study in 1999 showed that there was no significant difference in cancer death risk between patients treated with Coley’s toxins and those treated with conventional therapies (i.e. chemotherapy, radiotherapy and surgery). [8] Data for the second group were obtained from the Surveillance, Epidemiology, and End Results (SEER) registry in the 1980s. [3] This observation is remarkable given that Coley’s toxins were developed at a fraction of the cost and resources afforded to current conventional therapies.

Researchers also realized that Coley’s toxins have broad applicability and are effective across cancers of mesodermal embryonic origin such as sarcomas, lymphomas and carcinomas. [7] One study comparing the five-year survival rates of patients with inoperable sarcomas or carcinomas found that those treated with Coley’s toxins had a survival rate as high as 70-80%. [9]

Induction of a high-grade fever proved crucial to the success of this method. Patients with inoperable sarcoma who were treated with Coley’s toxins and developed a fever of 38-40 °C had a five-year survival rate three times higher than that of afebrile patients. [10] As cancer pain can be excruciating, pain relief is usually required. Upon administration of Coley’s toxins, an immediate and profound analgesic effect was often observed, allowing the discontinuation of narcotics. [9]

Successes related to ‘infection’-based therapies are not isolated. In the early 20th century, Nobel laureate Dr. Julius Wagner-Jauregg used tertian malaria injections in the treatment of neurosyphilis-induced dementia paralytica. [3] This approach relied on the induction of prolonged and high-grade fevers. Considering the high mortality rate of untreated patients in the pre-penicillin era, he was able to achieve an impressive remission rate of approximately one in two patients. [11]

More recently, Bacillus Calmette-Guérin (BCG) vaccine has been used in the treatment of superficial bladder cancers. [12] BCG consists of live attenuated Mycobacterium bovis and is commonly used in tuberculosis vaccinations. [12,13] Its anti-tumour effects are thought to involve a localized immune response stimulating production of inflammatory cytokines such as tumour necrosis factor α (TNF-α) and interferon γ (IFN-γ). [13] Similar to Coley’s toxins, it uses a bacterial formulation and requires regular localized administration over a prolonged period. BCG is shown to reduce bladder cancer recurrence rates in nearly 70% of cases and recent clinical trials suggest a possible role in colorectal cancer treatment. [14] From these examples, we see that infections or immunizations can have broad and effective therapeutic profiles.

Opportunities Lost: The End of Coley’s Toxins

After the early success of Coley’s toxins, momentum was lost when Coley died in 1936. The emergence of chemotherapy and radiotherapy overshadowed their development, while aseptic techniques gradually gained acceptance. After World War II, large-scale production of antibiotics and antipyretics also allowed better suppression of infections and fevers. [1] Opportunities for further clinical studies using Coley’s toxins were lost when, despite decades of use, they were classified as a new drug by the US Food and Drug Administration (FDA). [15] Tightening of regulations on clinical trials of new drugs after the thalidomide incidents in the 1960s meant that Coley’s toxins were highly unlikely to pass the stringent safety requirements. [3]

With fewer infections, spontaneous regressions became less common. An estimated yearly average of over twenty cases from the 1960s to the 1980s decreased to fewer than ten cases in the 1990s. [16] It gradually came to be believed that the body’s immune system had a negligible role in tumour regression, and focus was placed on chemotherapy and radiotherapy. Despite initial promise, these therapies have not fulfilled their full potential and the treatment of certain cancers remains out of reach.

In a curious turn of events, advances in molecular engineering have now provided us with the tools to transform immunotherapy into a viable alternative. Coley’s toxins have provided the foundations for early immunotherapeutic approaches and may potentially contribute significantly to the success of future immunotherapy.

Immunological Basis of Pyrogenic Infections

The most successful cases treated by Coley’s toxins are attributed to: successful infection of the tumour, induction of a febrile response and daily intra-tumoural injections over a prolonged period.

Successful infection of tumour

Infection of tumour cells results in infiltration of lymphocytes and antigen-presenting cells (APCs) such as macrophages and dendritic cells (DCs). Binding of pathogen-associated molecular patterns (PAMPs) (e.g. lipopolysaccharides) to toll-like receptors (TLRs) on APCs induces activation and antigen presentation. The induction process also leads to the expression of important co-stimulatory molecules such as B7 and interleukin-12 (IL-12) required for optimal activation of B and T cells. [17] In some cases, pathogens such as the zoonotic vesicular stomatitis virus (VSV) have oncolytic properties and selectively lyse tumour cells to release antigens. [18]

Tumour regression or progression depends on the state of the immune system. A model of duality in which the immune system performs either a defensive or a reparative role has been proposed. [1, 3] In the defensive mode, tumour regression occurs and immune cells are produced, activated and mobilized against the tumour. In the reparative mode, tumour progression is favoured and invasiveness is promoted via immunosuppressive cytokines, growth factors, matrix metalloproteinases and angiogenesis factors. [1, 3]

The defensive mode may be activated by external stimuli during infections; this principle can be illustrated by the example of M1/M2 macrophages. M1 macrophages are involved in resistance against infections and tumours and produce pro-inflammatory cytokines such as IL-6, IL-12 and IL-23. [19, 20] M2 macrophages promote tumour progression and produce anti-inflammatory cytokines such as IL-10 and IL-13. [19, 20] M1 and M2 macrophage polarization is dependent on transcription factors such as interferon response factor 5 (IRF5). [21] Inflammatory stimuli such as bacterial lipopolysaccharides induce high levels of IRF5, which commits macrophages to the M1 lineage while also inhibiting the expression of M2 macrophage markers. [21] This two-fold effect may be instrumental in facilitating a defensive mode.

Induction of febrile response

In Matzinger’s ‘danger’ hypothesis, the immune system responds to signals produced during distress known as danger signals, including inflammatory factors released from dying cells. [22] T cells remain anergic unless both danger signals and tumour antigens are provided. [23] A febrile response is advantageous as fever is thought to facilitate inflammatory factor production. Cancer cells are also more vulnerable to heat changes and elevated body temperature during fever may promote cell death and the massive release of tumour antigens. [24]

Besides a physical increase in temperature, fever encompasses profound physiological effects. An example of this is the induction of heat-shock protein (HSP) expression on tumour cells. [16] Studies have shown that Hsp70 expression on carcinoma cells promotes lysis by natural killer T (NKT) cells in vitro, while tumour expression of Hsp90 may play a key role in DC maturation. [25, 26] Interestingly, HSPs also associate with tumour peptides to form immunogenic complexes involved in NK cell activation. [25] This is important since NK cells help overcome subversive strategies by cancer cells to avoid T cell recognition. [27] Down regulation of major histocompatibility complex (MHC) expression on cancer cells results in increased susceptibility to NK cell attacks. [28] These observations show that fever is equally adept at stimulating innate and adaptive responses.

Route and duration of administration

The systemic circulation poses a number of obstacles to the successful delivery of infectious agents to the tumour site. Neutralization by pre-immune immunoglobulin M (IgM) antibodies and complement activation impede pathogens. [18] Infectious agents may bind non-specifically to red blood cells and undergo sequestration by the reticuloendothelial system. [29] In the liver, specialized macrophages called Kupffer cells can also be activated by pathogen-induced TLR binding and cause inflammatory liver damage. [29] An intratumoural route therefore has the advantage of circumventing most of these obstacles, increasing the probability of successful infection. [18]

It is currently unclear if innate or adaptive immunity is predominantly responsible for tumour regression. Coley observed that shrinkage often occurred hours after administration whereas if daily injections were stopped, even for brief periods, the tumour continued to progress. [30] Innate immunity may therefore be important and this is consistent with insights from vaccine development, in which adjuvants enhance vaccine effectiveness by targeting innate immune cells via TLR activation. [1]

Although T cell numbers in tumour infiltrates are substantial, tolerance is pervasive and attempts to target specific antigens have been difficult due to antigenic drift and heterogeneity of the tumour microenvironment. [31] A possible explanation for the disproportionality between T cell numbers and the anti-tumour response is that the predominant adaptive immune responses are humoral rather than cell-mediated. [32] Clinical and animal studies have shown that spontaneous regressions in response to pathogens like malaria and Aspergillus are mainly antibody mediated. [3] Further research will be required to determine if this is the case for most infections.

Both innate and adaptive immunity are probably important at specific stages, with sequential induction holding the key to tumour regression. In acute inflammation, innate immunity is usually activated optimally and this in turn induces efficient adaptive responses. [33] Conversely, chronic inflammation involves a detrimental positive feedback loop that acts reversibly and over-activates innate immune cells. [34] Instability of these immune responses can result in suboptimal anti-tumour responses.

Non-immune considerations and constructing the full picture

Non-immune mechanisms may be partly responsible for tumour regression. Oestrogen is required for tumour progression in certain breast cancers and attempts to block its receptors by tamoxifen have proved successful. [35] It is likely that natural disturbances in hormone production may inhibit cancerous growth and promote regression in hormone dependent malignancies. [36]

Genetic instability has also been mentioned as a possible mechanism. In neuroblastoma patients, telomere shortening and low levels of telomerase have been associated with tumour regression. [37] This may be due to the fact that telomerase activity is required for cell immortality. Other potential considerations may include stress, hypoxia and apoptosis but these are not within the scope of this review. [38]

As non-immune factors tend to relate to specific subsets of cancers, they are unlikely to explain tumour regression as a whole. They may instead serve as secondary mechanisms  which support a primary immunological system. During tumour progression, these non-immune factors may either malfunction or become the target of subversive strategies.

A simplified outline of the possible role of pyrogenic infections in tumour kinetics is illustrated below (Figure 1).

Discussion

The intimate link between infections, fever and spontaneous regression is slowly being recognized. While the incidence of spontaneous regression is steadily decreasing due to circumstances in the modern clinical se

Categories
Review Articles Articles

The therapeutic potentials of cannabis in the treatment of neuropathic pain and issues surrounding its dependence

Cannabis is a promising therapeutic agent, which may be particularly beneficial in providing adequate analgesia to patients with neuropathic pain intractable to typical pharmacotherapy. Cannabinoids are the lipid-soluble compounds that mediate the analgesic effects associated with cannabis by interacting with the endogenous cannabinoid receptors CB1 and CB2, which are distributed along neurons associated with pain transmission. Of the 60 different cannabinoids found in cannabis plants, delta-9-tetrahydrocannabinol (THC) and cannabidiol are the most important with regard to analgesic properties. Whilst cannabinoids are effective in diminishing pain responses, their therapeutic use is limited by psychotropic side effects mediated via CB1, which may lead to cannabis dependence. Cannabinoid ligands also interact with glycine receptors and selectively with CB2 receptors, and act synergistically with opioids and non-steroidal anti-inflammatory drugs (NSAIDs) to attenuate pain signals; this may be of therapeutic potential because of the lack of psychotropic effects produced. Clinical trials of cannabinoids in neuropathic pain have shown efficacy in providing analgesia; however, the small number of participants involved in these trials has greatly limited their significance. Although the medicinal use of cannabis is legal in Canada and some parts of the United States, its use as a therapeutic agent in Australia is not permitted. This paper will review the role cannabinoids play in providing analgesia, the pharmacokinetics associated with various routes of administration and the dependence issues that may arise from its use.

Introduction

Compounds in plants have long been found to be beneficial and now contribute to many of the world’s modern medicines. Delta-9-tetrahydrocannabinol (THC), the main psychoactive cannabinoid derived from cannabis plants, mediates its analgesic effects by acting at both central and peripheral cannabinoid receptors. [1] The analgesic properties of cannabis were first observed by Ernest Dixon in 1899, who discovered that dogs failed to react to pin pricks following the inhalation of cannabis smoke. [2] Since that time, there has been extensive research into the analgesic properties of cannabis, including whole-plant and synthetic cannabinoid studies. [3-5]

Although the use of medicinal cannabis is legal in Canada and parts of the United States, every Australian jurisdiction currently prohibits its use.[6] Despite this, Australians lead the world in the illegal use of cannabis for both medicinal and recreational reasons. [7]

Although the analgesic properties of cannabis could be beneficial in treating neuropathic pain, the use of cannabis in Australia is a controversial, widely debated subject. The issue of dependence to cannabis arising from medicinal cannabis use is of concern to both medical and legal authorities. This review aims to discuss the pharmacology of cannabinoids as it relates to analgesia, and also the dependence issues that may arise from the use of cannabis.

Medicinal cannabis can be of particular benefit in the treatment of neuropathic pain that is intractable to the typical agents used, such as tricyclic antidepressants, anticonvulsants and opioids. [3,8] Neuropathic pain is caused by disease affecting the somatosensory nervous system and produces pain that is unrelated to peripheral tissue injury. Treatment options are limited. The prevalence of chronic pain in Australia has been estimated at 20% of the population, [9] with neuropathic pain estimated to affect up to 7% of the population. [10]

The role of cannabinoids in analgesia

Active compounds found in cannabis

Cannabis contains over 60 cannabinoids, with THC being the quintessential mediator of analgesia and the only psychoactive constituent found in cannabis plants. [11] Another cannabinoid, cannabidiol, also has analgesic properties; however, instead of interacting with cannabinoid receptors, its analgesic properties are attributed to inhibition of anandamide degradation. [11] Anandamide is the most abundant endogenous cannabinoid in the CNS and acts as an agonist at cannabinoid receptors. Inhibiting the breakdown of anandamide prolongs its time in the synapse and perpetuates its analgesic effects.

Cannabinoid and Vanilloid receptors

Distributed throughout the nociceptive pathway, cannabinoid receptors are a potential target for the administration of exogenous cannabinoids to suppress pain. Two known types of cannabinoid receptors, CB1 and CB2, are involved in pain transmission. [12] The CB1 cannabinoid receptor is highly expressed in the CNS as well as in peripheral tissues, and is responsible for the psychotropic effects produced by cannabis. There is debate regarding the location of the CB2 cannabinoid receptor, previously found to be largely distributed in peripheral immune cells. [12-13] Recent studies, however, suggest that CB2 receptors may also be found on neurons. [12-13] The CB2 metabotropic G-protein coupled receptors are negatively coupled to adenylate cyclase and positively coupled to mitogen-activated protein kinase. [14] The cannabinoid receptors are also coupled to pre-synaptic voltage-gated calcium channel inhibition and inward-rectifying potassium channel activation, thus depressing neuronal excitability, eliciting an inhibitory effect on neurotransmitter release and subsequently decreasing pain transmission. [14]

Certain cannabinoids have targets other than cannabinoid receptors through which they mediate their analgesic properties. Cannabidiol can act at vanilloid receptors, where capsaicin is active, to produce analgesia. [15] Recent studies have found that administered cannabinoids in mice have a synergistic effect on the response to glycine, an inhibitory neurotransmitter, which may contribute to their analgesic effects. Analgesia was absent in mice that lacked glycine receptors, but not in those lacking cannabinoid receptors, indicating an important role of glycine in the analgesic effect of cannabis. [16] Throughout this study, modifications were made to the compound to enhance binding to glycine receptors and diminish binding to cannabinoid receptors, which may be of therapeutic potential for achieving analgesia without psychotropic side effects. [16]

Mechanism of action in producing analgesia and side effects

Cannabinoid receptors also play an important role in the descending inhibitory pathways via the midbrain periaqueductal grey (PAG) and the rostral ventromedial medulla (RVM). [17] Pain signals are conveyed by primary afferent nociceptive fibres to the brain via ascending pain pathways that synapse in the dorsal horn of the spinal cord. The descending inhibitory pathway, acting through the PAG and RVM, modulates pain transmission in the spinal cord and medullary dorsal horn before noxious stimuli reach a supraspinal level and are interpreted as pain. [17] Cannabinoids activate the descending inhibitory pathway via gamma-aminobutyric acid (GABA)-mediated disinhibition: by decreasing GABAergic inhibition they enhance the impulses responsible for the inhibition of pain, in a manner similar to opioid-mediated analgesia. [17]

Cannabinoid receptors, in particular CB1, are distributed throughout the cortex, hippocampus, amygdala, basal ganglia outflow tracts and cerebellum, which corresponds to the capacity of cannabis to produce motor and cognitive impairment. [18] These deleterious side effects limit the therapeutic use of cannabinoids as analgesics. Since ligands binding to CB1 receptors are responsible for mediating the psychotropic effects of cannabis, studies have examined the effectiveness of CB2 agonists, which were found to attenuate neuropathic pain without producing CB1-mediated CNS side effects. The discovery of a suitable CB2 agonist may therefore be of therapeutic potential. [19]

Synergism with commonly used analgesics

Cannabinoids also act synergistically with non-steroidal anti-inflammatory drugs (NSAIDs) and opioids to produce analgesia; cannabis could thus be of benefit as an adjuvant to typical analgesics. [20] A major central target of NSAIDs and opioids is the descending inhibitory pathway. [20] The analgesia produced by NSAIDs through their action on the descending inhibitory pathway requires simultaneous activation of the CB1 cannabinoid receptor. In the presence of an opioid antagonist, cannabinoids remain effective analgesics: whilst cannabinoids do not act via opioid receptors, cannabinoids and opioids show synergistic activity. [20] Consistent with this, Telleria-Diaz et al. reported that the analgesic effects of non-opioid analgesics, primarily indomethacin, in the spinal cord can be prevented by a CB1 receptor antagonist, highlighting synergism between the two agents. [21] Although no controlled studies in pain management have used cannabinoids with opioids, anecdotal evidence suggests synergistic benefits in analgesia, particularly in patients with neuropathic pain. [20] Whilst the interaction between opioids, NSAIDs and cannabinoids is poorly understood, numerous studies do suggest that they act in a synergistic manner in the PAG and RVM via GABA-mediated disinhibition to enhance the descending flow of impulses that inhibit pain transmission. [20]

Route of Administration

Clinical trials of cannabis as an analgesic in neuropathic pain have shown cannabis to reduce the intensity of pain. [5,22] The most common route of administration of medicinal cannabis is inhalation via smoking. Two randomised clinical trials assessing smoked cannabis showed that patients with HIV-associated neuropathic pain achieved significantly greater reductions in pain intensity (34% and 46%) compared to placebo (17% and 18%, respectively). [5,22] One of the studies enrolled participants whose pain was intractable to first-line analgesics used in neuropathic pain, such as tricyclic antidepressants and anticonvulsants. [22] The number needed to treat (NNT = 3.5) was comparable to agents already in use (gabapentin: NNT = 3.8; lamotrigine: NNT = 5.4). [22] All of the studies undertaken on smoked cannabis have been short-term studies and do not address the long-term risks of cannabis smoking. An important benefit associated with smoking cannabis is that its pharmacokinetic profile is superior to that of orally ingested cannabinoids. [23] After smoking one cannabis cigarette, peak plasma levels of THC are reached within 3-10 minutes and, owing to its lipid solubility, levels quickly decrease as THC is rapidly distributed throughout the tissues. [23] The bioavailability of inhaled THC is much higher than that of oral preparations, which undergo first-pass metabolism; however, the obvious harmful effects associated with smoking have warranted the study of other means of inhalation, such as vapourisation. In medicinal cannabis therapy, vapourisation may be less harmful than smoking as the cannabis is heated below the point of combustion at which carcinogens are formed. [24] A recent study found that the transition from smoking to vapourising in cannabis smokers improved lung function measurements and, following the study, participants refused to participate in a reverse design in which they would return to smoking. [24]
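
As a worked illustration of how an NNT relates to the response rates quoted above (using the 46% versus 18% figures; this is an approximation for teaching purposes, not a recalculation of the published trial data):

\[ \text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.46 - 0.18} \approx 3.6 \]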

Studies undertaken on the efficacy of an oro-mucosal cannabinoid preparation (Sativex) showed a 30% reduction in pain compared to placebo; the NNT was 8.6. [4] Studies comparing an oral cannabinoid preparation (nabilone) to dihydrocodeine in neuropathic pain found that dihydrocodeine was the more effective analgesic. [25] The effects of THC from ingested cannabinoids lasted for 4-12 hours, with a peak plasma concentration at 2-3 hours. [26] The effects of oral cannabinoids were variable due to first-pass metabolism, in which significant amounts of cannabinoids are metabolised by cytochrome P450 mixed-function oxidases, mainly CYP 2C9. [26] First-pass metabolism is very high and the bioavailability of THC is only 6% for ingested cannabis, as opposed to 20% for inhaled cannabis. [26] The elimination of cannabinoids occurs via the faeces (65%) and urine (25%), with a clinical study showing that after five days 90% of the total dose was excreted. [26]
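
To put these bioavailability figures in perspective (a rough, illustrative calculation that ignores differences in absorption kinetics and metabolite activity), matching the systemic THC exposure of an inhaled dose by the oral route would require roughly a three-fold larger dose:

\[ \frac{F_{\text{inhaled}}}{F_{\text{oral}}} = \frac{0.20}{0.06} \approx 3.3 \]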

The issue of cannabis dependence

One of the barriers to the use of medicinal cannabis is the controversy regarding cannabis dependence and the adverse effects associated with chronic use. Cannabis dependence is a highly controversial but important topic, as dependence may increase the risk of adverse effects associated with chronic use. [27] Adverse effects resulting from long-term use of cannabis include short-term memory impairment, mental health problems and, if smoked, respiratory disease. [28] Some authors report that cannabis dependence and the subsequent adverse effects upon cessation are only observed in non-medical cannabis users, while other authors report that dependence is an issue for all cannabis users, whether use is for medicinal purposes or not. An Australian study assessing cannabis use and dependence found that one in 50 Australians had a DSM-IV cannabis use disorder, predominantly cannabis dependence. [27] It also found that cannabis dependence was the third most common lifetime substance dependence diagnosis, following tobacco and alcohol dependence. [27] Cannabis dependence can develop; however, the evidence on risk factors for dependence comes predominantly from studies of recreational users, as opposed to medicinal users under medical supervision. [29]

A diagnosis of cannabis dependence, according to DSM-IV, is made when at least three of the following seven criteria are met within a 12-month period: tolerance; withdrawal symptoms; cannabis used in larger amounts or for a longer period than intended; persistent desire or unsuccessful efforts to reduce or cease use; a disproportionate amount of time spent obtaining, using and recovering from use; social, recreational or occupational activities reduced or given up due to cannabis use; and use continued despite knowledge of physical or psychological problems induced by cannabis. [29] Unfortunately, understanding of cannabis dependence arising from medicinal use is limited by the lack of studies of dependence in this context. Behavioural therapies may be of use; however, their efficacy is variable. [30] A recent clinical trial indicated that orally administered THC was effective in alleviating cannabis withdrawal symptoms, analogous to other well-established agonist therapies such as nicotine replacement and methadone. [30]

The pharmacokinetic profile of the preparation also affects the risk of dependence. Studies suggest that the risk of dependence is marginally greater with the oral use of isolated THC than with the oral use of combined THC-cannabidiol. [31] This is important because hundreds of cannabinoids can be found in whole cannabis plants, and cannabidiol may counteract some of the adverse effects of THC; however, more studies are required to support this claim. [31]

The risk of cannabis dependence in the context of long-term, supervised medical use is not known. [31] However, some authors believe that the pharmacokinetic profiles of preparations used for medicinal purposes differ from those used recreationally, and that the risks of dependence and chronic adverse effects therefore differ greatly between the two. [32]

Conclusion

Cannabis appears to be an effective analgesic and provides an alternative to the analgesic pharmacotherapies currently in use for the treatment of neuropathic pain. Cannabis may be of particular use in neuropathic pain that is intractable to other pharmacotherapy. The issue of dependence and of adverse effects arising from medicinal cannabis use, including short-term memory impairment, mental health problems and, if smoked, respiratory disease, remains highly debated, and more research needs to be undertaken. The ability of cannabinoids to modulate pain transmission by enhancing the activity of descending inhibitory pathways, and to act synergistically with opioids and NSAIDs, is important as it may decrease the therapeutic doses of opioids and NSAIDs required, thus decreasing the likelihood of side effects. The possibility of a cannabinoid-derived compound with analgesic properties free of psychotropic effects is appealing, and its discovery could lead to a less controversial and more suitable analgesic in the future.

Conflict of interest

None declared.

Correspondence

S Sargent: stephaniesargent@mail.com


A biological explanation for depression: The role of interleukin-6 in the aetiology and pathogenesis of depression and its clinical implications

Depression is one of the most common health problems addressed by general practitioners in Australia. It is well known that biological, psychosocial and environmental factors play a role in the aetiology of depression. Research into the possible biological mechanisms of depression has identified interleukin-6 (IL-6) as a potential biological correlate of depressive behaviour, with proposed contributions to the aetiology and pathogenesis of depression. Interleukin-6 is a key proinflammatory cytokine involved in the acute phase of the immune response and a potent activator of the hypothalamic-pituitary-adrenal axis. Patients with depression have higher than average concentrations of IL-6 compared to non-depressed controls, and a dose-response correlation may exist between circulating IL-6 concentration and the degree of depressive symptoms. Based on these insights, the ‘cytokine theory of depression’ proposes that proinflammatory cytokines, such as IL-6, act as neuromodulators and may mediate some of the behavioural and neurochemical features of depression. Longitudinal and case-control studies across a wide variety of patient cohorts, disease states and clinical settings provide evidence for a bidirectional relationship between IL-6 and depression. Thus IL-6 represents a potential biological intermediary and therapeutic target for the treatment of depression. Recognition of the strong biological contribution to the aetiology and pathogenesis of depression may help doctors to identify individuals at risk and implement appropriate measures, which could improve patients' quality of life and reduce disease burden.

Introduction

Our understanding of the immune system has grown exponentially within the last century, and more questions are raised with each new development. Over the past few decades, research has emerged to suggest that the immune system may be responsible for more than just fighting everyday pathogens. The term ‘psychoneuroimmunology’ was first coined by Dr Robert Ader and his colleagues in 1975 as a conceptual framework to encompass the emerging interactions between the immune system, the nervous system and psychological functioning. Cytokines have since been found to be important mediators of this relationship. [1] There is considerable research supporting the hypothesis that proinflammatory cytokines, in particular interleukin-6 (IL-6), play a key role in the aetiology and pathophysiology of depression. [1-5] While both positive and negative results have been reported in individual studies, a recent meta-analysis supports the association between depression and circulating IL-6 concentration. [6] This review will explore the impact of depression in Australia, the role of IL-6, its proposed links to depression, and the clinical implications of these findings.

Depression in Australia and its diagnosis

Depression belongs to a group of affective disorders and is one of the most prevalent mental illnesses in Australia. [7] It contributes one of the highest disease burdens in Australia, closely following cancers and cardiovascular diseases. [7] Most of the burden of mental illness, measured as disability-adjusted life years (DALYs), is due to years of life lost through disability (YLD) as opposed to years of life lost to death (YLL). This makes mental disorders the leading contributor (23%) to the non-fatal burden of disease in Australia. [7] Specific populations, including patients with chronic diseases such as diabetes, cancer, cardiovascular disease and end-stage kidney disease, [1,3,4,10] are particularly vulnerable to this form of mental illness. [8,9] The accurate diagnosis of depression in these patients can be difficult due to the overlap between symptoms inherent to the disease or its treatment and the diagnostic criteria for major depression. [10-12] Nevertheless, accurate diagnosis and treatment of depression is essential and can result in real gains in quality of life for patients with otherwise incurable and progressive disease. [7] Recognising the high prevalence and potential biological underpinnings of depression in patients with chronic disease is an important step in deciding upon appropriate diagnosis and treatment strategies.
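
For reference, the standard burden-of-disease decomposition underlying these figures expresses DALYs as the sum of the two components just described:

\[ \text{DALY} = \text{YLL} + \text{YLD} \]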

Role of IL-6 in the body

Cytokines are intercellular signalling polypeptides produced by activated cells of the immune system. Their main function is to coordinate immune responses; however, they also play a key role in providing information regarding immune activity to the brain and neuroendocrine system. [13] Interleukin-6 is a proinflammatory cytokine primarily secreted by macrophages in response to pathogens. [14] Along with interleukin-1 (IL-1) and tumour necrosis factor-alpha (TNF-α), IL-6 plays a major role in fever induction and initiation of the acute-phase response. [14]

Where to from here for Australian childhood obesity?

Aim: At least one in twenty Australian school children is obese. [1] The causes and consequences of childhood obesity are well documented. This article examines the current literature on obesity management in school-aged Australian children. Methods: A systematic review was undertaken to examine the efficacy of weight management strategies for obese Australian school-aged children. Search strategies were implemented in the Medline and PubMed databases. The inclusion criteria required original data of Australian origin, school-aged children (4 to 18 years), BMI-defined populations and publication within the period January 2005 to July 2011. Reviews, editorials and publications with an inappropriate focus were excluded. Thirteen publications were analysed. Results: Nine of the thirteen papers reviewed focused on general practitioner (GP)-mediated interventions, with the remainder utilising community, school or tertiary hospital management. Limitations identified in GP-led interventions included difficulties recognising obese children, difficulties discussing obesity with families, poor financial reward, time constraints and a lack of proven management strategies. A school-based program was investigated, but was found to be ineffective in reducing obesity. Successful community-based strategies focused on parent-centred dietary modifications or exercise alterations in children. Conclusion: Obesity-specific management programs for children are scarce in Australia. As obesity remains a significant problem in Australia, this topic warrants further focus and investigation.

Introduction

In many countries the level of childhood obesity is rising. [2] Whilst the popular press has portrayed Australia as being in a similar situation, research has failed to identify significant increases in the level of childhood obesity since 1997 and, in fact, recent data suggest a small decrease. [2,3] Nonetheless, an estimated four to nine percent of school-aged children are obese. [1,4] Consequently, the Australian government has pledged to reduce the prevalence of childhood obesity. [5]

In this review, articles defined Body Mass Index (BMI) as weight (in kilograms) divided by the square of height (in metres). [1] BMI was then compared to age- and gender-specific international set points. [6] Obesity was defined as a BMI at or above the 95th percentile for children of the same age and gender. [6] The subjects of this review, Australian school-aged children, were defined as those aged 4 to 18 years in order to include most children from preschool to the completion of secondary school throughout Australia. As evidence suggests that obese children have significantly worse outcomes than overweight children, this review focused on obese individuals rather than the combined overweight and obese group. [1]
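
A minimal sketch of this classification step (the 95th-percentile cutoff value and the example measurements below are hypothetical; the reviewed studies used published age- and gender-specific reference tables):

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body Mass Index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    def is_obese(weight_kg: float, height_m: float, p95_cutoff: float) -> bool:
        """Obese if BMI is at or above the age- and gender-specific
        95th-percentile cutoff supplied from a reference table."""
        return bmi(weight_kg, height_m) >= p95_cutoff

    # Hypothetical example: 45 kg, 1.40 m, against an assumed
    # 95th-percentile cutoff of 21.4 kg/m^2 for that age and gender.
    print(round(bmi(45, 1.40), 1))               # 23.0
    print(is_obese(45, 1.40, p95_cutoff=21.4))   # True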

The aim of this article was to examine the recent Australian literature on childhood obesity management strategies.

Background

Causes of obesity

A myriad of causes of childhood obesity are well established in the literature. Family and culture influence a child's eating habits, their level of physical activity and, ultimately, their weight status. [4,7,8] Parental attributes such as maternal obesity and dismissive or disengaged fathers also play a role. [9] Notably, maternal depression and inappropriate parenting styles have little effect on obesity. [10] Children from lower socio-economic status (SES) backgrounds are at greater risk of being obese. [9,11-13]

Culture and genetic inheritance also influence a child’s chance of being obese. [8] Evidence suggests that culture influences an individual’s beliefs regarding body size, food and exercise. [7,14] O’Dea (2008) found that Australian children of European and Asian descent had higher rates of obesity when compared with those of Pacific Islander or Middle Eastern heritage. [8] Interestingly, there is conflicting evidence as to whether being an Indigenous Australian is an independent risk factor for childhood obesity. [7,9]

A child's nutritional knowledge has little impact on their weight. Several authors have shown that while obese and non-obese children have different eating styles, they possess a similar level of knowledge about food. [4,13] Children with a higher BMI had lower quality breakfasts and were more likely to omit meals in comparison to normal weight children. [4,7,13]

The environment in which a child lives may affect their weight status; however, the existing literature suggests that the built environment has little influence over dietary intake, physical activity and, ultimately, weight status. [15,16] Research in this area remains limited.

Consequences of obesity

Obesity significantly impacts a child's health, resulting in poorer physical and social outcomes. [4,17] Obese children are at greater risk of becoming obese in adulthood. [4,18] Venn et al. (2008) estimated that obese children are at a four- to nine-fold increased risk of becoming obese adults. [18] Furthermore, obese children have an increased risk of acquiring type 2 diabetes, sleep apnoea, fatty liver disease, arthritis and cardiovascular disease. [4,19]

An individual's social health is detrimentally affected by childhood obesity. Obese children have significantly lower self-worth, poorer body image and a lower perceived level of social acceptance amongst their peers. [7,20,21] Indeed, overall social functioning is reduced in obese children. [17] Interestingly, some studies identify no difference in rates of mental illness or emotional functioning between obese and non-obese children. [12,17,22,23]

Method

Using Medline and PubMed, searches were undertaken with the following MeSH terms: child, obesity and Australia. Review and editorial publication types were excluded, as only original data were sought for analysis. Further limits to the search included literature available in English, a focus on school-aged children (4 to 18 years), articles that defined obesity in their population using BMI, publications that addressed the research question (management of childhood obesity), and recent literature. Recent literature was defined as articles published from 1 January 2005 until 31 July 2011. This restriction was placed in part due to resource constraints, but January 2005 was specifically chosen as it marked the introduction of several Australian government strategies to reduce childhood obesity. [5]

In total, 280 publications were identified in the PubMed and Medline searches. The abstracts of these articles were manually assessed by the investigator for relevance to the research question and the described inclusion and exclusion criteria. As a result of inappropriate topic focus, population, publication type, publication date or duplication, 265 articles were excluded. Ten articles were identified as pertinent via PubMed. The Medline searches revealed five relevant articles, all of which were duplicates of articles in the PubMed search. Hence, ten publications were examined. Additionally, a search of the relevant publications' reference lists identified three further articles for analysis. Subsequently, this paper reviews thirteen articles.

Publications included in this study were either randomised controlled trials or cross-sectional analyses. The papers collected data from a variety of sources, including children, parents, clinicians and simulated patients. Consequently, population sizes varied greatly throughout the literature.

Results

Much of the Australian literature on childhood weight management does not specifically focus on the obese; instead, it combines the outcomes of obese and overweight children, sometimes including normal weight children.

Thirteen intervention articles were identified in the literature, nine of which employed GP-mediated interventions, with the remainder using community-based, school-based or tertiary hospital-mediated obesity management.

General practitioner intervention

The National Health and Medical Research Council (NHMRC) guidelines recommend biannual anthropometric screening for children; however, many studies illustrate that few GPs regularly weigh and measure children. [24,25] Whilst Dettori et al. (2009) reported that 79% of GPs interviewed measure children's weight and height, only half of their respondents regularly converted these figures to determine whether a child was obese. [26] A possible reason for the low rates of BMI calculation may be that many GPs find it difficult to initiate discussions about weight status in children. [24-27] A number of authors have identified that some GPs fear losing business or alienating or offending their clients. [24,25,27]

There was wide variability in the tools GPs used to screen children, which may ultimately have led to incorrect weight classifications. [24] Spurrier et al. (2006) investigated this further, identifying that GPs may use visual cues to identify normal weight children; however, using visual cues alone, GPs are not always able to distinguish an obese child from an overweight child, or an overweight child from a normal weight child. [28] Hence, GPs may fail to identify obese children if appropriate anthropometric testing is not performed.

There is mixed evidence regarding the willingness of GPs to manage obese children. McMeniman et al. (2007) identified that GPs felt there was a lack of clear management guidelines, with the majority of participants feeling they would not be able to successfully treat an obese child. [27] Some studies identified that GPs see their role as gatekeepers for allied health intervention. [24,25] Another study showed that GPs preferred shared care, providing the primary support for obese children, which involved offering advice on nutrition, weight and exercise, whilst also referring on to other health professionals such as nutritionists, dieticians and physicians. [11]

Other factors impeding GP-managed programs are time and financial constraints. The treatment of childhood obesity in general practice is time consuming. [11,26,27] Similarly, McMeniman et al. [27] highlighted that the majority of respondents (75%) felt there was not adequate financial incentive to identify and manage obese children.

Evidence suggests that providing education to GPs on identifying and managing obesity could be useful in building their confidence. [26] One publication found that over half of the GPs receiving education were better able to identify obese children. [26] Similarly, Gerner et al. (2010) illustrated, using simulated patients, that GPs felt they had improved their competence in the management of obese children. [29] In the Live, Eat and Play (LEAP) trial, patient outcomes at nine months were compared to GPs' self-rated competence, simulated patient ratings and parent ratings of consultations. [29] Interestingly, simulated patient ratings were shown to be a good predictor of real patient outcomes, with higher simulated patient scores correlating with a larger drop in a child's BMI. [29]

Unfortunately, no trials have demonstrated an effective GP-led childhood obesity management strategy. The LEAP trial, a twelve-week GP-mediated intervention focused on nutrition, physical exercise and the reduction of sedentary behaviour, failed to show any significant decrease in BMI in the intervention group compared with the control. [30] Notably, the LEAP trial did not separate the data of obese and non-obese children. [30]

Further analysis of the LEAP trial illustrated that the program was expensive, with the cost to an intervention family being $4094 greater than that to a control family. [31] This is a significant burden on families, with an additional fiscal burden of $873 per family falling on the health sector. [31] Whilst these amounts are likely to be elevated due to the small number of children, program delivery is costly for both families and the health care sector. [31]

Community-based programs

Literature describing community-based obesity reduction was sparse. Two publications were identified, both of which pertained to the HICKUP trial. These articles illustrated that parent-centred dietary programs and child-focused exercise approaches can be efficacious in weight reduction in a population of children that includes the obese. [32,33] In this randomised controlled trial, children were divided into three groups: i) a parent-focused dietary program, ii) child-centred exercise, and iii) a combination of the two. [32,33] The dietary program focused on improving parenting skills to produce behavioural change in children, whilst the physical activity program involved improving children's fundamental movement skills and competence. [32,33] A significant limitation of the study was that children were recruited through responses to advertising in school newsletters and GP practices, rendering the investigation susceptible to volunteer bias. Additionally, the outcome data in these studies did not delineate obese children from overweight or normal weight children.

School-based programs

Evidence suggests that an education and exercise-based program can be implemented within a school system. [34] The Peralta et al. (2009) intervention involved a small sample of twelve- to thirteen-year-old boys who were either normal weight, overweight or obese, and who were randomised to a control or intervention group. [34] The program's curriculum focused on education as well as increasing physical activity. Education sessions were based on dietary awareness and goal setting.

The future of personalised cancer therapy, today

With the human genome sequenced a decade ago and the concurrent development of genomics, pharmacogenetics and proteomics, the field of personalised cancer treatment appears to be a maturing reality. It is recognised that the days of ‘one-size-fits-all’ and ‘trial and error’ cancer treatment are numbered, and such conventional approaches will be refined. The rationale behind personalised treatment is to target the genomic aberrations driving tumour development while reducing drug toxicity due to altered drug metabolism encoded by the patient's genome. That said, a number of key challenges, both scientific and non-scientific, must be overcome if we are to fully exploit knowledge of cancer genomics to develop targeted therapeutics and informative biomarkers. The progress of research has yet to be translated into substantial clinical benefits, with the exception of a handful of drugs (tamoxifen, imatinib, trastuzumab). It is only recently that new targeted drugs have been integrated into the clinical armamentarium. So the question remains: will there be a day when doctors no longer make treatment choices based on population-based statistics but rather on the specific characteristics of individuals and their tumours?

Introduction

In excess of 100,000 new cases of cancer were diagnosed in Australia in 2010, and the impact of cancer care on patients, their carers and Australian society is hard to ignore. Cancer care itself consumes $3.8 billion per year in Australia, constituting close to one-tenth of the annual health budget. [1] As such, alterations to our approach to cancer care will have widespread impacts on the health of individuals as well as on our economy. The first ‘golden era’ of cancer treatment began in the 1940s, with the discovery of the effectiveness of the alkylating agent nitrogen mustard against non-Hodgkin's lymphoma. [2] Yet the landmark paper demonstrating that cancer development requires more than one gene mutation was published only 25 years ago. [3] Since the sequencing of the human genome, [4] numerous genes have been implicated in the development of cancer. Data from The Cancer Genome Atlas (TCGA) [5] and the International Cancer Genome Consortium (ICGC) [6] reveal that even within a cancer subtype, the mutations driving oncogenesis are diverse.

The more we learn about the molecular basis of carcinogenesis, the more the traditional paradigm of chemotherapy ‘cocktails’ classified by histomorphological features appears inadequate. In many instances, this classification system correlates poorly with treatment response, prognosis and clinical outcome. Patients within a given diagnostic category receive the same treatment despite biological heterogeneity, meaning that some with aggressive disease may be undertreated, and some with indolent disease may be overtreated. In addition, these generalised cytotoxic drugs have many side effects, low specificity and low concentrations delivered to tumours, and are limited by the development of resistance, which is an almost universal feature of cancer cells.

In theory, personalised treatment involves targeting the genomic aberrations driving tumour development while reducing drug toxicity due to altered drug metabolism encoded by the patient’s genome. The outgrowth of innovations in cancer biotechnology and computational science has enabled the interrogation of the cancer genome and examination of variation in germline DNA. Yet there remain many unanswered questions about the efficacy of personalised treatment and its applicability in clinical practice, which this review will address. The transition from morphology-based to a genetics-based taxonomy of cancer is an alluring revolution, but not without its challenges.

This article aims to outline the current methods in molecular profiling, explore the range of biomarkers available, examine the application of biomarkers in cancers common to Australia, such as melanoma and lung cancer, and to investigate the implications and limitations of personalised medicine in a 21st century context.

Genetic profiling of the cancer genome

We now know that individual tumour heterogeneity results from the gradual acquisition of genetic mutations and epigenetic alterations (changes in DNA expression that occur without alterations in DNA sequence). [7,8] Chromosomal deletions, rearrangements and gene mutations are selected for during tumour development. These defects, known as ‘driver’ mutations, ultimately modify protein signalling networks and create a survival advantage for the tumour cell. [8-10] As such, pathway components vary widely among individuals, leading to a variety of genetic defects between individuals with the same type of cancer.

Such heterogeneity necessitates the push for a complete catalogue of the genetic perturbations involved in cancer. This need for large-scale analysis of gene expression has been realised by current high-throughput technologies such as DNA array technology. [11,12] Typically, a DNA array is composed of multiple rows of complementary DNA (cDNA) samples lined up in dots on a small silicon chip. Today, arrays for gene expression profiling can accommodate over 30,000 cDNA samples. [13] Pattern recognition software and clustering algorithms allow tumour tissue specimens with similar repertoires of expressed genes to be grouped together. This has led to an explosion of genome-wide association studies (GWAS), which have identified new chromosomal regions and DNA variants. This information has been used to develop multiplexed tests that hunt for a range of possible mutations in an individual's cancer, to assist clinical decision-making. The HapMap aims to identify the millions of single nucleotide polymorphisms (SNPs), single nucleotide differences in the DNA sequence that may confer individual differences in susceptibility to disease. The HapMap has identified low-risk genes for breast, prostate and colon cancers. [14] TCGA and ICGC have begun cataloguing significant mutation events in common cancers. [5,6] OncoMap provides such an example, where alterations in multiple genes are screened by mass spectrometry. [15]
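
As an illustrative sketch of the kind of unsupervised clustering applied to expression profiles (a toy example: the random matrix, the 20 x 500 dimensions, the linkage method and the choice of three clusters are all assumptions, not parameters taken from the cited studies):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    # Rows = tumour samples, columns = genes; random values stand in
    # for normalised expression measurements from a microarray.
    expression = rng.normal(size=(20, 500))

    # Average-linkage hierarchical clustering on correlation distance,
    # a common choice for gene expression data.
    tree = linkage(expression, method="average", metric="correlation")

    # Cut the dendrogram into three putative molecular subgroups.
    labels = fcluster(tree, t=3, criterion="maxclust")
    print(labels)  # cluster assignment for each tumour sample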

The reproducibility and accuracy of microarray data need to be interpreted cautiously. ‘Noise’ from analysing thousands of genes can lead to false predictions and, as such, it is difficult to compare results across microarray studies. In addition, cancer cells alter their gene expression when removed from their environment, potentially yielding misleading results. The clinical utility of microarrays is difficult to determine, given the variability of the assays themselves as well as the variability between patients and between the laboratories performing the analyses.

Types of cancer biomarkers

This shift from entirely empirical cancer treatment to stratified and eventually personalised approaches requires the discovery of biomarkers and the development of assays to detect them (Table 1). With recent technological advances in molecular biology, the range of cancer biomarkers has expanded, which will aid the implementation of effective therapies into the clinical armamentarium (Figure 1). However, during the past two decades, fewer than twelve biomarker assays have been approved by the US Food and Drug Administration (FDA) for monitoring response, surveillance or the recurrence of cancer. [16]

Early detection biomarkers

Most current methods of early cancer detection, such as mammography or cervical cytology, are based on anatomic changes in tissues or morphologic changes in cells. Various molecular markers, such as proteins or genetic changes, have been proposed for early cancer detection. For example, prostate-specific antigen (PSA) is secreted by prostate tissue and has been approved for the clinical management of prostate cancer. [17] CA-125 is recognised as an ovarian cancer-associated protein. [18]

Diagnostic biomarkers

Examples of commercial biomarker tests include the Oncotype DX and MammaPrint tests for breast cancer. Oncotype DX is designed for women newly diagnosed with oestrogen receptor (ER)-positive breast cancer that has not spread to the lymph nodes. The test calculates a ‘recurrence score’ based on the expression of 21 genes. Not covered by Medicare, it costs approximately US$4,075 per patient. One study found that this test persuaded oncologists to alter their treatment recommendations for 30% of their patients. [19]
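
To illustrate the general shape of a multi-gene recurrence score (a deliberately simplified, hypothetical scheme: the gene names, weights, reference-gene normalisation and 0-100 rescaling shown here are assumptions for illustration and are not the proprietary Oncotype DX algorithm):

    # Hypothetical multi-gene "recurrence score": a weighted sum of
    # reference-normalised expression values, rescaled to a 0-100 range.
    def recurrence_score(expression: dict[str, float],
                         weights: dict[str, float],
                         reference_genes: list[str]) -> float:
        # Normalise each gene against the mean of the reference genes.
        ref = sum(expression[g] for g in reference_genes) / len(reference_genes)
        raw = sum(w * (expression[g] - ref) for g, w in weights.items())
        # Clamp to 0-100; the scaling constants are arbitrary here.
        return max(0.0, min(100.0, 10.0 * raw + 50.0))

    # Example with made-up expression values and weights:
    score = recurrence_score({"GENE_A": 7.2, "GENE_B": 5.1, "REF_1": 6.0},
                             {"GENE_A": 0.5, "GENE_B": -0.3},
                             ["REF_1"])
    print(round(score, 1))  # 58.7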

Prognostic biomarkers

The tumour, node, metastasis (TNM)-staging system is the standard for prediction of survival in most solid tumours based on clinical, gross and pathologic criteria. Additional information can be provided with prognostic biomarkers, which indicate the likelihood that the tumour will return in the absence of any further treatment. For example, for patients with metastatic nonseminomatous germ cell tumours, serum-based biomarkers include α-fetoprotein, human chorionic gonadotropin, and lactate dehydrogenase.

Predictive biomarkers

Biomarkers can also prospectively predict response (or lack of response) to specific therapies. The widespread clinical use of ER and progesterone receptor (PR) status to select patients for tamoxifen, and of human epidermal growth factor receptor-2 (HER-2) status to select patients for trastuzumab, is evidence of the usefulness of predictive biomarkers. Epidermal growth factor receptor (EGFR) is overexpressed in multiple cancer types. EGFR mutation is a strong predictor of a favourable outcome with EGFR tyrosine kinase inhibitors such as gefitinib in non-small cell lung carcinoma (NSCLC), while anti-EGFR monoclonal antibodies such as cetuximab or panitumumab are used in colorectal cancer. [20] Conversely, the same cancers with KRAS mutations are associated with primary resistance to anti-EGFR therapies. [21,22] This demonstrates that biomarkers, such as KRAS mutation status, can predict which patients may or may not benefit from anti-EGFR therapy (Figure 2).
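
A minimal sketch of the kind of biomarker-guided decision logic described here and in Figure 2 (purely illustrative and not a clinical guideline; drug names follow the examples in the text):

    def anti_egfr_decision(cancer: str, kras_mutant: bool, egfr_mutant: bool) -> str:
        """Summarise the predictive relationships discussed in the text."""
        if cancer == "colorectal":
            if kras_mutant:
                return "primary resistance expected; anti-EGFR antibody unlikely to benefit"
            return "anti-EGFR antibody (e.g. cetuximab or panitumumab) may benefit"
        if cancer == "NSCLC":
            if egfr_mutant:
                return "EGFR tyrosine kinase inhibitor (e.g. gefitinib) likely to benefit"
            return "EGFR tyrosine kinase inhibitor benefit less likely"
        return "no predictive rule illustrated for this cancer type"

    print(anti_egfr_decision("colorectal", kras_mutant=True, egfr_mutant=False))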

Pharmacodynamic biomarkers

Determining the correct dosage for the majority of traditional chemotherapeutic agents presents a challenge because most drugs have a narrow therapeutic index. Pharmacodynamic biomarkers, in theory, can be used to guide dose selection. The magnitude of BCR–ABL kinase activity inhibition was found to correlate with clinical outcome, possibly justifying the personalised selection of drug dose. [23]

The role of biomarkers in common cancers

Biomarkers currently have a role in the prediction or diagnosis of a number of common cancers (Table 2).

Breast Cancer

Breast cancer can be used to illustrate the contribution of molecular diagnostics to personalised treatment. Discovered in the 1970s, tamoxifen was the first targeted cancer therapy directed against the oestrogen signalling pathway. [8] Approximately three quarters of breast cancer tumours express hormone receptors for oestrogen and/or progesterone. Modulating either the hormone ligand or the receptor has been shown to be effective in treating hormone receptor-positive breast cancer for over a century. Although quite effective for a subset of patients, this strategy has adverse partial oestrogenic effects in the uterus and vascular system, resulting in an increased risk of endometrial cancer and thromboembolism. [9,10] Alternative approaches targeting ligand production instead of the ER itself were hypothesised to be more effective with fewer side effects. Recent data suggest that the use of specific aromatase inhibitors (anastrozole, letrozole and exemestane), which block the formation of endogenous oestrogen, may be superior in both the adjuvant [24] and advanced disease settings. [25]

Lung Cancer

Lung cancer is the most common cause of cancer-related mortality in both genders in Australia. [26] Many investigators are using panels of serum biomarkers in an attempt to increase the sensitivity of prediction. Numerous potential DNA biomarkers, such as the overactivation of oncogenes, including K-ras, myc, EGFR and Met, or the inactivation of tumour suppressor genes, including p53 and Rb, are being investigated. Gefitinib was found to be superior to carboplatin-paclitaxel in EGFR-mutant non-small cell lung cancer [20] and to improve progression-free survival, with acceptable toxicity, when compared with standard chemotherapy. [27]

Melanoma

Australia has the highest skin cancer incidence in the world. [28] Approximately two in three Australians will be diagnosed with skin cancer before the age of 70. [29] Currently, the diagnosis and prognosis of primary melanoma is based on histopathologic and clinical factors. In the genomic age, the number of modalities for identifying and subclassifying melanoma is rapidly increasing. These include immunohistochemistry of tissue sections and tissue microarrays, and molecular analysis using RT-PCR, which can detect relevant multidrug resistance-associated protein (MRP) gene expression, as well as characterisation of germ-line mutations. [30] It is now known that most malignant melanomas harbour a V600E BRAF mutation. [31] Treatment of metastatic melanoma with PLX4032 resulted in complete or partial tumour regression in the majority of patients. Responses were observed at all sites of disease, including the bone, liver and small bowel. [32]

Leukaemia

Leukaemia has progressed from being seen merely as a disease of the blood to one that consists of 38 different subtypes. [33] Historically a fatal disease, chronic myeloid leukaemia (CML) has been redefined by the presence of the Philadelphia chromosome. [34] In 1998, imatinib entered clinical trials as a tyrosine kinase inhibitor. This drug has proven so effective that patients with CML now have mortality rates comparable to those of the general population. [35]

Colon Cancer

Cetuximab was the first anti-EGFR monoclonal antibody approved in the US for the treatment of colorectal cancer, and the first agent with proven clinical efficacy in overcoming resistance to the topoisomerase I inhibitor irinotecan. [22] In 2004, bevacizumab was approved for use in the first-line treatment of metastatic colorectal cancer in combination with 5-fluorouracil-based chemotherapy. Extensive investigation since that time has sought to define bevacizumab's role in different chemotherapy combinations and in early stage disease. [36]

Lymphoma

Rituximab is another monoclonal antibody, directed against human CD20. Rituximab alone has been used as first-line therapy in patients with indolent lymphoma, with overall response rates of approximately 70% and complete response rates of over 30%. [37,38] Monoclonal antibodies directed against other B-cell-associated antigens, as well as new anti-CD20 and anti-CD80 monoclonal antibodies (such as galiximab), are being investigated in follicular lymphoma. [39]

Implication and considerations of personalised cancer treatment

Scientific considerations

Increasing information has revealed the incredible complexity of the cancer tumourigenesis puzzle; there are not only point mutations, small insertions and deletions, and SNPs, but also genomic rearrangements and copy number changes. [40-42] These studies have documented pervasive variability in these somatic mutations, [7,43] so that thousands of human genomes and cancer genomes need to be completely sequenced to obtain a complete landscape of causal mutations. And what about epigenetic and non-genomic changes? While there is intense research being conducted on the sorts of molecular biology techniques discussed, none have been prospectively validated in clinical trials. In clinical practice, what use is a ‘gene signature’ if it provides no more discriminatory value than performance status or TNM staging?

Much research has so far focused on primary cancers; what about metastatic cancers, which account for considerable mortality? The inherent complexity of genomic alterations in late-stage cancers, coupled with the interactions that occur between tumour and stromal cells, means that most often we are not measuring what we are treating. If we choose therapy based on the primary tumour, but we are treating the metastasis, we are likely giving the wrong therapy. Despite our increasing knowledge about metastatic colonisation, we still have little understanding of how metastatic tumour cells behave as solitary disseminated entities. Until we identify optimal predictors for metastases and gain an understanding of the establishment of micrometastases and their activation from latency, personalised therapy should be used judiciously.

In addition, moving from a genomic discovery to delivering patients a new targeted therapy with suitable pharmacokinetic properties, safety and demonstrable efficacy in randomised clinical trials is difficult, costly and time-consuming. The first cancer-related gene mutation was discovered nearly thirty years ago: a point mutation in the HRAS gene that causes a glycine-to-valine substitution at codon twelve. [44,45] The subsequent identification of similar mutations in the KRAS family [46-48] ushered in a new field of cancer research activity. Yet it is only now, three decades later, that KRAS mutation status is affecting cancer patient management as a ‘resistance marker’ of tumour responsiveness to anti-EGFR therapies. [21]

Ethical and Moral Considerations

The social and ethical implications of genetic research are significant; indeed, 3% of the budget for the Human Genome Project was allocated to addressing them. These worries range from “Brave New World”-esque fears about the beginnings of “genetic determinism” to invasions of “genetic privacy”. An understandable qualm regarding predictive genetic testing is discrimination. For example, if a person is found to be genetically predisposed to developing cancer, will employers be allowed to make such individuals redundant? Will insurance companies deny claims on the same basis? In Australia, the Law Reform Commission's report details recommendations on the protection of privacy, protection against unfair discrimination and the maintenance of ethical standards in genetics, the majority of which were accepted by the Commonwealth. [49,50] In addition, the Investment and Financial Services Association states that no applicant will be required to undergo a predictive genetic test for life insurance. [51] Undeniably, the potentially negative psychological impact of testing needs to be balanced against the benefits of detecting low, albeit significant, genetic risk. For example, population-based early detection testing for ovarian cancer is hindered by the inappropriately low positive predictive value of existing testing regimes.

As personalised medicine moves closer to becoming a reality, it raises important questions about health equality. Such discoveries are magnifying the disparity in the accessibility of cancer care for minority groups and the elderly, as evidenced by their higher incidence rates and lower rates of cancer survival. This is particularly relevant in Australia, given the pre-existing barriers to medical care for Indigenous Australians. Even after adjusting for later presentation and remoteness, there remain significant survival disparities between the Indigenous and non-Indigenous populations. [52] Therefore, a number of questions remain. Will personalised treatment serve only to exacerbate the health disparities between the developing and developed world? Even if effective personalised therapies are proven through clinical trials, how will disadvantaged populations access this care given their difficulties in accessing the services that are currently available?

Economic Considerations

The next question that arises is: who will pay? At first glance, stratifying patients may seem unappealing to the pharmaceutical industry, as it may mean trading the “blockbuster” drug offered to the widest possible market for a diagnostic/therapeutic drug that is highly effective only in a specific patient cohort. Instead of drugs developed for mass use (and mass profit), drugs designed through pharmacogenomics for a niche genetic market will be exceedingly expensive. Who will cover this prohibitive cost – the patient, their private health insurer or the Government?

Training Considerations

The limiting factor in personalised medicine could be the treating doctor's familiarity with utilising genetic information. This can be addressed by enhancing genetic ‘literacy’ amongst doctors. The role of genetics and genetic counselling is becoming increasingly recognised, and clinical genetics is now a subspecialty within the Royal Australasian College of Physicians. If personalised treatment improves morbidity and mortality, the proportion of cancer survivors requiring follow-up and management will also rise, and delivery of this service will fall on oncologists and general practitioners, as well as other healthcare professionals. To customise medical decisions for a cancer patient meaningfully and responsibly on the basis of the complete profile of his or her tumour genome, a physician needs to know which specific data points are clinically relevant and actionable. For example, the discovery of BRAF mutations in melanoma [32] has shown us the key first step in making this a reality, namely the creation of a clear and accessible reference of somatic mutations in all cancer types.

Downstream of this is the education that medical universities provide to their graduates in the clinical aspects of genetics. In order to maximise the application of personalised medicine, it is imperative for current medical students to understand how genetic factors for cancer and drug response are determined, how they are altered by gene-gene interactions, and how to evaluate the significance of test results in the context of an individual patient with a specific medical profile. Students should acquaint themselves with the principles of genetic variation and how genome-wide studies are conducted. Importantly, we need to understand that the principles of simple Mendelian genetics cannot be applied to the genomics of complex diseases such as cancer.

Conclusion

The importance of cancer genomics is evident in every corner of cancer research. However, its presence in the clinic is still limited. It is undeniable that much important work remains to be done in the burgeoning area of personalised therapy, from making sense of the data collected in genome-wide association studies and understanding the genetic behaviour of metastatic cancers, to regulatory and economic issues. This leaves us with the parting question: are humans just the sum of their genes?

Conflicts of interest

None declared.

Correspondence

M Wong: may.wong@student.unsw.edu.au

Is Chlamydia trachomatis a cofactor for cervical cancer?

Introduction

The most recent epidemiological publication on the worldwide burden of cervical cancer reported that cervical cancer (0.53 million cases) was the third most common female cancer in 2008, after breast (1.38 million cases) and colorectal cancer (0.57 million cases). [1] Cervical cancer is the leading cause of cancer-related death among women in Africa, Central America, South-Central Asia and Melanesia, indicating that it remains a major public health problem in spite of effective screening methods and vaccine availability. [1]

The age-standardised incidence of cervical cancer in Australian women (20-69 years) decreased by approximately 50% from 1991 (the year the National Cervical Screening Program was introduced) to 2006 (Figure 1). [2,3] Despite this drop, the Australian Institute of Health and Welfare estimated that cervical cancer incidence and mortality would increase in 2010 by 1.5% and 9.6% respectively. [3]

Human papillomavirus (HPV) is required but not sufficient to cause invasive cervical cancer (ICC). [4-6] Not all women with an HPV infection progress to develop ICC. This implies the existence of cofactors in the pathogenesis of ICC, such as smoking, sexually transmitted infections, age at first intercourse and number of lifetime sexual partners. [7] Chlamydia trachomatis (CT) is the most common bacterial sexually transmitted infection (STI) and has been associated with the development of ICC in many case-control and population-based studies. [8-11] However, a clear cause-and-effect relationship has not been elucidated between CT infection, HPV persistence and progression to ICC as an end stage. This article aims to review the literature for evidence that CT acts as a cofactor in the development of ICC and the establishment of HPV infection. Understanding CT as a risk factor for ICC is crucial, as it is amenable to prevention.

Aim: To review the literature to determine whether infection with Chlamydia trachomatis (CT) acts as a cofactor in the pathogenesis of invasive cervical cancer (ICC) in women. Methods: Web-based Medline and Australian Institute of Health and Welfare (AIHW) searches for the key terms: cervical cancer (including neoplasia, malignancy and carcinoma), chlamydia, human papillomavirus (HPV) and immunology. The search was restricted to English language publications on ICC (both squamous cell carcinoma and adenocarcinoma) and cervical intraepithelial neoplasia (CIN) published between 1990 and 2010. Results: HPV is essential but not sufficient to cause ICC. Past and current infection with CT is associated with squamous cell carcinoma of the cervix in HPV-positive women. CT infection induces both protective and pathologic immune responses in the host, depending on the balance between Type-1 helper cell- and Type-2 helper cell-mediated immunity. CT most likely behaves as a cervical cancer cofactor by 1) evading the host immune system and 2) enhancing chronic inflammation. These factors increase susceptibility to subsequent HPV infection and promote HPV persistence in the host. Conclusion: Prophylaxis against CT may be significant in reducing the incidence of ICC in HPV-positive women. GPs should be raising awareness of the association between CT and ICC among their patients.

Evidence for the role of HPV in the aetiology and pathogenesis of cervical cancer

HPV is a species-specific, non-enveloped, double-stranded DNA virus that infects squamous epithelia; its capsid consists of the major protein L1 and the minor protein L2. More than 130 HPV types have been classified based on their genotype, and HPV 16 (50-70% of cases) and HPV 18 (7-20% of cases) are the most important players in the aetiology of cervical cancer. [6,12] Genital HPV is usually transmitted via skin-to-skin contact during sexual intercourse but does not require vaginal or anal penetration, which implies that condoms offer only partial protection against CIN and ICC. [6] The risk factors for contracting HPV infection are early age at first sexual activity, multiple sexual partners, early age at first delivery, increased number of pregnancies, smoking, immunosuppression (for example, human immunodeficiency virus infection or medication) and long-term oral contraceptive use. Social customs in endemic regions, such as child marriage, polygamy and high parity, may also increase the likelihood of contracting HPV. [13] More than 80% of HPV infections are cleared by the host's cellular immune response, which starts about three months after inoculation of the virus. HPV can be latent for 2-12 months post-infection. [14]

Molecular Pathogenesis

HPV particles enter basal keratinocytes of the mucosal epithelium via binding of virions to the basement membrane of disrupted epithelium. This is mediated by heparan sulfate proteoglycans (HSPGs) found in the extracellular matrix and on the cell surface of most cells. The virus is then internalised to establish an infection, mainly via a clathrin-dependent endocytic mechanism; however, some HPV types may use alternative uptake pathways to enter cells, such as a caveolae-dependent route or the involvement of tetraspanin-enriched domains as a platform for viral uptake. [15] The virus replicates in non-dividing cells that lack the necessary cellular DNA polymerases and replication factors. Therefore, HPV encodes proteins that reactivate cellular DNA synthesis in non-cycling cells, inhibit apoptosis and delay the differentiation of the infected keratinocyte, allowing viral DNA replication. [6] Integration of the viral genome into the host DNA causes deregulation of the E6 and E7 oncogenes of high-risk HPV types (HPV 16 and 18) but not of low-risk types (HPV 6 and 11). This results in the expression of E6 and E7 oncogenes throughout the epithelium, producing the aneuploidy and karyotypic chromosomal abnormalities that accompany keratinocyte immortalisation. [5]

Natural history of HPV infection and cervical cancer

Low-risk HPV infections are usually cleared by cellular immunity coupled with seroconversion and the production of antibodies against the major coat protein L1. [5,6,12] Infection with high-risk HPV is strongly associated with the development of squamous cell carcinoma and adenocarcinoma of the cervix, an association that is modified by cofactors such as smoking and STIs. [4,9,10] The progression from HPV infection to cervical cancer is illustrated schematically in Figure 2.

Chlamydia trachomatis and the immune response

CT is a true obligate intracellular pathogen and the most common bacterial cause of STIs. It is associated with sexual risk-taking behaviour and, because of its slow growth cycle, frequently causes asymptomatic and therefore undiagnosed genital infections. [16] A CT infection is targeted by innate immune cells, T cells and B cells. Protective immune responses control the infection, whereas pathological responses lead to chronic inflammation that causes tissue damage. [17]

Innate immunity

The mucosal epithelium of the genital tract provides the first line of host defence. If CT succeeds in entering the mucosal epithelium, the innate immune system is activated through the recognition of pathogen-associated molecular patterns (PAMPs) by pattern recognition receptors such as the Toll-like receptors (TLRs). Although CT lipopolysaccharide can be recognised by TLR4, TLR2 is more important for signalling pro-inflammatory cytokine production. [18] This leads to the production of pro-inflammatory cytokines such as interleukin-1 (IL-1), IL-6, tumour necrosis factor-alpha (TNF-alpha) and granulocyte-macrophage colony-stimulating factor (GM-CSF). [17] In addition, chemokines such as IL-8 increase the recruitment of innate immune cells such as macrophages, natural killer (NK) cells, dendritic cells (DCs) and neutrophils, which in turn produce more pro-inflammatory cytokines to restrict CT growth. Infected epithelial cells release matrix metalloproteinases (MMPs) that contribute to tissue proteolysis and remodelling; neutrophils also release MMPs and elastases that contribute to tissue damage. NK cells produce interferon-gamma (IFN-gamma), which drives CD4 T cells toward a Th1-mediated immune response. The infected tissue is infiltrated by a mixture of CD4 T cells, CD8 T cells, B cells and plasma cells (PCs). [17,19,20] DCs are essential for processing and presenting CT antigens to T cells, thereby linking innate and adaptive immunity.

Adaptive immunity

Both CD4 and CD8 T cells contribute to the control of CT infection. In 2000, Morrison et al. showed that B cell-deficient mice depleted of CD4 cells were unable to clear CT infection. [21] However, another study in 2005 showed that passive transfer of chlamydia-specific monoclonal antibodies into B cell-deficient, CD4-depleted mice restored their ability to control a secondary CT infection. [22] This indicates a strong synergy between CD4 cells and B cells in the adaptive immune response to CT, with B cells producing CT-specific antibodies to combat the pathogen. In contrast, CD8 cells produce IL-4, IL-5 and IL-13, which do not appear to protect against chlamydial infection and may even indirectly enhance chlamydial load by inhibiting the protective CD4 response. [23] A similar observation was made by Agrawal et al., who examined the cervical lymphocyte cytokine responses of 255 CT antibody-positive women with or without fertility disorders (infertility and multiple spontaneous abortions) and of healthy control women negative for CT serum IgM or IgG. [20] The study revealed a significant increase in CD4 cells in the cervical mucosa of fertile women compared with women with fertility disorders and with negative controls, and only a very small increase in CD8 cells in the cervical mucosa of CT-infected women in both groups. Cervical cells from the women with fertility disorders secreted higher levels of IL-1beta, IL-6, IL-8 and IL-10 in response to CT, whereas cervical cells from antibody-positive fertile women secreted significantly higher levels of IFN-gamma and IL-12. This suggests that an immune response skewed toward Th1 prevalence protects against chronic infection. [20]

The pathological response to CT can cause inflammatory damage within the upper reproductive tract, arising either from a failed or weak Th1 response that permits chronic infection, or from an exaggerated Th1 response. Alternatively, chronic infection can occur if the Th2 response dominates the Th1 response, resulting in autoimmunity and direct cell damage, which in turn enhance tissue inflammation. Inflammation also increases the expression of human heat shock protein (HSP), which induces the production of IL-10 via autoantibodies, leading to CT-associated pathology such as tubal blockage and ectopic pregnancy. [24]

Evidence that Chlamydia trachomatis is a cofactor for cervical cancer

Whilst it is established that HPV is a necessary factor in the development of cervical cancer, it remains unclear why the majority of women infected with HPV do not progress to ICC. Several studies in the last decade have focused on the role of STIs in the pathogenesis of ICC and have found that CT infection is consistently associated with squamous cell ICC.

In 2000, Koskela et al. performed a large case-control study within a cohort of 530,000 Nordic women to evaluate the role of CT in the development of ICC. [10] One hundred and eighty-two women with ICC (diagnosed during a mean follow-up of five years after serum sampling) were identified by linking the data files of three Nordic serum banks with the cancer registries of Finland, Norway and Sweden. Microimmunofluorescence (MIF) was used to detect CT-specific IgG, and HPV 16-, 18- and 33-specific IgG antibodies were determined by standard ELISAs. Serum antibodies to CT were associated with an increased risk of cervical squamous cell carcinoma (HPV- and smoking-adjusted odds ratio (OR), 2.2; 95% confidence interval (CI), 1.3-3.5). The association persisted after adjustment for smoking in both HPV 16-seronegative and HPV 16-seropositive cases (OR, 3.0; 95% CI, 1.8-5.1 and OR, 2.3; 95% CI, 0.8-7.0, respectively). This study provided sero-epidemiological evidence that CT could cause squamous cell ICC; however, the authors were unable to explain the biological basis of the association.
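For readers less familiar with these statistics, the odds ratios and confidence intervals quoted in this and the following studies have the standard epidemiological form. As a general sketch (using generic cell counts from a 2x2 exposure-by-outcome table, not figures taken from the Koskela study), with a and b the numbers of CT-seropositive cases and controls, and c and d the corresponding CT-seronegative counts:

\[
\mathrm{OR} = \frac{a/c}{b/d} = \frac{ad}{bc}, \qquad
95\%\ \mathrm{CI} = \exp\!\left( \ln(\mathrm{OR}) \pm 1.96\sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}} \right)
\]

Adjusted ORs, such as those reported by Koskela et al., are instead estimated from a logistic regression model that includes the potential confounders (here HPV serostatus and smoking) as covariates, and so cannot be reproduced from a simple 2x2 table.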

Further studies investigating the association between CT and ICC emerged in 2002. Smith et al. performed a hospital-based case-control study of 499 women with ICC from Brazil and 539 from Manila, which revealed that CT-seropositive women had a twofold increase in the risk of squamous cell ICC (OR, 2.1; 95% CI, 1.1-4.0) but not of adenocarcinoma or adenosquamous ICC (OR, 0.8; 95% CI, 0.3-2.2). [8] Similarly, Wallin et al. conducted a population-based prospective study of 118 women who developed cervical cancer after a normal Pap smear (an average of 5.6 years later), with follow-up extending over 26 years. [25] PCR analysis for CT and HPV DNA showed that the relative risk of ICC associated with past CT infection, adjusted for concomitant HPV DNA positivity, was 17.1. The authors also concluded that the presence of CT and the presence of HPV were not interrelated.

In contrast, another study examining the association between CT and HPV in women with cervical intraepithelial neoplasia (CIN) found a higher rate of CT infection in HPV-positive women (29/49) than in HPV-negative women (10/80) (p<0.001). [26] However, no correlation between HPV and CT co-infection was found, and the authors suggested that the increased rate of CT infection in HPV-positive women is presumably due to HPV-related factors, including modulation of the host's immunity. In 2004, a case-control study of 1,238 women with ICC and 1,100 control women in seven countries, coordinated by the International Agency for Research on Cancer (IARC) in France, also supported the findings of the previous studies. [7]
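As an illustrative aside, the crude (unadjusted) odds ratio implied by the counts quoted above from the CIN study [26] can be worked out directly; this is an illustrative calculation only, not a statistic reported by the original authors, and it makes no adjustment for confounders:

\[
\mathrm{OR}_{\mathrm{crude}} = \frac{29/20}{10/70} = \frac{29 \times 70}{20 \times 10} = \frac{2030}{200} \approx 10.2
\]

That is, the odds of CT infection were roughly ten times higher among the HPV-positive women (29 of 49) than among the HPV-negative women (10 of 80), consistent with the highly significant difference (p<0.001) reported.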

Strikingly, a more recent study, published in 2010, found no association between CT infection, as assessed by DNA or IgG, and the risk of cervical premalignancy after controlling for carcinogenic HPV-positive status. [11] The authors attributed the difference from previous findings to the retrospective nature of the IARC study, in which HPV and CT status at the relevant times were not available. [7] However, other prospective studies have also identified an association between CT and the development of ICC. [9,25] The results of this single study therefore remain at odds with practically every other study that has found an association between CT and ICC in HPV-infected women.

Consequently, it is evident that CT infection plays a role as a cofactor in squamous cell ICC in HPV-infected women, but it is not an independent cause of ICC as previously suggested by Koskela et al. [10] Previously reported cause-and-effect associations between CT and HPV most likely arise from CT infection increasing susceptibility to HPV. [9,11,27] The mechanisms by which CT can act as a cofactor for ICC relate to CT-induced inflammation (associated with metaplasia) and evasion of the host immune response, which increase susceptibility to HPV infection and enhance HPV persistence in the host. CT can directly degrade the RFX-5 and USF-1 transcription factors, which induce the expression of MHC class I and MHC class II respectively. [17,28] This prevents recognition of both HPV and CT by CD4 and CD8 cells, thus impairing T-cell effector functions. CT can also suppress IFN-gamma-induced MHC class II expression by selectively disrupting IFN-gamma signalling pathways, thereby evading host immunity. [28] Additionally, as discussed above, CT induces inflammation and metaplasia of infected cells, predisposing them to become target cells for HPV. CT infection may also increase the access of HPV to the basal epithelium and increase HPV viral load. [16]

Conclusion

There is sufficient evidence to suggest that CT infection can act as a cofactor in the development of squamous cell ICC, given the consistent positive correlations between CT infection and ICC in HPV-positive women. CT evades the host immune response and promotes chronic inflammation, and it is presumed that this prevents the clearance of HPV from the body, thereby increasing the likelihood of developing ICC. More studies are needed to establish a clear biological pathway linking CT to ICC that supports the positive correlation found in epidemiological studies. An understanding of the significant role played by CT as a cofactor in ICC development should be used to maximise efforts in CT prophylaxis, starting at the primary health care level. Novel public health strategies must be devised to reduce CT transmission and raise awareness among women.

Conflicts of interest

None declared.

Correspondence

S Khosla: surkhosla@hotmail.com