A new paradigm for assessment of learning outcomes among Australian medical students: in the best interest of all medical students?

Truism: a claim that is so obvious or self-evident as to be hardly worth mentioning, except as a reminder or as a rhetorical or literary device.

Assertion: a proposition that is repeatedly restated regardless of contradiction.

“Medical education in Australia is a world-class system, and produces doctors of the highest capability.”

Truism or assertion? I suggest more assertion than truism. I ask you to consider: "How do we know if this statement is true?" How do you know how good you are, or whether you have met the necessary learning outcomes of your medical program?

Introduction: the need for change

Most of us involved in medical education would agree that – broadly speaking, across the sector – what we do in Australian medical education is indeed world class. However, most of us would also say that we could always do better, and that we should always be trying to improve the system.

Despite having a rigorous accreditation system, developed and delivered by the Australian Medical Council (AMC), we do not have explicit measures – across the system – of the outcomes of the educational process at our medical schools. [1] This gap first became apparent to me when preparing my school for AMC accreditation a few years ago. We were asked to provide data on the outcomes of the education (including what our graduates had gone on to achieve), and I found this challenging. We did source some data from Royal Colleges suggesting that our graduates performed at a similar level on College exams to other schools' graduates, and we had some data on rural practice, but overall the picture was patchy. Does this gap in the outcomes data matter? I think it does – I suggest that this is in fact a major issue for the sector to consider, and one that medical students need to engage with. Luckily, I believe we have a simple solution, and one that is within our grasp.

Some unanswered questions

As a medical dean, I wanted to know – in a quantitative and systematic way – how my students and my school perform. Specific questions I asked myself were:

How do my students and my school perform against a defined national or international standard?

How do my students and my school perform against other medical schools (nationally and globally)?

How can I gather this type of data and use it to improve the educational experiences of our students so that they become even better doctors?

How can I provide quantitative reassurance to my university, to the profession, and to society that we are doing what we can to fulfil our social responsibility, in terms of graduating competent doctors?

I was not, and am not, particularly interested in our performance in a ranking or league table sense.

Current state of play

Currently in Australia, each medical school designs and delivers its own examinations. There is some, but limited, collaboration in the design, delivery, evaluation, and quality assurance of exams across Australian medical schools. Certainly there is no externally focussed data or reporting that affirms the quality and outcomes of our medical degree programs. Firstly, we do not have a national standard against which to assess our students: we do have very clear AMC standards, but they do not include examination against a defined set of national competencies. Secondly, we do not have an explicit statement of what knowledge, which clinical skills, and which professional competencies our graduates are expected to display. Although there are projects underway, some under the auspices of Medical Deans Australia and New Zealand (MDANZ), there is no explicit set of expectations.

Not a national licensing exam

I do not argue for a national licensing exam. [2] Indeed, I argue against one. [3] The broad view of medical deans seems to be that a national exam would be very expensive and time-consuming, and could risk undermining the flexibility and diversity that exist within Australian medical programs. Thus, although the USA and Canada, by way of example, have rigorous national licensing exams, medical deans in Australia are not keen to go this route. Many of us see the suggestion of a national licensing exam as overly simplistic: a knee-jerk reaction that is not necessary to fix the existing gaps and is potentially damaging.

So, what is it we really need? If we are to provide a level of quantitative reassurance to society, to the profession (in the form of the Medical Board of Australia (MBA) and the AMC, for example), to our universities and – very importantly – to our students, we need a more collaborative approach to the assessment of medical student competency. We need some common exam questions, and hence some data that can be used to compare performance across students and schools.

Nothing new under the sun: collaborations already underway

This is not a new idea; indeed, there is an impressive collaboration of medical schools working within AMSAC (the Australian Medical Schools Assessment Collaboration) that has been doing this since 2009. [4] In 2012, 11 of the 19 medical schools in Australia (comprising 2492 students) took part in AMSAC. The 2012 "AMSAC exam" comprised 49 multiple-choice questions: 19 testing 'structure' and 30 testing 'function'. Even with this relatively modest number of questions, a substantial amount of high-quality data can be generated, providing significant insight into the performance of the collaborating schools.

Other collaborations are in place across Australia, each with a slightly different focus. For example, the ACCLAiM collaboration is focussing on the OSCE, developing common OSCE stations and common approaches to marking among a group of schools. The IDEAL collaboration is a global network of schools that all contribute to a very large database of exam questions. Other "item banks" that have been used, and could be used, include the AMC items used in the examinations taken by international medical graduates.

In contrast to AMSAC, which focuses on assessment of the biomedical sciences around the midpoint of undergraduate medical training, the AMAC (Australian Medical Assessment Collaboration) group focuses on testing knowledge, and the application of knowledge, at the end of medical school training. Funded initially by the Australian Learning and Teaching Council [5] and now by the Office for Learning and Teaching within the Department of Industry, Innovation, Science, Research and Tertiary Education, AMAC now includes the majority of medical schools and has already piloted an 'end-of-course' common exam. [6] AMAC's focus is on developing a strong collaborative culture among Australian medical schools that share a commitment to working together on assessment. Figure 1 shows one of the data outputs from AMAC, demonstrating how the performance of the collaborating schools varied in the pilot trial. In the future, I anticipate that some schools will choose to share a common exam, or part of an exam, while others will (at least initially) work within the group on identifying, developing and quality-assuring individual items for the item bank.

Strengthening our medical schools

It is vital to point out here that the underpinning philosophy is one of cooperation and collaboration. This is about schools working together to strengthen their own assessment capacity and capability, and to help others do the same. It is not at all about withdrawing responsibility for assessment from schools, nor about undermining each school’s capacity to change its assessment practice. By making appropriate use of common exam questions, schools can measure and benchmark performance. These data can inform schools of areas of weakness and strength, and hence lead to curriculum development. I suggest this approach is actually an essential part of a broad quality assurance process that should underpin Australian medical education.

This is not about league tables – and such a counter-productive approach can easily be avoided by making comparisons between schools completely anonymous, which is the way that AMSAC functions (Figure 1).

Shall we go global?

In 2012, the medical schools at The University of Queensland (UQ) and The University of Sydney both delivered the International Foundations of Medicine (IFOM) Clinical Sciences Exam (CSE) to final-year medical students as a required formative assessment. A detailed report on the UQ experience has been submitted for publication. The IFOM CSE is a 160-question multiple-choice exam that tests knowledge, and the application of knowledge, across most of the clinical disciplines. It is effectively an international version of the United States Medical Licensing Examination (USMLE) and as such provides a "global standard" against which we can test ourselves. Now, let's be careful about language here: I am not suggesting that the USMLE is the global standard, but it is a global standard, and indeed the IFOM is being designed and developed to be one explicit global standard that students and schools can make use of (should they wish). Of course, good practice would have us formally blueprint any exam against our own curriculum, and this is not possible for the IFOM. The exam is produced by the National Board of Medical Examiners (NBME) [7] in the USA under strict security constraints, and while high-level blueprinting is done by the international committee that oversees IFOM development, local blueprinting is not possible. Further research and evaluation are needed to explore how important this is.

So, having delivered the IFOM CSE once, we now have high-quality data showing how our medical students performed against one global standard, against the USMLE, and against our colleagues at The University of Sydney. All the information we have gathered is new and insightful, and is stimulating a range of thoughtful conversations. Of course the data is not definitive – it is no "magical gold standard" – but it is important data that is giving us genuine pause for thought.

Peering into the future

So, we are on a journey – a journey that I firmly believe is in the best interests of students, medical schools, and all our stakeholders, most importantly your future patients. Just 2–3 years ago, while several innovators were working on some of the collaborations described here, the importance of sector-wide change was not on the deans' agenda; now it is. What might it look like in the future?

The ideal scenario that would develop over the next few months and years is as follows:

  • A formal, voluntary collaboration between as many medical schools in Australia and New Zealand as possible, run under the auspices of MDANZ as the peak body representing these medical schools

  • A formal, inclusive, governance structure would be in place, with appropriate representation of all members
  • A proper business plan to support the collaboration would be developed and managed through the governance structure
  • The outputs of the collaboration would be used by each medical school in a way that it sees fit, and the activities and outputs could include:
    • Annual meeting on assessment practice and strategy
    • A common clinical sciences exam of 100–200 multiple-choice questions, covering all core clinical sciences, for schools that wish to take up a common exam
    • An item bank of MCQs and OSCE stations: schools might choose to use some common OSCE stations in their own clinical exams
    • A range of innovation projects to develop new assessment practices
  • Analysis and statistical support would be provided to allow schools and students to understand how they are performing in comparison with a defined national or global standard
  • Anonymised reports providing benchmarking data would be available to schools, and could be used in accreditation reports to reassure the AMC, the MBA, and society about learning outcomes.

Importantly, students need to be a part of this process. Medical students are deeply engaged in all aspects of medical education in Australia, and rightly so. Surely it is in students' best interests to know that their schools are working, all the time, to improve their educational experience and their educational outcomes?

Support for this publication/activity has been provided by the Australian Government Office for Learning and Teaching. The views expressed in this publication/activity do not necessarily reflect the views of the Australian Government Office for Learning and Teaching.

References

[1] Australian Medical Council (website) http://www.amc.org.au (accessed Feb 2012).

[2] Koczwara B, Tattersall MHN, Barton MB et al. Achieving equal standards in medical student education: is a national exit examination the answer? Med J Aust 2005; 182: 228–230.

[3] Harden RM. Five myths and the case against a European or national licensing examination. Medical Teacher 2009; 31: 217–220.

[4] Wilson I, O’Mara D. The Australian Medical Schools Assessment Collaboration: What do the differences mean? ANZAHPE Conference, Alice Springs NT, June 2011; Presentation 4331.

[5] Department of Education, Science and Training. Australian Medical Education Study. Canberra: Commonwealth of Australia, 2007. ISBN 978-0-642-77859-8.

[6] Wilkinson D, Edwards D, Coates H, Canny B, Pearce J, Schafer J, Papinczak T, McAllister L. The Australian Medical Assessment Collaboration: developing the foundations for a national assessment of medical student learning outcomes. Project report (www.olt.gov.au/project-developing-foundation-national-assessment-medical-student-learning-outcomes-2010) ISBN 978-1-922125-33-0.

[7] National Board of Medical Examiners (website) http://www.nbme.org/Schools/iFoM/index.html (accessed Feb 2012).