Prideaux D. Clarity of outcomes in medical education: do we know if it really makes a difference? [editorial] Medical Education 2004; 38: 580-581
Hays R and Baravilala W. Applying global standards across national boundaries: lessons learned from an Asia-Pacific example. Medical Education 2004; 38: 582-586
Talbot M. Monkey see, monkey do: a critique of the competency model in graduate medical education. Medical Education 2004; 38: 587-592
Rees CE. The problem with outcomes-based curricula in medical education: insights from educational theory. Medical Education 2004; 38: 593-598
Reviewed by Bruce Z Morgenstern, Mayo School of Medicine
These four papers form a linked series on the tension between the regulatory agencies' demand for measurable outcomes and educators' wish to let process shape outcomes - a principle that underpins Problem-Based Learning, in which outcomes are intended to be subordinate to learner-defined goals.
The commentary by Prideaux sets the stage and offers eight questions to guide the reader through the other three papers. The questions, reproduced verbatim (i.e., plagiarized, but duly ascribed to the author) here, are very cogent as we wrestle with the recent LCME rule discussed on the COMSEP list:
"1) How do outcome statements drive student learning? 2) Is student learning driven more by statements of outcomes or statements of what is to be assessed? 3) Do different types of outcomes (standards, competencies, broad outcome statements) drive student learning in different ways? 4) Do locally, nationally or globally derived outcomes drive student learning in different ways? 5) Do outcome statements encourage or discourage student direction in learning? 6) How do outcome statements drive teacher activity, selection of content, selection of learning activities assessment? 7) Does the adoption of externally derived outcomes affect teacher and student engagement with the curriculum? 8) Does participation in the process of determining outcome statements affect teacher and student engagement with the curriculum?"
Hays and Baravilala describe a site visit to the Fiji Medical School (I think that would have been a great experience) in which the World Federation of Medical Education (WFME) standards were applied. [The WFME standards can be found at "www.wfme.ku.dk/wfme"]
It became clear to the site visitors that international standards have their place, but they must be applied in a local context. The parallels to LCME visits are clear: can the standards that apply to my alma mater (Jefferson, with a large class size and a geographically separate campus) simply be applied to my most recent home (Mayo, with 42 students on essentially a single campus)?
Talbot, in the best-titled paper of the four (Monkey See, Monkey Do), takes on the competency movement with some challenges and a slightly different educational paradigm. He describes the "messianic fervor" of the competency movement, and points out that a "competency construct is a learning paradigm: it is not the same as competence, which is a step on the road to professional excellence.... Most experts seem to recognize that competence is a matter of degree, whereas the planner views it as a binary yes/no model."
Two tables from his paper, explained using anesthesia training as the example, are presented to help demonstrate some of his points. Many of us are familiar with aspects of the first table, which traces learners' progression from novice to expert. The second table describes the complexity of demonstrating competence, as opposed to discrete competencies.
Finally, Rees comes back to the questions posed by Prideaux. She notes that regulations are driving the development of an outcomes-based education model in medical education. The need to delineate the outcomes has led to settings in which "curriculum designers and teachers control product-orientated curricula, leading to student disempowerment." In an interesting conundrum, she points out that the strict listing of "learning outcomes cannot specify exactly what is to be achieved as a result of learning." She, like Prideaux and Talbot, feels that medical educators must establish the "value of precise learning outcomes before blindly adopting an outcomes-based model."
These articles are an interesting read. They do not answer questions, but they raise important issues as we rush headlong into outcomes-based education. Do all our courses need to be outcomes-based? Are we training skilled laborers, or are we training critical thinkers who can synthesize basic knowledge with new material - a capacity fostered by life-long learning skills and a certain fundamental skepticism? Will "competent" graduates advance the field as well as those who have had some opportunity for learner-centered learning?
(Please read the accompanying piece by Lane and Algranati in this issue of the Educator [pgs 7-11]. Do you believe the clerkship should have requirements for a set # of patient encounters or simulated encounters for core disease entities? If yes, do you believe case series/seminars could substitute for an encounter? Would a computerized case substitute for an encounter? Have you changed your curriculum since the ACGME competencies came out? Which do you use more, the ACGME competencies or the AAMC MSOP Guidelines? Have you changed your evaluation approach as a result of the ACGME competencies project? Steve Miller)