Council on Medical Student Education in Pediatrics

Journal Club

A somber thought: You may not be as good at assessing students as you think you are
"You're certainly relatively competent": assessor bias due to recent experiences.
Yeates P et al. Medical Education 2013;47:910-922.
Reviewed by Jeanine Ronan


What was the study question?
Previous studies have indicated that assessors evaluate learner performance by comparison to previously observed learners rather than against fixed standards. In this study, the investigators examined whether competence scores and perceived rankings of learners are influenced by previously observed performances. In essence, the study tested for a contrast effect in assessment, in which evaluators overemphasize the differences between a target performance and an anchor performance.

How was the study done?
Scripted videos of poor, borderline, and good performances by Foundation Year 1 (F1) doctors in the United Kingdom were shown to consultant physicians in either ascending or descending order of proficiency. The videos were mini-CEX encounters in which the F1 doctor collected a history of present illness and explained a diagnosis. The consultants scored overall learner performance on a 6-point Likert scale and provided a ranking by estimating what proportion of F1 doctors they would expect to perform better, worse, or the same. They were also asked how confident they felt about their scores.

What were the results?
Overall performance scores and perceived rankings both supported the presence of a contrast effect: consultants who viewed the videos in ascending order of performance assigned higher scores and higher rankings than those who viewed them in descending order. The effect was not significant for the poor performances. Confidence ratings were high overall and did not differ significantly between the groups.

What are the implications?
Contrast effects may matter whenever observers rate the performance of multiple trainees in sequence: the first trainee's performance may bias the evaluator toward higher or lower scores than warranted for subsequent trainees. The study is limited in that the effect was demonstrated in a laboratory setting rather than in real clinical encounters.

Editor's note: This article is rather dense and a bit theoretical, but it points out that our notion that experienced observers can reliably evaluate trainees, even when descriptive anchors are provided, might be incorrect, leading us to misjudge the competence of those we are responsible for evaluating. This seems particularly sobering in this era of EPAs and "entrustment" decisions and warrants further investigation. (LL)