Council on Medical Student Education in Pediatrics


Journal Club

Opening the black box of clinical skills assessment via observation: A conceptual model. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Medical Education 2011;45:1048-1060.

Reviewed by Christopher B. White

What was the study question?
As most clerkship directors know, clinical preceptors vary widely in how they perceive and evaluate the clinical performance of the learners they supervise. The goal of this study was to explore the factors responsible for the variability in resident clinical evaluations made by supervising faculty.

How was the study done?
The study design was excellent - for details, see Figure 1 on page 1050. Forty-eight faculty experienced in teaching general internal medicine residents in the ambulatory setting participated. Each watched 4 videos and 2 live encounters of standardized PGY2 residents taking a history, performing a physical exam, or counseling a standardized patient. Each case was carefully scripted to portray unsatisfactory, satisfactory, or superior resident characteristics in both content and performance. After each of the 6 encounters, faculty completed a mini clinical evaluation exercise (mini-CEX), rating the resident on a scale of 1-9, and underwent a structured 15-minute interview with a trained study investigator. For the 2 live encounters, the faculty were given 10 minutes to provide feedback to the standardized resident. Faculty comments were analyzed qualitatively for emergent themes.

What were the results?
There was significant variability in the ratings of the same resident performance - this was reported in a prior study by the same authors. Four themes were identified to account for the variability among these experienced preceptors. 1) Frame of reference: using self, other doctors, patient outcomes, or a "gestalt" as the standard against which to judge resident performance. 2) Inference: making assumptions about a resident's abilities based on subjective judgments of observed behavior. 3) Variable approaches and strategies for synthesizing judgments into a numerical score. 4) Factors external to resident performance: context (the complexity of the patient encounter, the resident's prior experience, the faculty-resident relationship) and the resident's response to the feedback given by the preceptor.

What are the implications of these findings?
The data emerging from this study shed light on a perennial problem for clerkship and residency directors: why is there so much variability in the subjective grades assigned by experienced faculty? The authors present a useful conceptual model likening the many factors responsible for grade variation to the lens through which each preceptor views the learner-patient interaction (p 1056). Having identified the problem areas, the next step will be to figure out how to address each of them. Undoubtedly this will require a multi-factorial approach. But the biggest challenge will be to convince all of us who work as clinical preceptors that our own experiences and preconceptions are a major reason for the variability of learner evaluations, and to help us become more mindful and objective as we teach our students and residents.

Editor's note: This well-designed and well-interpreted study should give us pause the next time we assess a trainee.
