Council on Medical Student Education in Pediatrics


Journal Club

Rating the Raters: Do Preceptors with More Rating Experience Provide More Reliable Assessments of Medical Student Performance? Ferguson KJ, et al. Teaching and Learning in Medicine. 2012;24(2):101-105.

Reviewed by Randy Rockney

What was the study question?
Does experience with rating students act as a form of rater training and improve the quality of clinical evaluation form (CEF) ratings?

How was the study done?
The authors examined all CEFs completed for all students rotating through five clerkships at the University of Iowa in 2007-2008. Eight hundred fifty-seven preceptors (57% faculty and 43% residents) completed 5,493 CEFs for 150 medical students, with individual preceptors completing anywhere from 1 to more than 25 CEFs. The authors defined "high-experienced" raters as those who had completed 12 or more CEFs in the past year and "low-experienced" raters as those who had completed fewer than 12; this cutoff yielded an equal number of raters in each group. The aim was to determine whether there was a relationship between rater experience and rating reliability. In the authors' words, "As in other situations where no 'gold standard' for accuracy exists, rating accuracy can be viewed as the level of agreement between independent raters." (Only the faculty data were used for analysis in this study.)

What were the results?
Ratings from the high-experienced group were more reliable than ratings from the low-experienced group. Because the rater groups were defined to maximize group sample sizes rather than by a more informed definition of a high-experienced rater, the authors speculate that the true difference in rater reliability may be even greater than measured.

What are the implications of this study?
Greater reliability of CEFs from more experienced raters supports two concepts:

  1. Experience with rating students acts as a form of rater training, enhancing the quality of CEF ratings, and
  2. It may be valid to assign higher weight to CEFs completed by highly experienced evaluators.

Editor's note: The authors propose some interesting ideas for faculty development and for enhancing the reliability of CEFs: pair less experienced raters with more experienced preceptors, and encourage novice raters to observe student performance frequently in order to appreciate the range of skills that students display.

Another term often used in the medical education literature for a clinical evaluation form is ITER (in-training evaluation report). The reference section of this paper omits much of the pertinent literature on this topic. If you wish to learn more about CEFs, I would encourage you to include the term "ITER" in your search. (SLB)
