"You Can Observe a Lot Just by Watching" (Yogi Berra) Developing Expert-Derived Rating Standards for the Peer Assessment of Lectures Newman LR et al. Academic Medicine 2012;87:356-363.
Reviewed by Christopher B. White
What was the study question?
Can experts in medical education develop a valid and reliable assessment tool for the peer review of medical lectures?
How was the study done?
Seven experts in medical education at Harvard Medical School in Boston, MA, previously developed the "Peer Assessment of Medical Lecturing Instrument" (Academic Medicine 2009;84:1104-1110). In the current study, the authors went to extraordinary lengths to further "flesh out" the instrument's 5-point scale, developing consensus standards for each criterion. This process involved 24 meetings of the group over two years, with each member observing, rating, and discussing 40 medical lectures. The sessions were characterized by brutally honest discussions, which revealed important insights into why members rated the same lectures differently. The experts then independently observed and scored six new lectures using the newly developed tool to test its validity and reliability.
What were the results?
Ten of the eleven criteria and the "Overall Performance" rating showed a high positive correlation among the seven trained peer reviewers. There were no statistically significant differences among the raters.
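The review does not reproduce the authors' statistics, but a minimal sketch of the kind of analysis described might look like the following, assuming each of the seven raters scored the same six lectures on a 5-point scale. The data are entirely hypothetical (not from Newman et al.), and the choice of pairwise Pearson correlations plus a one-way ANOVA is an illustrative assumption, not necessarily the authors' exact method.

```python
import numpy as np
from scipy import stats
from itertools import combinations

# Hypothetical scores: 7 raters x 6 lectures on a 5-point scale
# (illustrative values only; not data from the study)
rng = np.random.default_rng(0)
true_quality = rng.uniform(2, 5, size=6)  # latent quality of each lecture
scores = np.clip(
    np.round(true_quality + rng.normal(0, 0.4, size=(7, 6))), 1, 5
)

# "High positive correlation among raters": mean pairwise Pearson r
pairwise_r = [stats.pearsonr(scores[i], scores[j])[0]
              for i, j in combinations(range(7), 2)]
print(f"mean pairwise r = {np.mean(pairwise_r):.2f}")

# "No significant differences among raters": one-way ANOVA,
# treating each rater's six scores as a group
f_stat, p_val = stats.f_oneway(*scores)
print(f"ANOVA across raters: F = {f_stat:.2f}, p = {p_val:.3f}")
```

In this framing, high pairwise correlations indicate that raters rank lectures similarly, while a nonsignificant ANOVA indicates that no rater was systematically harsher or more lenient than the others.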
What are the implications of these findings?
Editor's note: This well-done study fills a great void in the education literature. The authors plan to publish the facilitator's guide for use by others. This will surely prove to be a great resource for COMSEP! (SLB)