Linda O Lewin, MD - University of Maryland; Lorraine Beraho, Medical Student - University of Maryland
Introduction: Assessment tools are used extensively in medical education to evaluate students’ progress in developing the skills necessary for successful medical practice. Data must be reproducible in order to accurately reflect student performance; however, the reliability of most assessment tools used in medical student education is unknown.
Purpose: To evaluate a new 17-item oral case presentation rating scale and determine its inter-rater reliability (IRR).
Methods: Fifteen third-year medical students each recorded one inpatient oral case presentation. Each presentation was assessed by three trained pediatricians using a 17-item rating scale containing the following sub-categories: History (4 items), PE and Diagnostic Studies (4 items), Summary Statement (1 item), Assessment/Plan (3 items), Clinical Reasoning/Synthesis of Information (2 items), and General Aspects (3 items). Intraclass correlation coefficients (ICC) were used to determine the IRR of rater responses.
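For readers unfamiliar with how an ICC for multiple fixed raters is computed, the sketch below shows a two-way random-effects single-measure ICC, often labeled ICC(2,1). This is an illustration only: the abstract does not state which ICC form was used, and the function name `icc2_1` and the example data are assumptions, not the study's data or code.

```python
import numpy as np

def icc2_1(scores):
    """Two-way random-effects, single-measure ICC(2,1).

    scores: (n_subjects x k_raters) array, each row one presentation,
    each column one rater's score. Illustrative sketch, not the
    study's actual analysis code.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-presentation means
    col_means = scores.mean(axis=0)   # per-rater means
    # Two-way ANOVA sums of squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    # Mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    # Shrout-Fleiss ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With perfectly agreeing raters the function returns 1.0; disagreement among raters pulls the value toward (or below) zero.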
Results: The overall rating form had an IRR of 0.90 (95% CI 0.79-0.96). ICCs for grouped items were as follows:
Section              ICC    95% CI
History              0.74   0.50-0.89
Physical/Labs        0.87   0.73-0.95
Summary              0.65   0.37-0.85
Assessment/Plan      0.63   0.35-0.84
Clinical Reasoning   0.80   0.61-0.92
General Aspects      0.86   0.72-0.95
Overall Assessment   0.80   0.61-0.92
Conclusions: The IRR of the overall scale indicates a low degree of measurement error, with an ICC indicative of a very high level of agreement. Four of the seven sub-sections had ICCs in the range of almost perfect inter-examiner agreement, one showed strong agreement, and two showed moderate agreement. Limitations include the small number of raters (3) and the use of only pediatric cases and faculty. We conclude that the Oral Case Presentation Rating Scale is sufficiently reliable to serve as a useful assessment and feedback tool for medical student oral case presentations.