Council on Medical Student Education in Pediatrics



COMSEP 2010 Albuquerque Meeting


Robert A. Dudas, MD, Pediatric Clerkship; Jorie Colbert, PhD, Medical Education; Seth Goldstein, MD; Michael Barone, MD MPH, Pediatric Clerkship, Johns Hopkins University, Baltimore, MD

Background: Faculty and resident global assessment of pediatric medical student knowledge is commonly employed as an assessment tool, but it may or may not be predictive of performance on a standardized test of medical knowledge.

Objective: We sought to determine the correlation between faculty and resident global ratings of medical student knowledge and student performance on the NBME subject examination.

Design/Methods: In this observational study, data were obtained from the records of Johns Hopkins School of Medicine medical students over the course of a single academic year (N = 120). Anchored 5-point summative assessments of medical knowledge were completed by faculty and residents at the end of the pediatric clerkship. All students completed the NBME subject examination in pediatrics. Students were grouped by NBME score (top 25%, middle 50%, and bottom 25%). Degree of correlation was determined using Pearson correlation coefficients, and means were compared by Student's t-test.

Results: One hundred four medical students were rated by at least 2 faculty members and 2 residents. Regression analysis demonstrated that resident (R² = 0.21, p < 0.001) and faculty (R² = 0.15, p < 0.001) ratings significantly predicted NBME scores. Students in the bottom 25% were rated significantly lower than students in the top 25% by both residents (p < 0.001) and faculty (p < 0.001), and were also rated significantly lower than students in the middle 50% (25.1%-74.9%) by both residents (p = 0.001) and faculty (p = 0.001). No significant differences emerged between resident and faculty ratings within any NBME score group, and resident and faculty ratings were moderately correlated (r = 0.45).

Discussion: Both faculty and resident global assessments of medical students' clinical knowledge showed criterion-related validity for predicting performance on the NBME subject examination in pediatrics.
Our results suggest that resident and faculty rater types are equally accurate, and that the combination of resident and faculty ratings has incremental validity over ratings by a single rater type, most likely due to the increased number of raters.
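As an illustration only (this is not part of the study and uses hypothetical data, not the study's records), the core statistics reported above, a Pearson correlation coefficient and the R² of a simple linear regression, can be sketched in a few lines of Python:

```python
# Minimal sketch of the abstract's correlation analysis.
# All data below are hypothetical placeholders, not study data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example: mean global ratings (anchored 1-5 scale)
# paired with NBME subject examination scores for eight students.
ratings = [3.2, 4.1, 2.8, 4.5, 3.9, 3.0, 4.8, 2.5]
nbme    = [72,  84,  68,  90,  80,  70,  93,  65]

r = pearson_r(ratings, nbme)
r_squared = r ** 2  # in simple linear regression, R² is the square of r
print(f"r = {r:.2f}, R^2 = {r_squared:.2f}")
```

In a simple (one-predictor) regression such as ratings predicting NBME scores, R² is just the square of the Pearson r, i.e., the fraction of variance in exam scores explained by the ratings.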