Council on Medical Student Education in Pediatrics



COMSEP Meeting in Nashville

Poster Presentation:


RELIABILITY OF A CHECKLIST USED FOR ASSESSMENT OF PEDIATRIC OTOSCOPY SKILLS

Authors:

Caroline R. Paul, MD, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin; Gwen C. McIntosh, MD, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin; Sarah Corden, MD, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin; Richard L. Ellis, MD, University of Wisconsin School of Medicine and Public Health, Fitchburg, Wisconsin; Lori Weber, MD, Gunderson Lutheran-Pediatrics, La Crosse, Wisconsin; Gary Williams, MD, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin


BACKGROUND:  Because students’ performance in real clinical settings is often evaluated by their individual attending faculty, more standardized evaluation instruments for learners are needed.  Faculty participating in a pediatric otoscopy curriculum that used a checklist as an evaluation measure reported that the checklist enhanced their observation of students’ skills and could easily be implemented in their clinical practices (Paul C. COMSEP, 2010).
OBJECTIVE:  This study assessed the accuracy, consistency of accuracy, and inter-rater agreement of an evaluation checklist for pediatric otoscopy skills.
METHODS:  A 12-item checklist covering 5 domains (discussion, equipment, distraction techniques, holding positions, and exam) was developed as an evaluation instrument for a pediatric otoscopy curriculum.  Nine videos were developed showing a physician performing the pediatric ear exam on a child in various manners.  Five pediatric faculty at a large teaching hospital who routinely serve as medical student preceptors were asked to view the videos and evaluate the physician performing the ear exam using the 12-item checklist.  The intra-class correlation coefficient was used to assess consistency in accuracy between faculty, and the kappa statistic was used to assess inter-rater agreement for all faculty correctly scoring items on the checklist.
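For readers who wish to reproduce this type of agreement analysis, the sketch below shows one way the two statistics could be computed in Python. The data layout, the simulated binary scores, and the choice of Fleiss’ kappa for multiple raters are illustrative assumptions only; they are not the study’s actual data or analysis code.

```python
# Illustrative sketch only: simulated ratings, not the study's data.
import numpy as np
import pandas as pd
import pingouin as pg
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical scores: 5 faculty raters x (9 videos x 12 checklist items),
# each item scored 1 (correct) or 0 (incorrect).
rng = np.random.default_rng(0)
n_raters, n_targets = 5, 9 * 12
scores = rng.integers(0, 2, size=(n_raters, n_targets))

# Intra-class correlation: long-format table of (target, rater, rating).
long = pd.DataFrame({
    "target": np.tile(np.arange(n_targets), n_raters),
    "rater": np.repeat(np.arange(n_raters), n_targets),
    "rating": scores.ravel(),
})
icc = pg.intraclass_corr(data=long, targets="target",
                         raters="rater", ratings="rating")
print(icc[["Type", "ICC"]])

# Fleiss' kappa for agreement among the 5 raters:
# aggregate_raters expects a (targets x raters) array of category labels.
table, _ = aggregate_raters(scores.T)
print("Fleiss' kappa:", fleiss_kappa(table))
```

With real data, the simulated `scores` array would simply be replaced by the observed rater-by-item scores; the rest of the pipeline is unchanged.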
RESULTS:  For each individual faculty member, the percentage of correctly scored items on the 12-item checklist ranged between 95.4% and 97.2%.  The intra-class correlation coefficient was -0.09.  For all faculty combined, the mean percentage of correctly scored items on the 12-item checklist was 95.1% (SD 13.9), with the mean percentage of correctly scored items for individual items ranging between 80.6% and 100%.  The kappa value for inter-rater agreement was 0.38.
CONCLUSIONS:  Although each individual faculty member showed high accuracy when using the 12-item checklist to evaluate standardized ear exams, consistency in accuracy between faculty was poor and inter-rater agreement for all faculty correctly scoring items on the checklist was only fair.  Standardized evaluation instruments such as checklists may be effective and easily implemented in real clinical settings for assessing a range of skills; however, their reliability should be established before they are used in curricula.