Council on Medical Student Education in Pediatrics

COMSEP 2012 Indianapolis Meeting

Poster Presentation:

Improving Clinical Evaluation using an Online Competency Based Tool

Adam Weinstein, MD - Dartmouth Medical School; Alison Ricker, BA - Dartmouth Medical School; Matthew Braga, MD - Dartmouth Medical School; Todd Poret, MD - Dartmouth Medical School

Background: Clinical evaluation of students is a necessary but problematic endeavor. More than 95% of clerkships across disciplines use clinical ratings, which contribute 50-70% of a student's grade.1 These ratings are often poorly discriminative and based on recollections of presentations rather than on observed care.1-3 Completing evaluations can also be time-consuming, introducing delays and variation in recall.

Objective: We evaluated whether a competency-based form that rates students by descriptors, rather than a traditional Likert scale, could better discriminate among students' clinical performance and provide more accurate feedback. We also assessed whether distributing the form as an online survey would yield more timely feedback.

Methods: The new evaluation tool (a sample will be presented) was introduced in the 2010-2011 academic year. We assessed the tool's performance for each competency by comparing average ratings to those of the previous year using Student's t-test. We also compared how quickly final grades were submitted each year. Lastly, we surveyed (the survey tool will be presented) a targeted sample of eight attendings who regularly evaluate clerkship students and are familiar with both tools, asking for qualitative feedback comparing them.
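The year-over-year comparison of average competency ratings described above can be sketched as a two-sample t-test. The samples and helper below are purely illustrative (the actual rating data are not in the abstract); a Welch-style statistic is used, which does not assume equal variances:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    va, vb = variance(a), variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb            # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical ratings on the 5-point scale: old Likert-style form vs. new
# descriptor-based form (made-up numbers for illustration only)
old_form = [4.5, 4.6, 4.4, 4.5, 4.7, 4.5, 4.6, 4.4]
new_form = [4.0, 4.1, 3.9, 4.2, 4.0, 3.8, 4.1, 4.0]
t, df = welch_t(old_form, new_form)
```

In practice a library routine such as SciPy's `ttest_ind` would also supply the p-value; the sketch only shows the shape of the comparison applied per competency.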

Results: With the new tool, average student ratings shifted for Medical Knowledge from 4.5/5 to 4.0 (p<0.001); Patient Care, 4.5 to 4.1 (p<0.001); Communication, 4.6 to 4.4 (p<0.001); Professionalism, 4.8 to 4.6 (p<0.001); Personal Learning, 4.7 to 4.3 (p<0.001); and Healthcare System, 4.3 to 4.2 (p=NS). The standard deviation of student ratings increased in four of six competencies. Clerkship final grades were submitted 4-5 weeks after completion, compared to 6-9 weeks with the old tool (p<0.001). Seven of eight preceptors found the new tool more convenient and efficient. Opinions ranged from neutral (50-63%) to favorable (38%) on whether it more accurately reflected student ability and assessed a greater scope of performance. Most found the descriptors easy to use (88%) and appropriately specific (75%); only one preceptor disagreed that the descriptors were effective in rating aspects of student performance. (Additional comments will be presented.)

Discussion: Preceptors liked the new tool, and it supported more timely evaluation of students. It does not eliminate subjectivity or positive skew, but it does significantly shift average ratings, better distinguishing between students. We will continue to refine the tool to further discriminate student performance, limit subjectivity, and promote written comments.