Council on Medical Student Education in Pediatrics


COMSEP 2005 North Carolina Meeting

Poster Presentation:


Construct and Predictive Validity of a Comprehensive Clinical Skills Exam

Authors:

Linda Tewksbury (1), Regina Richter (2), Colleen Gillespie (4), and Adina Kalet (3)

(1) Pediatrics and (2) Medicine, NYU School of Medicine; (3) NYU School of Medicine; (4) NYU Robert F. Wagner Graduate School of Public Service


Background: Medical schools face mounting pressure to ensure the clinical competency of graduating students. Despite increasing reliance on performance-based assessments, data on the validity of such exams remain conflicting.

Objective: To analyze the construct and predictive validity of a comprehensive clinical skills exam (CCSE).

Methods: We developed a 6-station CCSE for fourth-year medical students. Standardized patients used checklists to rate students' communication (CS), history-gathering (HG), and physical examination (PE) skills. Faculty assessed clinical reasoning (CR) by evaluating patient notes that students completed after each encounter. Students scoring in the bottom decile in 2 or more competencies met failure criteria. We assessed: 1) construct validity, using Pearson's correlation coefficient to measure divergent and convergent relationships among exam competencies (CS, HG, PE) and other measures of student competence (shelf exams, clerkship grades, USMLE exams); 2) concurrent validity, by comparing the CCSE performance of students who had completed the relevant core clerkship with that of students who had not; and 3) predictive validity, by examining the CCSE pass/fail status of students who failed USMLE Step 2 CS.
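
As an illustration of the analytic approach described above, the sketch below shows how the failure criterion (bottom decile in 2 or more competencies) and the Pearson correlations could be computed. The data layout, column names, and values are hypothetical and simulated, not the study's actual data.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical layout: one row per student, one column per CCSE competency score.
    # All values below are simulated for illustration only.
    rng = np.random.default_rng(0)
    scores = pd.DataFrame({
        "CS": rng.normal(75, 8, 125),   # communication
        "HG": rng.normal(70, 9, 125),   # history gathering
        "PE": rng.normal(65, 10, 125),  # physical examination
    })

    # Failure criterion as described: bottom decile in 2 or more competencies.
    in_bottom_decile = scores.le(scores.quantile(0.10))
    met_failure_criteria = in_bottom_decile.sum(axis=1) >= 2

    # Convergent/divergent relationships via Pearson's correlation coefficient,
    # e.g. communication (CS) vs. history gathering (HG) across all cases.
    r, p = stats.pearsonr(scores["CS"], scores["HG"])
    print(f"CS vs HG: r = {r:.2f}, p = {p:.3f}")
    print(f"Students meeting failure criteria: {met_failure_criteria.sum()}")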

Results: 125/148 (85%) of students who completed the exam consented to have their data analyzed anonymously. Internal consistency of the checklists, as measured by Cronbach's alpha, was: CS (.91), HG (.80), PE (.60), and CR (<.5). Across all cases, CS was highly correlated with HG (r = .47, p < .001) but not with PE, demonstrating the expected convergent and divergent validity. In measuring construct validity, CCSE competencies (CS, HG, PE) were not consistently associated with students' shelf exam scores, except for CS, which was weakly correlated (range r = .19 to .23, p < .05). Overall, clerkship grades correlated weakly with HG (r = .26, p < .01) and PE (r = .19, p > .05) and more substantially with CS (r = .35, p < .001). CS and PE were not significantly correlated with the USMLE Step 1 and Step 2 knowledge exams, though HG correlated weakly (r = .22, p < .05). Together, these three sets of variables (shelf exams, clerkship grades, USMLE exams) accounted for very little of the variance in CCSE scores (CS R² = .10, HG R² = .10, PE R² = .03, p = ns). In terms of concurrent validity, students who had completed the relevant core clerkship (78% Pediatrics, 68% Neurology, 71% Psychiatry, 74% Ob-Gyn) generally performed better in the respective clerkship-focused station. As for predictive validity, 3 of 9 students meeting CCSE failure criteria failed USMLE Step 2 CS. Only one student who passed the CCSE failed Step 2 CS and, of note, that student received the second-lowest CCSE score in CS.
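
For reference, the internal-consistency (Cronbach's alpha) and variance-explained (R²) figures reported above follow standard formulas. Below is a minimal sketch of how they could be computed, assuming checklist responses and predictor scores are held in NumPy arrays; all data and dimensions are simulated for illustration and do not reproduce the study's results.

    import numpy as np

    def cronbach_alpha(items):
        # Internal consistency of a checklist: rows = students, columns = checklist items.
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    def variance_explained(y, X):
        # R^2 from an ordinary least-squares fit of one CCSE competency score (y)
        # on other measures of competence (columns of X), e.g. shelf exams,
        # clerkship grades, and USMLE scores.
        X1 = np.column_stack([np.ones(len(y)), X])   # add an intercept column
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        residuals = y - X1 @ beta
        return 1.0 - residuals.var() / y.var()

    # Simulated data only: 125 students, a 20-item checklist, 3 external predictors.
    rng = np.random.default_rng(42)
    checklist = rng.integers(0, 2, size=(125, 20)).astype(float)
    competency_score = checklist.mean(axis=1)
    predictors = rng.normal(size=(125, 3))
    print(f"alpha = {cronbach_alpha(checklist):.2f}")
    print(f"R^2 = {variance_explained(competency_score, predictors):.2f}")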

Conclusion: CCSE validity was supported by a number of measures, most notably its prediction of failure on USMLE Step 2 CS. The weak correlations between the CCSE and other measures of student competence may indicate that the CCSE captures elements of clinical competency not otherwise well measured.