Council on Medical Student Education in Pediatrics


Journal Club

White CB, Thomas AM. Students Assigned to Community Practices for Their Pediatric Clerkship Perform as Well or Better on Written Examinations As Students Assigned to Academic Medical Centers. Teaching and Learning in Medicine. 2004;16(3):250-254.

Reviewed by Bruce Morgenstern, Mayo School of Medicine



In another paper by a COMSEP member addressing the impact of students' clinical experience on their performance on written examinations, Chris White of MCG, along with Andria Thomas from the Department of Family Medicine, evaluated the cumulative experience of students over five years. They report that of 830 students, the 173 assigned to Community Practice Sites (CPS) did at least as well as, if not better than, those assigned to Academic Medical Centers (AMC). NBME subject examination scores did not differ between the groups. CPS students did better on an in-house MCQ-type exam, and they also received a statistically higher clinical grade (though I wonder if there is a meaningful difference between 90.3 and 88.9). Students at CPS saw many more patients (167 vs. 71).

Comment: In an era in which the LCME has placed new emphasis on the "numbers and kinds" of patients students see, this paper adds meaningfully to our data set. The student results at MCG are not dissimilar to results seen in Nebraska. They do differ, however, from data reported in surgical clerkships and at least one other pediatrics site.

Before we place too much emphasis on the parts of this study that imply students at CPS did better, we need to look at those outcomes. White and Thomas argue well that the clinical grade is not a meaningful tool, given how subjective clinical grades often are. They do not tell the reader much about their in-house exam, however. Is it reliable? Is it valid? Is there a pro-ambulatory bias to the exam? Perhaps they have sufficient data to analyze their exam and determine whether the differences are real. Despite these weaknesses, it is clear that students at different sites do no worse. This adds to the questions underlying the LCME's move to "numbers and kinds." Is there a dose-response effect in clinical education? Is there some useful and valid measure of "equivalence" between clinical sites in a clerkship? Certainly the MCG students seem to have had disparate experiences (as did those in Nebraska), yet they test out the same. Perhaps it's true that our students are chosen for their ability to succeed despite us. We have an important need to continue investigations along the lines of this paper.

(Bruce raises good questions about the evaluation data we use. I know that many of us have a less-than-accepting response to the LCME requirements. Why is diversity sometimes a good thing, but not acceptable at other times? Since the LCME standard isn't likely to go away, and we have several studies suggesting that diverse exposures do not have a major impact on our measured outcomes, I'd like to pose a different question. What was similar in the various experiences that resulted in similar outcomes? Isn't that what we should quantify and use as our "numbers and types"? Maybe it is because we know that pediatrics can be taught during interactions with a variety of patients and diagnoses that we have created systems allowing students to learn the basics of pediatrics regardless of the type and number of patients they see. Can you think of other groupings that will meet both the needs of the LCME and the experiences of your students? I think these studies may help us look at things from a different perspective.)
