Council on Medical Student Education in Pediatrics


Journal Club

Adapting the Key Features Examination for a Clinical Clerkship. Hatala R and Norman G. Medical Education 2002;36:160-165

Reviewed by Leslie Fall, Dartmouth Medical School

Description: The Key Features Written Examination is a component of the Medical Council of Canada's qualifying examination aimed at assessing clinical decision-making skills. The test was designed to address the content specificity that limits these forms of testing, whereby performance on one clinical problem is a poor predictor of performance on a second problem. To do so, each "key feature" problem consists of a brief clinical stem followed by one or more questions requesting actions from the student. Content specificity is addressed by testing only those elements of a clinical problem, termed the "key features," that are critical to its resolution. The test format and scoring system have been validated, and an example question is included in the article's appendix.

In this article, the authors describe the use of the Key Features Examination (KFE) question type in the Internal Medicine clerkship final examination at McMaster University. A question template ("blueprint") was developed, and a total of 82 questions were written by faculty and residents, based upon the Clerkship Directors in Internal Medicine curriculum. Significant faculty input was required to generate a reproducible list of key features for each clinical problem (range 1-4). A dichotomous scoring system was developed. A set of 15-20 questions was administered to 101 students during a 2-hour test. Students' average score was 73.3% (range 44.0-90.0%). The authors compared students' performance on the KFE to other measures of student competence, including the encounter cards used to evaluate students' clinical performance during the clerkship and their scores on the LMCC Part I examination (equivalent to the USMLE). The authors found a poor correlation with the students' encounter cards (0.20-0.35) and a modest correlation with the LMCC medicine score (0.36-0.54). They conclude that these low correlations may indicate that the KFE assesses a domain of clinical decision-making not addressed by the other evaluation tools.

Discussion: As someone interested in incorporating more clinical reasoning assessment into my clerkship, I was eager to read this study. I was encouraged to see that the Medical Council of Canada has validated the question type and is already using it in standardized testing. I found the example question more content-specific than I had expected and will contact the authors to better understand the question type. I had also hoped that the authors would have compared the students' scores to a more global assessment of their reasoning abilities (e.g. attending or resident ward evaluations). I suspect this correlation might have been better, although I have my doubts about how well we assess this skill on the wards as well. I was a bit discouraged, although not surprised, to see how much effort must go into writing these questions well. It might be interesting to develop and test a series of KFE questions based upon the COMSEP curriculum.

(The annual meeting will offer many opportunities to share insights into teaching and assessing clinical reasoning skills, especially given our invited speaker, George Bordage. Do you have a standard, explicit approach to teaching clinical reasoning that is widely used by your faculty? If so, tell us about it. Steve Miller)
