Section G
Clinical Assessment of Medical Students
Benjamin S. Siegel, M.D.
“Very much more time must be hereafter given to those practical portions of the examinations which afford the only true test of a (student’s) fitness to enter the profession. The day of the theoretical examinations is over.”
Sir William Osler, 1885
Overview
Previous chapters have addressed evaluation of students at the end of the rotation using a variety of instruments such as the oral exam, the written exam, the OSCE, and standardized patients to evaluate clinical competence. Most clerkship programs use one or more of these assessment strategies to arrive at a summative evaluation. Clerkship directors also use clinical assessments by physicians, nurses, and social workers for formative and summative evaluation of medical students as the students assume the role of primary caregiver or member of a health care team.1,2 These clinical experiences take place on the wards, in the outpatient department and emergency room, or in community-based or office practice settings. The clinical assessments provide important data to the clerkship director. They are used for continuous feedback by the clinical preceptor, for mid-rotation feedback/evaluation, and as part of the overall assessment of medical student competency at the end of the rotation. Some clerkship directors place more emphasis on this clinical assessment to arrive at a final evaluation and use the more objective evaluations mentioned above to validate the clinical assessment. This chapter will review the clinical assessment of students from the perspective of both faculty and housestaff and suggest ways in which each group of supervisors may contribute to the process.
Description and Rationale
Effective assessment of clinical skills requires a clear set of objectives and a mutually agreed upon evaluation process (see Section K). In the clinical setting, the student is asked to evaluate patients using a biopsychosocial framework and to present the data to a preceptor: a physician faculty member, a supervising house officer, an outside consultant, or another health professional. The student may work in a number of different settings, such as a primary care clinic or office, the intensive care unit, a general pediatric ward, an emergency room, or a subspecialty consulting office, each with its own set of goals and objectives. The data presented by the student are both oral and written and can include the history, the physical exam, the differential diagnosis or problem list, and the management plan. In addition, the student must be sensitive to the psychosocial, environmental, ethical, and cultural aspects of the patient’s and family’s problems. Finally, the student addresses the pathophysiological basis for the diagnosis and the psychosocial and patient education plans, and seeks out further information about the problem to increase his or her medical knowledge base. Each of these elements, alone or in combination, is assessed by the preceptor depending upon the clinical context and the goals for the educational experience.
Clerkships usually have clinical evaluation forms which list areas of competency required of students and assessed by faculty and housestaff. These forms vary from clerkship to clerkship but usually contain the following elements of skills, knowledge, and attitudes, or clinical competencies (Table I):
Table I. Elements of clinical competency (skills, knowledge, and attitudes) assessed on clerkship evaluation forms
Forms using these content areas vary from specific behavioral descriptions of specific criteria to general categories such as Honors, High Pass, Pass, Low Pass, and Fail, or letter grades A, B, C, D, F. These scoring categories may not always match or agree with each other. Most forms also ask the observer to complete a short narrative describing the student, and some ask for a detailed list of strengths and areas needing improvement for each student. The clerkship director then uses the data in each form and the narrative (if present) to derive a global assessment as a final grade with a specific designation such as Honors, Pass, etc., or a letter grade. The final narrative is usually incorporated into the Dean’s letter.
Literature Overview
There is a moderate amount of literature on the specific clinical evaluation of medical students. Tonesk5 reviewed problems that clinical faculty, clerkship coordinators, and residents faced in evaluating students’ clinical performance. The problems identified in the survey of 1092 clinical faculty and residents at 10 medical schools included inadequate guidelines for handling problem students, failure to act on negative evaluations, lack of information about problems that students bring with them into the clerkships, and faculty members’ unwillingness to record negative information. In a study of the assessment of students’ performance on a surgical rotation, Carline6 noted that residents rated medical students higher than faculty did, and that faculty ratings correlated poorly with resident ratings. In addition, resident ratings were better predictors of the National Board of Medical Examiners surgery subscore than faculty ratings. Residents evaluated students more accurately in the areas where the supervising resident had more interaction with the student (data-collecting skills, knowledge in an area, relationship to patients, professional relationships, and educational attitude), while faculty had better accuracy in the oral examination.
Stillman,7 in a study comparing comments of surgical chief residents and faculty on student performance in surgery, noted that chief residents emphasized “surgical skills” and “techniques” and commented less often on students’ competency in “logic”, “judgment”, and “reasoning”, while faculty and chief residents commented about equally on “appearance”, “enthusiasm”, “diligence”, and “motivation”. Finally, a study8 in an ob-gyn clerkship compared objective summative examinations (oral and comprehensive written exams) and clinical performance, as measured by faculty and resident review of a 16-item rating scale, with the National Board Part II score. In this study there were few differences in student performance among the written exam, the oral exam, and the National Board scores, but there was wide variation in students’ performance on faculty and resident evaluations of oral presentations, clinical performance, and case write-ups. These differences were related to the site of the clerkship and, hence, to the different standards faculty and residents at each site used to evaluate medical students. It is clear that the clinical evaluation of students varies with the observer (faculty vs. housestaff vs. chief residents) and may vary among sites in the same clerkship.
Most studies16,17 show poor or no correlation between academic performance in the first two years of medical school and clinical performance in the third year. One study18 suggested that psychosocial characteristics measured in the second year correlated weakly with clinical performance in the third year. A large number of studies19 demonstrate poor to fair correlation (r ≤ .5) of clinical performance with objective examinations such as oral exams, multiple-choice examinations, and subtests of NBME Part II (moderate correlation: r = .5-.75; good correlation: r ≥ .75). The best correlation between ratings of clinical performance in a third-year clerkship in internal medicine and the NBME subtest was .59.23 However, 38% of students with satisfactory clinical performance ratings had marginal or failing test scores. Thus, there is a “halo” effect: students who appeared motivated and attentive to patient care were usually graded higher in knowledge than their performance on knowledge testing demonstrated.24,25 Studies19,22,26 of inter-rater reliability of assessment of student clinical performance have shown good reliability, but achieving high reliability requires multiple observations and observers. In one study,22 increasing the number of raters per student from 2 to 5 increased reliability. Additionally, achieving a reliability of .8 requires 7 observations to determine an overall clinical rating and 27 observations to assess interpersonal relationships with patients,27 as the worked example below illustrates.
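One way to see why so many observations are needed is the Spearman-Brown prophecy formula. Treating the cited figures as if they were averages of k equally reliable, independent observations is an assumption made here for illustration only; the cited study27 is not restated.

\[
R_k = \frac{k\,r}{1 + (k-1)\,r}
\qquad\Longrightarrow\qquad
r = \frac{R_k}{k - (k-1)R_k}
\]

Here r is the reliability of a single observation and R_k is the reliability of the average of k observations. Setting R_k = .8 gives r = .8/(.2k + .8): roughly r ≈ .36 for the 7 observations needed for an overall clinical rating, and r ≈ .13 for the 27 observations needed to assess interpersonal relationships with patients. Under this assumption, a single encounter carries far less reliable information about interpersonal skills than about overall clinical performance, which is why so many more observations are required.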
Another problem of clinical assessment is faculty and house officer bias. As mentioned above, house officers usually rate students higher than faculty attendings do. Attendings often use personal characteristics of students as a proxy for clinical competence in cognitive areas.28 In a recent study29 in a pediatric clerkship that compared videotaped bedside interactions of medical students with patients (focusing on non-verbal behavior) with final clerkship grades, there was a very high correlation between the two domains. Students whose final grades were high were those who, on videotape, were described as less shy, smiling more, showing less avoidant self-touching, and conveying warmth and interpersonal involvement. Finally, final evaluations may be biased when the final grade is arrived at through group discussion rather than individual evaluations.30 In that study, interns, residents, and attendings all graded students individually; after group discussion, these evaluators gave each student a combined grade. There was no statistically significant correlation between individual and group ratings of students. Thus, the context in which final grades are arrived at may influence the final grades themselves, with each context providing a different view of the student.
Implementation Strategies
Although the studies reviewed above raise questions about the reliability and validity of assessing third-year students’ clinical performance, all clerkships evaluate clinical performance, and clerkship directors rely on housestaff, faculty, and other professionals such as nurses and social workers as participants in the evaluation process. Thus, the process of assessment of clinical performance should be carried out carefully, comprehensively, and responsibly. This section addresses some general issues for clerkship directors seeking to optimize the process of clinical assessment of third-year students.
- Provide Clear Expectations
Goals and objectives of the clerkship, the process of clinical education, and the expectations for the acquisition of knowledge, skills, and attitudes should be presented orally and in written form at the beginning of the rotation. In addition, the process of timely feedback, the final examination process, and the way in which the evaluation of clinical performance is integrated into the final grade should be discussed at orientation and as the end of the clerkship approaches.
- Appreciate the Context of Clinical Experience and the Evaluation Process
Clinical evaluation should be context driven. What a student does in an emergency room, office, or subspecialty clinic may differ markedly from what a student is expected to do on a busy ward working with a health team. Thus, an emergency room evaluation might focus on clinical problems with some discussion of diagnosis and management. On the wards, a comprehensive history and physical exam, a detailed discussion of pathophysiology and differential diagnosis, an approach to management, and some discussion of the psychosocial, cultural, and ethical issues might all be important areas of evaluation. The content of the medical record differs between the clinic and the hospital. The amount and intensity of time a medical student spends in different environments will also vary. These differences should be noted on evaluation forms and considered as part of the evaluation of clinical performance. Likewise, student supervisors differ and bring different perspectives, as the studies cited above show,6,7 and their perspectives should be identified formally. For example, on the wards, the PL-1 house officer must manage the patient and may work with the student to organize and prioritize data, to organize the written initial assessment and daily progress notes, and to prepare the case presentation. The PL-2 and PL-3 residents might address students’ attitudes, interpersonal skills, decision making, differential diagnosis, and clinical problem-solving. Nurses and social workers may focus on students’ interpersonal skills, patient/family education skills, and some of the psychosocial aspects of health care. The attending, who may only hear case presentations, may address overall knowledge, pathophysiology, differential diagnosis, and clinical problem-solving. This is not to say that the content of the evaluation is role-specific, but it does recognize the different interactions and different goals among all of the “supervisory” people with whom the medical student interacts on a busy inpatient unit. Thus, there may be different evaluation forms and different criteria that vary with the clinical experience and the kind of evaluator.
- Identify the Marginal Student
An important goal of evaluation is to identify the student who is having difficulty, with the express purpose of intervening well before the rotation ends so that inadequacies can be improved upon. Thus, regular and timely feedback9 is critical. Feedback should be part of every clinical encounter and can be integrated easily and quickly10 into the discussion of clinical performance. Certainly there should be mid-clerkship feedback for all students.
Feedback is not often stressed as an important teaching strategy for faculty or housestaff, and it can be very difficult to give. Poorly delivered negative feedback often produces shame and humiliation, while overly effusive or overly general positive feedback lacks credibility. In fact, in one observational study12 of house officer teaching on rounds, feedback of any kind was given in only 11% of 158 case encounters.
- Develop an Evaluation Profile31
Since there are multiple elements or criteria for clinical performance, there should be a way to make a composite of all the evaluations given to the clerkship director. It is this composite evaluation of clinical performance from multiple evaluators at multiple sites that is integrated into a final clinical grade by the clerkship director and an evaluation committee. The clerkship director should know which clinical sites or environments lend themselves to assessing specific clinical competencies. Performance goals for each site should therefore be identified by faculty, and scores can be weighted in the total evaluation process, as sketched in the example below. For example, an attending on the wards who has worked with a student for three weeks has a different perspective than an attending who has worked with the student only once or twice in the emergency room. Therefore, the final evaluation cannot simply be a sum of all of the components; it is a much broader and more complex description of the student’s strengths and areas needing improvement.
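A minimal sketch of one way such a weighted composite might be computed, purely for illustration: the 1-5 rating scale, the competency names, and the contact-time weights are hypothetical examples, not values prescribed by this chapter or its references.

```python
# Illustrative sketch only: rating scale, weights, and competency names are hypothetical.
from collections import defaultdict

def composite_profile(evaluations):
    """Combine ratings from multiple evaluators into a weighted mean per competency.

    `evaluations` is a list of dicts, each with:
      - 'weight': relative weight reflecting contact time and site (e.g., a three-week
        ward attending vs. a single emergency-room encounter)
      - 'ratings': {competency: score on a 1-5 scale}
    """
    totals = defaultdict(float)
    weights = defaultdict(float)
    for ev in evaluations:
        for competency, score in ev["ratings"].items():
            totals[competency] += ev["weight"] * score
            weights[competency] += ev["weight"]
    return {c: totals[c] / weights[c] for c in totals}

# Example: a ward attending (longer contact, higher weight) and an ER preceptor.
evaluations = [
    {"weight": 3.0, "ratings": {"history_taking": 4, "clinical_reasoning": 3, "interpersonal": 4}},
    {"weight": 1.0, "ratings": {"clinical_reasoning": 4, "interpersonal": 5}},
]
print(composite_profile(evaluations))
# {'history_taking': 4.0, 'clinical_reasoning': 3.25, 'interpersonal': 4.25}
```

Even with such a composite in hand, the numbers serve only to organize the data; as noted above, the final evaluation remains a broader narrative description of the student’s strengths and areas needing improvement.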
- Provide Opportunities for Self-Evaluation
Students should also have an opportunity to evaluate themselves using the same criteria as faculty. Students’ judgments, especially about their strengths and weaknesses, can often be very useful and can be integrated into faculty and housestaff evaluations. If students take responsibility for the evaluation system by seeking out faculty and housestaff and giving them the clinical evaluation forms, there is a greater likelihood of obtaining a comprehensive view of student performance. It also places students in the position of having a conversation with faculty or housestaff about their performance, thus establishing the feedback that is critical for defining areas needing improvement. This process links the feedback and evaluation systems.
- Ensure Written Documentation
It may seem obvious, but all evaluation data, especially for the problem student, should be documented in writing.
- Recognize the Subjective Component of the Process
Clinical evaluation of students is like good clinical judgment. There are many subjective qualities to the evaluation process and a large number of variables to be assessed. Achieving good clinical judgment takes years of experience; so does assessing the clinical performance of students. Just as clinical judgment can be improved by making explicit an approach to differential diagnosis and pathophysiology and by using the principles of probability and utility, so too the process of clinical assessment of students can be improved by being explicit about goals, objectives, and expectations, and by insisting upon processes of continuous feedback to improve inadequacies. The evaluation criteria should be made explicit to students at the beginning of the rotation. Continuing faculty development should address the process of feedback and evaluation, just as it addresses all the issues of improving the teaching process and the learning environment.13
REFERENCES
- Kroenke K. Attending rounds: Guidelines for teaching on the wards. J. Gen. Int. Med. 7:68. 1992.
- Bellet P. How to improve teaching on the hospital wards. Arch. Pediatr. Adolesc. Med. 148:652. 1994.
- Patel V. Effects of conventional and problem-based medical curricula on problem solving. Acad. Med. 66:380. 1991.
- Herman MW. Validity and importance of low ratings given medical graduates in non-cognitive areas. J. Med. Educ. 58:837. 1983.
- Tonesk X. An AAMC pilot study by 10 medical schools of clinical evaluation of students. J. Med. Educ. 62:707. 1987.
- Carline J. Resident and faculty differences in student evaluations: Implications for changes in a clerkship grading system. Surgery 100:89. 1986.
- Stillman R. Pitfalls in evaluating the surgical student. Surgery 96:92. 1984.
- Irby D. Evaluation of student performance in a multi-site clinical clerkship. Am. J. Obstet. Gynecol. 136:1020. 1980.
- Ende J. Feedback in clinical medical education. JAMA 250:777. 1983.
- Neher J. The five-step “microskills” model of clinical teaching. J. Am. Board Fam. Pract. 5:419. 1992.
- Kroenke K. Attending rounds: Guidelines for teaching on the wards. J. Gen. Int. Med. 7:68. 1992.
- Wilkerson L. The resident as teacher during work rounds. J. Med. Educ. 61:823. 1986.
- Osborn L, Whitman N. Ward attending: The forty day month. Dept. of Family and Preventive Medicine, University of Utah School of Medicine, 1991. See especially Chapter VIII, Assessment and Feedback, and Chapter IX, Evaluation of Students and Residents.
- Osler W. On the growth of a profession. Canad. Med. Surg. J. 14:129. 1885-1886.
- Backward reasoning is the traditional clinical problem-solving skill, the hypothetico-deductive process used to teach medical students. It involves identifying a symptom or a sign and generating hypotheses which might cause that specific symptom. Once the hypotheses (differential diagnosis) are generated, they are tested against the signs and symptoms identified to see if they could explain the symptoms. Forward reasoning, otherwise called “pattern recognition”3 or the identification of “illness scripts” (Schmidt HG, et al. A cognitive perspective on medical expertise: Theory and implications. Acad. Med. 65:611. 1991.), does not involve the above deductive process. Rather, it relies upon a common set of specific characteristics of illness which, taken together, easily define the diagnosis: X + Y + Z symptoms and signs = disease. Experienced physicians use this process about 85% of the time.
- Parenti CM. A process for identifying marginal performers among students in a clerkship. Acad. Med. 68:7. 1993.
- Ginsburg AD. Comparison of in-training evaluation with tests of clinical ability in medical students. J. Med. Educ. 60:29. 1985.
- Hojat M, et al. Students’ psychosocial characteristics as predictors of academic performance in medical school. Acad. Med. 68:8. 1993.
- Keynan A, et al. Reliability of global rating scales in the assessment of clinical competence of medical students. Med. Educ. 21:477. 1987.
- O’Donohue WJ Jr, Wergin JF. Evaluation of medical students during a clinical clerkship in internal medicine. J. Med. Educ. 53:55. 1978.
- Dunnington G, et al. Structured single-observer methods of evaluation for the assessment of ward performance on the surgical clerkship. Am. J. Surg. 159:423. 1990.
- Littlefield JH, et al. A description and four-year analysis of a clinical clerkship evaluation system. J. Med. Educ. 56:334. 1981.
- Marienfield RD, Reid JC. Subjective vs objective evaluation of clinical clerks. NEJM 302:18. 1980.
- Quarrick EA, Sloop EW. A method for identifying the criteria of good performance in a medical clerkship program. J. Med. Educ. 47:188. 1972.
- Whalen JP. Correlation of ratings of students’ overall performance in a medicine clerkship with ratings of knowledge. Acad. Med. 69:4. 1994.
- Maxim BR, Dielman TE. Dimensionality, internal consistency and interrater reliability of clinical performance ratings. Med. Educ. 21:130. 1987.
- Carline JD, et al. Factors affecting the reliability of ratings of students’ clinical skills in a medicine clerkship. J. Gen. Int. Med. 7:506. 1992.
- Durand RP, et al. Teachers’ perceptions concerning the relative values of personal and clinical characteristics and their influence on the assignment of students’ clinical grades. Med. Educ. 22:335. 1988.
- Rosenblum ND, et al. Predicting medical student success in a clinical clerkship by rating students’ nonverbal behavior. Arch. Pediatr. Adolesc. Med. 148:213. 1994.
- Rosenblum ND, Platt O. The effect of context on the rating of students by faculty and housestaff in a clinical clerkship. Acad. Med. 67:485. 1992.
- Printen KJ. Clinical performance evaluation of junior medical students. J. Med. Educ. 48:343. 1973.