Parish SJ et al. Teaching Clinical Skills Through Videotape Review. Teaching and Learning in Medicine, 18(2), 92-98.
Reviewed by Elizabeth Stuart, Stanford University
Parish and colleagues examine how students accept and value feedback during videotape review of standardized patient encounters. Most studies in this area explore the impact of one-on-one sessions between students and faculty, although group feedback sessions have also been shown to be effective. Based on a pilot study showing no difference in students' acceptance of group versus individualized video reviews, the authors undertook a randomized trial comparing the two approaches.
The study subjects were third-year students participating in a required 7-station clinical competency exam. Videotapes were reviewed at varying lengths of time after the exam. Exclusion criteria included poor performance (2 SD below the mean). 128 students were randomized to one of two feedback approaches: (1) 90-minute, one-on-one sessions with a faculty member, or (2) 2-hour group sessions with four students and one faculty member per group. Students pre-selected the segments of their taped encounters that they wished to have reviewed. Faculty facilitators attended a half-day faculty development session in preparation for the reviews.
The authors used an 11-item questionnaire (9 Likert scales; 2 open-ended questions) to assess students' perceptions of the utility of the sessions, their comfort level in receiving feedback, and their opinions of the session format.
71 students participated in group reviews; 57 had individual feedback sessions. The two groups of students did not differ significantly by gender, age, or performance on the clinical competency exam. In general, students' reactions to the feedback sessions were positive. Students in the individualized feedback group were statistically significantly more likely to agree that:
the review was a positive experience (88 vs. 73%);
the length of the session was right (91 vs. 78%);
the amount of feedback on individual performance was appropriate (95 vs. 79%);
the reviews gave them new ideas for improving their performance (83 vs. 66%).
Students in the individualized feedback group were more likely to agree that they felt comfortable doing the reviews in the assigned setting (88 vs. 73%, p < .01), but the two groups agreed equally that "the review was much less stressful than I had expected" and that they would do another videotape review if given the chance. More students who participated in group reviews agreed that they would have preferred to do the reviews "the other way," but the numbers in both groups were fairly small (8 vs. 23%). Students who did individualized reviews were more likely to have selected a video segment where they perceived they had performed poorly.
Limitations of the study include post-randomization dropout (only 128 of 159 eligible students enrolled); a "negative Hawthorne effect"; and the use of an opinion survey to evaluate the efficacy of the feedback sessions. Given that the two review formats differed both in time spent per student (90 vs. 30 minutes) and in the presence of peers, it is difficult to gain a full sense of the advantages and limitations of each approach. A more in-depth qualitative evaluation might have provided helpful clarification.
The results of the study suggest that individualized videotape feedback sessions may be preferable to group reviews. However, even before the study was completed, the investigators' institution had implemented group reviews for all students, based on the finding that both formats were well received by a majority of students.
(Comments: Whether delivered in real time or via tape review, learners love feedback. The more specific and personal the feedback, the better. Individual review of tapes is valuable, but in many institutions it may be too costly. - Bill Raszka)