Steele DJ, Palensky JEJ, Lynch TG, Lacy NL, Duffy SW. Learning preferences, computer attitudes, and student evaluation of computerized instruction. Medical Education 2002;36:225-232. With Commentary: Love it or hate it? Medical students' attitudes to computer-assisted learning. Vogel M, Wood DF. Medical Education 2002;36:214-215
Reviewed by Bruce Morgenstern, Mayo Medical School
Summary: The article by Steele et al. describes a mixed-methods study that used validated survey instruments and qualitative methods to assess the learning preferences and computer attitudes of 150 third-year medical students at the University of Nebraska College of Medicine between 1998 and 1999. As part of their surgical clerkship, the students had to complete a "prototype" computer-aided instruction (CAI) program on angiography. In addition to completing the survey instruments, 31 students underwent qualitative interviews.
The Computer Attitudes Survey revealed that, at the time, students were moderately positive toward computers in general, slightly negative about the role of computers in education, and generally positive toward the CAI angiography program. The students' scales on the Learning Preferences Inventory did not correlate with their attitudes toward computers.
Comments about the CAI program were interesting. Approval fell steadily as the comparisons became more direct: 98% thought the presentation of the content was effective; 89.4% thought it was an effective way to learn; 78.8% preferred CAI to other forms of self-directed learning; 60.9% thought it was more effective than a lecture; 52.3% preferred CAI to reading texts; and 43% preferred CAI to lectures.
The qualitative interviews revealed that students generally praised the content, clarity, and organization of the program. The reported comments were both positive and negative for every theme identified. For example, students were reported both to like and to dislike the flexibility of the CAI program (the dislike being that it was not flexible enough). There was both preference for and distaste toward CAI in comparison with texts and lectures. Students seemed to fear the loss of student-teacher interaction.
The authors offer some helpful suggestions for developing CAI programs. They highlight the need for clear visual representations; simple navigation, with cues to let students know where they have been; self-quizzes with feedback; and some capacity for student notes and student-created summaries. Students also need to be reassured that faculty interaction will not go away.
The accompanying commentary bemoans the problems with CAI studies to date: few studies, methodological flaws, lack of objective outcome criteria, and contamination between the intervention and control groups. Reported CAI initiatives often "fall short of their target user group's requirements." The commentary essentially points out that the glass can be viewed as either half full or half empty.
Morgenstern's opinion: I read the commentary first and then the article, since that was the order in which I had printed off the on-line issue of the journal. In retrospect this was a mistake, as it biased my reading of the actual study. Several issues struck me, the most important being the years in which the students were surveyed: 1998-99. Even setting aside the obvious last-century joke, in computer technology that was nearly a decade ago, before widespread broadband, before much wireless, and with fairly "old" software development tools. It would be interesting to see how attitudes toward computers and CAI change as children who have grown up digital reach medical school. I'd bet a more contemporary group of students would have felt differently.
I was also reminded of the concept of the "tyranny of the or," which a Google search tells me was coined by Collins, J.C., & Porras, J.I. in their 1997 book "Built to last: Successful habits of visionary companies." This concept is the superficially rational view that things must be either A or B, but not both. The ambivalence that the commentary noted in the study seemed largely based upon the students' fear that CAI would replace teacher-student interaction. Why is CAI viewed as a teacher replacement (so that we get A, teachers and students in some setting, or B, CAI, but not both)? I took an on-line course from the Socrates Distance Learning Group about teaching on-line courses last year. It was clear, both from the materials we covered and from the course structure, that the teacher was critical, whether I could see and talk with him or not. My fears are two-fold: 1) that there are many people who look at the money and conclude that CAI replacing teachers is the cheap solution, and 2) that there are course chairs who feel the need to keep adding content to their courses, and who will off-load some of it to CAI while simply adding it to the students' load, filling the freed course time with something else.
I also think that this article touches on several other interesting pedagogical concepts. The authors use the Learning Preferences Inventory (LPI), which yields a result based upon six scales of learning preference: abstract learning, concrete learning, individual learning, interpersonal learning, student-structured learning, and teacher-structured learning. They do not address another important aspect of learner preference, cognitive style, which describes the way in which a student best absorbs new information; some students are visual learners, some are aural, and so on. If you consider the six scales of the LPI together with the factors in cognitive style, it is no wonder that no single teaching tool can ever clearly "win."
Finally, the authors might have been a bit more detailed about the angiography tool. There are differences between teaching and training, and it is not clear which the CAI tool was trying to accomplish. I think the goal was to teach students something about angiography, and while the authors showed that some of this can be done via computer, it seems clear that teaching requires a teacher of some type. Training may not. To use a computer example: I have been trained to use Microsoft Word as a word-processing tool, but I have not learned Word. If someone had "taught" me Word, and helped me assure myself that I had in fact learned it, I would be able to deal with the capabilities of the software when I needed them (things like mail merges, auto-format, etc.). We hope to teach our students in a way that equips them for on-going learning. In some way, real time or asynchronous, that requires a teacher. CAI will work when it is treated as the adjunctive tool that it is, not as a replacement.
(Do you use CAI in your clerkship? If so, has it replaced the need for a "live" preceptor? Has it added more "material" to be "covered" in the same amount of time? Steve Miller)