Section E

Videotaping as an Evaluation Tool in the Clinical Setting

Cynthia S. Samra, M.D.



Since the early 1970s, there has been increasing support in the medical literature for using videotaping to evaluate trainees objectively.1,2,3,4 With the advent of readily accessible audio-visual tools, videotaping has come to the forefront as a viable means of achieving direct observation of students. It provides educators not only with the means to assess students’ technical skills but also with the ability to focus on data collection, the mechanics of appropriate questioning techniques, the students’ mannerisms, and ways to facilitate patient comfort.5 Additionally, a distinct advantage of videotaped sessions is that students are able to review their own performance, which allows a more meaningful critique and promotes personal change.5 On a more global level, by videotaping large groups of students, the data obtained can be used to discover possible omissions in faculty teaching and to suggest ways to standardize the approach to teaching clinical skills.6


In a 1991 survey of pediatric clerkship directors regarding the main elements of a core curriculum, 100% of the directors completing the questionnaire ranked performing a pediatric history and physical examination as the single most important item. Despite the recognized importance of these skills, objective means of verifying and critiquing them have not been uniformly delineated. There is growing concern in the general academic community that medical students may complete four years of school without ever having their interview and physical examination skills verified and critiqued by a qualified evaluator.7 Most written histories and physical examinations are done without direct supervision, so it is difficult to determine whether all relevant information has been obtained; written material also does not allow assessment of the student’s interviewing and interpersonal skills.8 Charged with objectively evaluating students’ clinical skills during the pediatric clerkship, clerkship directors must look toward the methods of evaluation available to them.

The issues involved in using videotaping in the clinical setting for evaluation and feedback are discussed here. The steps involved in implementing a successful videotape program for students in the clinical setting include:

  1. making a time commitment to the process;
  2. deciding what will be graded (content, process, or both);
  3. developing a rating scale that accurately reflects what is to be graded;
  4. creating a well-trained group of observers;
  5. choosing the type of patient encounter to be taped;
  6. determining the most effective format for reviewing the sessions; and
  7. dealing with the technical aspects.


    • Time Commitment

The observer does not have to be present during the interview, which provides the flexibility of reviewing the tape on his/her own time. It is of utmost importance that the feedback session with the student take place as soon as possible after the taping, while the student can still recall the thought process and the feelings elicited during the questioning.1

Perhaps the main disadvantage is that the feedback session with the student requires approximately 1 1/2 to 3 times the interview time. One suggestion is to limit the interview time (for example, to 15-20 minutes).7 Various authors have demonstrated that multiple taped sessions with subsequent review provide better outcomes, meaning the students show concrete changes in their performances.4,9,10 This may prove difficult to schedule.
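The scheduling arithmetic above can be sketched as a back-of-the-envelope planning calculation. The function below is a hypothetical illustration: the 1.5x to 3x multipliers come from the figures cited, but the cohort size and number of taped sessions per student in the example are assumed inputs, not figures from the literature.

```python
# Rough planning arithmetic: feedback takes roughly 1.5x to 3x the
# interview length, so total faculty review time for a cohort can be
# bracketed between a low and a high estimate.

def review_time_budget(interview_min, n_students, sessions_per_student,
                       low_factor=1.5, high_factor=3.0):
    """Return (low, high) estimates, in hours, of total faculty review time."""
    total_interviews = n_students * sessions_per_student
    low = total_interviews * interview_min * low_factor / 60
    high = total_interviews * interview_min * high_factor / 60
    return low, high

# Example (hypothetical): 20 students, each taped twice for 20 minutes.
low, high = review_time_budget(20, 20, 2)
print(f"Estimated faculty review time: {low:.0f} to {high:.0f} hours")
```

Even this modest example brackets the commitment at 20 to 40 faculty hours per rotation, which illustrates why limiting interview length matters.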

    • What will be graded?: Content and Process

“Content”, the what of clinical skills, refers to the individual elements of the history, such as the Chief Complaint, History of Present Illness, and Family History, and to the technical aspects of performing a physical examination. “Process” refers to how well clinical skills are performed and includes such items as tact, sensitivity, ability to converse at the patient’s level, rapport, rate of flow of the interview, recognition of verbal and non-verbal cues, student mannerisms, and a long list of other aspects of interviewing skills. A combination of the two may be used, giving a more global analysis; some students perform well in an interview but may not be able to use the information obtained.

    • Rating Scale

The final rating scale will depend on the focus selected by the clerkship along with who is chosen to observe the interview. The scale may be structured differently depending on whether the observer is a faculty member, a trained paraprofessional, a student’s peer, or a parent. One premise that should be followed in creating a rating scale is that any evaluation form used to measure student performance should be sensitive enough to pick up individual or group differences.10 Developing a scale can be time consuming, and once all of the elements are agreed upon, the scale should undergo validity and reliability testing. Several different rating scales for grading students, residents, and faculty have been published in the literature; they are summarized in Table 1.


TABLE 1. Rating scales from the literature and their main focus



Arizona Clinical Interview Rating Scale8                           Process
Brown University Interpersonal Skill Evaluation Method18           Process
Clinical Assessment Scale for Clinical Pediatric Interviewing19    Process
Behavioral Categories for Interaction Analysis10                   Process
Resident Interpersonal Skill Evaluation Form: Annotated Items20    Process
Physician and Patient Verbal and Non-verbal Interactions
  Evaluated by the National Board of Medical Examiners ISIE-8120   Process
Consultation Assessment Scale21                                    Process/Content
Interview Performance of Internal Medicine Interns11               Process/Content
Medical Interview Skills Checklist1                                Process/Content
Student Performance in the Medical Interview22                     Process/Content
The Northwestern Evaluation and Training System23                  Content/Physical Exam
Abdominal Examination Evaluation Checklist24,25                    Content/Physical Exam

In analyzing the possible utility of the above rating scales in medical student education, the following observations are made. The Arizona Clinical Interview Rating Scale (ACIR) can easily be adapted for assessing medical students’ interview skills alone. It focuses specifically on the organization and time line of the actual history, the use of transitional statements, questioning skills, and the documentation of data and rapport. One particular attribute is that it divides the questionnaire into categories that are individually ranked on a scale of 1-5 and gives written descriptors for each point assignment. The drawback is that it is rather wordy, and the evaluator needs to become well acquainted with the scale before using it.

The Brown University Interpersonal Skill Evaluation Method (BUISE) is a more straightforward, simplified rating scale that looks at establishing rapport, demonstration of clinical skills and procedures, testing for feedback, and providing an appropriate closing. The authors of this scale feel that it provides more flexibility in evaluating the trainee at any point in the interview/exam by allowing for an “infinite variety of good to poor responses, dependent on patient, problem, and setting.”18 This is in contrast to the ACIR, which was felt to evaluate primarily the physician-patient interactions at the end of the encounter.18 Its disadvantages are that it provides minimal evaluation of physical examination skills and that its weighted scoring system, though used, was not provided in the article for review.

The focus of the Clinical Assessment Scale for Clinical Pediatric Interviewing (CASPI) is on the process of the interview. It is divided into three categories: structural, where the order, logic, progression, and use of language geared toward the patient’s level of comprehension are evaluated; functional, where the actual exchange of information between physician and patient is assessed; and affective, where the emotional tone of the interview is ranked. These are all scored on a 1-4 scale (with 3 to 4 indicating competency). This is an uncomplicated scale, yet some definition of each subdivision of the three categories would need to be provided to the evaluator to ensure consistency in grading.

An interesting grading system is the one described by Helfer and Hess.10 It looks at various behavioral categories and the frequency with which they are encountered in a student’s interview session. The scale is composed of eleven items (examples include asking leading or non-leading questions, the degree of feedback given to the patient, and the degree of empathy). For tabulating purposes, each item is grouped into one of the following categories: interpersonal score, exploratory total score, exploratory non-leading score, negative score, and clarifying score. The occurrences of the items in each category are then pooled and scored as a percentage of total interactions. This can be used in conjunction with a scale that grades content to give a global picture of the student. One advantage of this particular system is that it is uncluttered and easy to use.
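The pooling step in the Helfer and Hess tabulation can be sketched in a few lines. The code below is an illustrative mock-up, not the published instrument: the item names and the partial category groupings are assumed for the example, and only three of the five published category scores are shown.

```python
from collections import Counter

# Hypothetical item-to-category grouping in the spirit of Helfer and Hess:
# tally how often each behavioral item occurs, pool items into categories,
# and score each category as a percentage of total interactions.
CATEGORIES = {
    "interpersonal": {"empathy", "feedback_to_patient"},
    "exploratory_non_leading": {"non_leading_question"},
    "negative": {"leading_question", "interruption"},
}

def category_percentages(observed_items):
    """Pool observed item occurrences and return category percentages."""
    counts = Counter(observed_items)       # missing items count as zero
    total = sum(counts.values())
    scores = {}
    for category, items in CATEGORIES.items():
        pooled = sum(counts[i] for i in items)
        scores[category] = 100.0 * pooled / total if total else 0.0
    return scores

# Example tally from one (hypothetical) taped interview.
observed = (["non_leading_question"] * 6
            + ["leading_question"] * 2
            + ["empathy"] * 2)
print(category_percentages(observed))
```

Because every category is normalized by the same total, scores from interviews of different lengths remain comparable, which is what makes the percentage-of-interactions approach attractive for pooling data across students.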

Also included in the above table is the Resident Interpersonal Skill Evaluation (RISE) Form: Annotated Items. In reality, this is a 19-item standardized patient’s evaluation form designed to capture his/her personal feelings about the interaction with the trainee. It is scored on a Likert scale of 1-7, ranging from “very strongly agree” to “very strongly disagree.” This could easily be used even with a regular clinic patient and would be of value to the student, because a physician or other trained rater may not have the same perspective to offer. No descriptors of the categories are provided.

The National Board of Medical Examiners’ ISIE-81 system focuses on different aspects of the physician-patient verbal and non-verbal interactions. It consists of 29 items (19 and 10 items for the evaluator’s ratings of the trainee and patient, respectively). Although designed for physicians, the items apply to any level of training (examples of items: asks narrow questions, gives commands or directions, criticizes patient).20

Hays constructed the Consultation Assessment Scale for the Royal Australian College of General Practitioners.21 Evaluation of communication skills is the central theme of this scale. There are eight categories, each subdivided to address different phases of the consultation. These include introduction, history-taking, examination (assessed only for its appropriateness to the history), diagnoses, management, closing, general comments, overall rating, and final written comments. A list of descriptors was not provided in the article but would need to be defined to assure inter-rater reliability.

One scale looking at process and some degree of content is found in Meuleman and Caranasos’s 1989 article.11 It assesses the trainee’s introduction, medical history (obtaining sufficient detail, including major components, etc.), technique, and style, graded on a Likert scale with 0 = poor and 4 = excellent. The problem with this scale is its lack of definition of what “too much or too little” in any one category signifies, which would make it difficult for multiple observers to use.

The Minnesota Communications Program has developed the Medical Interview Skills Checklist1 (MISC), which emphasizes both the process and content of the interview and helps to measure data-gathering and problem-solving skills. A strong (S) and weak (W) grading scale is used instead of a Likert scale because the primary concern is felt to be identifying the trainee’s skills rather than comparing trainees with one another. The checklist is broken down into four main categories with specific subdivisions relating to each: biologic inquiry, which includes the patient diagnosis (examples: medical history, physical exam, problem list, and tests, to name a few); psychologic inquiry, which stresses the patient’s profile (i.e., such items as demographics, family functioning, lifestyle, and support systems); interview structure (featuring the organization, structure of questioning, closing, etc.); and process (including rapport, listening behavior, demeanor, and supportive behavior). This is a very comprehensive checklist that is fairly easy to follow and provides much information; one drawback is its limited reference to the physical exam.
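The structure of an S/W checklist like the MISC lends itself to a simple tabulation. The sketch below is hypothetical: the category and subdivision names are examples drawn from the description above, not the full published instrument, and the summarizing logic is only one plausible way to tally such a form.

```python
# Illustrative four-category S/W checklist in the spirit of the MISC.
# Subdivision names are examples only, not the published checklist.
MISC_SKETCH = {
    "biologic inquiry": ["medical history", "physical exam", "problem list"],
    "psychologic inquiry": ["demographics", "family functioning", "support systems"],
    "interview structure": ["organization", "questioning", "closing"],
    "process": ["rapport", "listening behavior", "demeanor"],
}

def summarize(ratings):
    """Count strong ('S') and weak ('W') marks per category.

    `ratings` maps (category, item) -> 'S' or 'W'; unknown items are ignored.
    """
    summary = {cat: {"S": 0, "W": 0} for cat in MISC_SKETCH}
    for (category, item), mark in ratings.items():
        if item in MISC_SKETCH.get(category, []):
            summary[category][mark] += 1
    return summary

# Example: a partially completed (hypothetical) checklist for one student.
ratings = {
    ("process", "rapport"): "S",
    ("process", "demeanor"): "W",
    ("interview structure", "closing"): "S",
}
print(summarize(ratings))
```

A per-category tally like this supports the MISC’s stated aim: profiling one trainee’s strong and weak areas rather than ranking trainees against each other.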

Barbee et al. first introduced a rating scale to be used by multiple observers in 1967.22 An abridged edition of the Present Illness History Rating Form looks at twelve detailed items referring to the content of this particular area of the clinical history, scored from 1-7 (1 = superior, 4 = median, 7 = omitted/inferior). Used in conjunction with this is an Interview Technique checklist consisting of four categories ranging from errors in data collection to errors in communication, along with an overall summary rating for the interview. This type of evaluation can readily be used when very specific areas of the history and physical examination are to be targeted for analysis.

The Northwestern Evaluation and Training System (NETS) is based on videotaped sessions with selected standardized patients and a rating scale developed to measure quantitative data obtained from each standardized case scenario. For example, the system originally used a case of a 45-year-old woman with a breast swelling and a 40-year-old man with atypical chest pain. Each scenario was well delineated, and the internal medicine interns were expected to elicit specific information. The rating scale was divided into an interview section consisting of 18 sections covering medical history, general data, family history, psychiatric symptoms, and life experiences as related to illnesses. The physical exam portion contained 12 sections with 119 items, and the final section addressed general interview technique, physical exam technique, and the doctor’s qualities in clinical interaction.23 This particular format is quite extensive and is directed more at quantitative issues (was it asked for? was it performed correctly?) than qualitative ones. While it was specifically designed for residents, it could be adapted quite nicely to the performance expectations of a medical student, and it could also be used to certify the student’s ability to perform a proper physical examination. It is labor intensive.

One additional example found in the literature is a checklist specifically for evaluating limited aspects of the physical exam. This presents a stepwise Abdominal Examination Evaluation Checklist (an example of a partial checklist).24 Here the rating scale consists of performed correctly, performed incorrectly, or not done, which gives a very objective measure of the trainee’s abilities. A similar, shorter rating scale is found in Mir et al. (1987).25 Both serve as examples of how to focus on the correct technique of a physical exam and can be adapted for students.

There are several other rating scales that are not as pertinent to medical students but may be helpful with residents and faculty. Among them is the “Instrument for assessing videotapes of doctors’ performance in consultation.”3 This may prove useful in evaluating residents or faculty, as it covers a wide range of process items as well as content items such as prescribing habits and diagnosis and management; descriptors for the evaluators would be necessary, however. A particularly detailed scale for evaluating pediatric residents at a two-week newborn well child check-up is found in an article by McCormick et al. (1993).26

  • Trained Observers

Having decided upon a rating scale, the next step, and probably the most difficult, is selecting a group of observers. It has been the experience of multiple authors that inter-rater reliability is generally poor, regardless of who does the observing, unless the observers have been through training sessions designed specifically to teach them how to grade the students.10,11,12,13 It has been suggested that the observers be trained together and that they be provided with a performance definition for each item on the rating scale. In addition, they must learn to evaluate the same characteristics in the same fashion and to record which performances actually occurred during a consultation rather than to make judgments regarding specific performances.10

Problems are inherent in choosing who is to observe. Logically, one would assume that medical faculty are the best qualified. However, it has been demonstrated that appropriately trained lay personnel can actually be more accurate and reliable in their assessments.8,13,14 Recommendations for selecting faculty members as observers include screening the faculty to see who has the best observational skills, training them to evaluate, and using these same faculty on a regular basis in order to maintain proficiency.14

  • Patient Encounter

Whether to use real versus standardized patients in the interview sessions depends partly on the clerkship’s financial resources, but may also be influenced by the desire to see how a student adapts when faced with a real patient; arguments can be found supporting either choice.8,10,11 If using real patients, the outpatient setting is ideal in this author’s experience. A room with a permanent camera can be designated for videotaping, and certain hours of the schedule can be blocked off for its use. It is relatively simple to choose patients based on the age, complexity, and type of visit (e.g., well child versus sick child visit) that one feels is appropriate for a student interview. Written permission must be obtained from the parents before taping.

Using the Emergency Room allows brief, focused encounters, but may require a camera assistant with a portable camera unless one room can be equipped with a permanent camera. Using the wards is a little more complex; however, a camera with a wide lens set on a tripod may be adequate, and someone is needed to set up and run the equipment. Even videotaping in a private practice has been described in Australia, with the costs of lost income factored in.7

  • Review Session

Several different approaches can be used to review the taped session. Interactional review of videotaped sessions with the students is quite successful in effecting change; for the clerkship director, its main disadvantage is that there is no final, concrete summative evaluation. The student becomes an integral part of the discussion and is encouraged to comment on his/her own performance so that the session does not become a didactic one. The session should be supportive, positive, and non-judgmental. Many different areas may be pursued during the review, including, for example, the student’s thought process in the line of questioning, differential diagnosis, pathophysiology, reactions to patients’ attitudes, opportunities to improve questioning technique, the student’s own behavior or physical demeanor throughout the interview, and physical exam skills.1,15 The outcomes of this type of feedback include increased self-confidence in interviewing skills, improvement in the ability to analyze and assess the quality of a clinic visit, and improvement in the student’s communication skills.7,16

A second means of evaluating trainees is a critical review of the videotaped session using the clerkship’s chosen rating scale along with direct feedback. Giving the student the evaluative criteria ahead of time allows the student to address those content and process areas chosen by faculty prior to the video sessions and reinforces learning. This provides an objective grading system while still allowing the students to benefit from the interactional review mentioned above, and it is one of the best methods available. With this format, more than one student can be involved in the process.16 The trained observer may find that having one student tape another’s interview session and then switching roles allows both students to take part in the same review process, with each student encouraged to participate actively. In this author’s experience, this works quite well; the students are not usually intimidated by each other.

Another method used in the literature7 is to review two to three different short (15-minute) videotaped encounters by a particular student in one day. The session should be interactional, and a rating scale may be used. This gives an idea of how the student performs in different settings.

An additional suggestion is to have a trained faculty member observe the medical interview by video as it takes place. Using a tape recorder, the attending makes a running commentary and completes an evaluation form. The student is given the dictated commentary, the videotape, and the completed evaluation to review on his own. The attending then reviews the written history and physical examination and questions the student about data that was not obtained initially (the student can go back to the patient and obtain the missing data). The attending is thus able to observe first hand the student’s ability to synthesize information, arrive at a differential diagnosis, formulate a plan, and record the information.17 The drawback is that there is no interactional feedback on the videotape itself. The least successful method at effecting change is student review of the videotape and completion of the evaluation form without any critique from the observer.16 One final technique is to show an instructional videotape demonstrating a particular skill, have the student practice the skill, and then videotape the student performing it; a rating scale and trained observer may be used to evaluate the process as already discussed.

  • Technical Aspects and Costs

Selecting the type of camera equipment will depend upon the funding available and where the videotape sessions are to occur; costs can range from a couple of thousand to several thousand dollars. Consulting biomedical technicians may help determine what is best for a given set-up. Care must be taken to purchase equipment that provides a clear picture and sound from anywhere in the area of the session. Zoom capability is necessary to review the details of the physical examination; without it, videotaping is of little practical use. If picking up sound is a problem, there are several solutions: a lapel microphone can be worn by the student; a microphone can be placed on the interviewing table; or, if there is one designated interview room, a permanent microphone can be set up after the best location is determined. A single PZM (pressure zone microphone) can pick up sound in a big room, can take the place of multiple lapel microphones, and is probably the best single-microphone solution available. Extra lighting sources may be necessary if the setting is too dark.

The ideal setting would be one in which the camera is mounted unobtrusively and operated completely outside the examination room, so that the student, parents, and patients are oblivious to its working. Since this may not be feasible in many institutions, it is of value to note that even visible cameras are forgotten within minutes of beginning the interview,9 although bored children may dance around and show off.

Who operates the equipment? A technician specially trained to record the sessions may be available at some institutions. Another solution is to teach the students how to record each other. The quality of the tapes may not be as good as a professional’s, but after an initial period of trial and error on the clerkship director’s part, the students can be given “pearls” for making a very acceptable video.

“Putting it together”- Personal Experiences

Having laid the foundation for developing a videotaping program, I will share some of my experiences. The camera equipment had been donated from a memorial fund, and the room was readied for use before a plan was even envisioned. The camera was fixed and mounted in the corner of one examination room, permitting visualization of any portion of the room. It had a wide lens with zoom, could be moved laterally or up and down, and was controlled from the residents’ work area outside, where the interview is visualized on a T.V. monitor. A pressure zone microphone was helpful, but it was very sensitive to a child crying or playing loudly with toys and did drown out the voices.

Having reviewed some of the literature, I decided to begin a period of trial and error. Students were videotaped in the outpatient clinic during routine office visits, with no limitation as to the nature of the visit. However, this changed quickly because of the length of time required to tape and review new patient encounters and complex cases; taping sessions were therefore limited to well child care and established sick/chronic illness visits. Unfortunately, we have not had total success even with these limitations. We have only certain mornings allotted to the taping sessions, and since we are using actual patients, we are limited by the problems presented by the patients on that particular schedule. It has been difficult to contact parents to request permission beforehand because many of the phone numbers and addresses provided to the clinic are incorrect. The nursing staff has been helping to obtain consent at the time they check the patients in, and a training session for the nurses may help increase the number of agreeable parents. Needless to say, there are days when the students cannot get a patient to videotape. I feel it is the student’s loss, but at the current time this is not a part of their final grade. We are still trying to perfect the system; some students are actually disappointed, others rejoice.

Small details, like having to remind the nursing staff that we need patients on a particular day, can be frustrating. Also, the students are taught to videotape each other and must be shown how to run the equipment at each session. It has been necessary to write out specific instructions for the students about operating the equipment, because they would forget to push “record” or would rewind the tape and record over the previous student’s interview. All of the sound and microphone controls are labeled and left in a preset position, and students are instructed not to change them. In addition, the sessions must be scheduled in such a way that the reviewer will be available within about 48 hours to hold the feedback session with the student. Because of our imperfect system, not all students who are recorded get feedback.

Initially, only the two clerkship directors reviewed the tapes with the students, in order to get the students’ reactions and to determine how to optimize the video sessions as an educational experience. The goal was to verify that the students could actually perform an adequate pediatric history and physical exam and to give them appropriate feedback. Once the data collecting process was developed, another goal was to approach the physical diagnosis class directors with our findings in order to improve student preparation in that class (this goal has not yet been attained). We had mutually agreed upon objectives and used the ACIR form as a guideline; it did not fully meet our needs.

After further research and from our experiences, I developed rating scales based on the BUISE and the MISC that are more relevant to our pediatric clerkship (see Appendices A-C). Since different types of visits require different approaches, the rating forms are individualized for well child, sick, and chronic illness visits. These forms offer a more detailed approach to the content of the history and physical examination as well as the process, providing a more global assessment of the student. Students are not assigned a numerical grade but are graded as being strong or weak in a particular area. Thus far their performance is not factored into their final grade, but comments are made in the narrative of their final grade sheet as feedback. Descriptors for each category have not yet been written, however, due to the problem of getting volunteers to review the videotapes. We are restricted to using the same 2-3 people (including the clerkship directors) to review the tapes, and we are mutually aware of the goals and objectives. As we convince more faculty of the value of this project, official training sessions will have to be designed and implemented.

The length of the review session depends upon the length of the history and physical session (usually 20-30 minutes). The review lasts anywhere from 30 minutes to 1 1/2 hours per student, with an average of about an hour. One drawback is that students have never been videotaped prior to our clerkship and are not taped again until the first three months of their fourth year. The student is thus unfamiliar with the process of interpersonal recall,27,28 which requires more introductory time from the attending than was originally anticipated; overcoming the students’ fears is paramount. The two students who videotaped each other are reviewed together, and both are encouraged to stop the tape and comment on the performance. This is done in a very positive, non-threatening way. Both positive and negative aspects of the interview and exam are critiqued, along with suggestions on how to improve; the students are encouraged to make their own suggestions, and their thought processes are also inquired about.

When using real patients one can never predict how they are going to behave, and one can obtain incredible insight into the student’s ability to adapt to stressful or less-than-ideal situations. For example, one student had to witness a child with a breath-holding spell with subsequent seizure activity, and it was amazing to see how well he handled the situation. On the other hand, some students fall apart and cannot complete an exam on a crying child. Nonetheless, the students learn from the review sessions. I have not yet had one student who entered with apprehension leave without feeling that, despite the discomfort, the session was valuable. They find great value in observing themselves and getting feedback about how others perceive them, and several have commented that they wish the process would continue throughout all of the clerkships. One new goal for our clerkship is to develop a concrete written evaluation of the process by the students.

As more tapes are reviewed and different aspects of the clinical encounter are explored, our clerkship will be looking at ways to improve student performance. For example, our observations will be shared with the Physical Diagnosis course director to identify problems and areas where the students need more teaching. We are currently making several tapes of pediatric faculty who have volunteered to perform an interview and physical exam for a well infant and child, a sick visit, and a new patient visit. These tapes will be made available to the students to review at their leisure; different techniques of examination, restraining a child, and diverting a child’s attention will be stressed, along with the format, interaction, and flow of the interview.

In conclusion, videotaping has been demonstrated to provide a direct means of assessing a student’s history and physical examination skills without requiring a faculty member to be present for the interview. It also provides the student with the opportunity to visualize himself/herself and, hopefully, encourages positive change. Setting up videotaping sessions requires much forethought, trial and error, the correct equipment and space, funding, and appropriate rating scales, but most of all the faculty’s dedication of time to making it work. The current literature provides the experiences of others, but adapting a suitable program requires knowing one’s own needs.


  1. Cassata D, Conroe R, et al. A program for enhancing medical interviewing using videotape feedback in the family practice residency. J. Fam. Prac. 4:673-674. 1977.
  2. Cassie JM, Collins GF, et al. The use of videotapes to improve clinical teaching. J. Med. Educ. 52:353-354. 1977.
  3. Cox J, Mulholland H, et al. An instrument for assessment of videotapes of general practitioners’ performance. Brit. Med. J. 306:1043-1046. 1993.
  4. Freer CB. Videotape recording in the assessment of the history-taking skills of medical students. Med. Educ. 12:360-363. 1978.
  5. Davis J, Dans P. The effect on instructor-student interaction of video replay to teach history-taking skills. J. Med. Educ. 56:864-866. 1981.
  6. Scheidt P, Lazoritz S, et al. Evaluation of system providing feedback to students on videotaped patient encounters. J. Med. Educ. 61:585-586. 1986.
  7. Pritchard DA. Students, patients and videotapes. Med. J. Austral. 155. September 16, 1991.
  8. Stillman P, Sabers D, et al. The use of paraprofessionals to teach interviewing skills. Pediatrics 57:769-774. 1976.
  9. Menahem S. Interviewing and examination skills in paediatric medicine: videotape analysis of student and consultant performance. J. Royal Soc. Med. 80:138-139. 1987.
  10. Helfer R, Hess J. An experimental model for making objective measurements of interviewing skills. J. Clin. Psych. 26:327-331.
  11. Meuleman J, Caranasos GJ. Evaluating the interview performance of internal medicine interns. Acad. Med. 64:277-279. 1989.
  12. Noel G, Herbers J, et al. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann. Int. Med. 117:757-765. 1992.
  13. Mumford E, Schlesinger H, et al. Ratings of videotaped simulated patient interviews and four other methods of evaluating a psychiatry clerkship. Amer. J. Psych. 144:316-322. 1987.
  14. Elliot D, Hickam D. Evaluation of physical examination skills. JAMA 258:3405-3408. 1987.
  15. McAvoy BR. Teaching clinical skills to medical students: the use of simulated patients and videotaping in general practice. Med. Educ. 22:193-199. 1988.
  16. Del Mar C, Isaacs G. Teaching consultation skills by videotaping interviews: A study of student opinion. Med. Teacher. 14:53-58. 1992.
  17. Stone H, Angevine M, et al. A model for evaluating the history taking and physical examination skills of medical students. Med. Teacher. 11:75-80. 1989.
  18. Burchard K, Rowland-Morin P. A new method of assessing the interpersonal skills of surgeons. Acad. Med. 65:274-276. 1990.
  19. Dickinson ML, Huels M, et al. Pediatric house staff communication skills: Assessment and intervention. J. Med. Educ. 58:659-660. 1983.
  20. Woolliscroft J, Calhoun J, et al. House officer interviewing techniques: Impact on data elicitation and patient perceptions. J. Gen. Int. Med. 4:108-114. 1989.
  21. Hays R. Assessment of general practice consultations: Content validity of a rating scale. Med. Educ. 24:110-116. 1990.
  22. Barbee R, Feldman S, et al. The quantitative evaluation of student performance in the medical interview. J. Med. Educ. 42:238-243. 1967.
  23. Edelstein D, Ruder H. Assessment of clinical skills using videotapes of the complete medical interview and physical examination. Med. Teacher. 12:155. 1990.
  24. Calhoun JG, Woolliscroft JO, et al. Using videotape to evaluate medical students’ physical examination skills. Med. Teacher. 8:367-368. 1986.
  25. Mir M, Marshall R, et al. Comparison between videotape and personal teaching as methods of communicating clinical skills to medical students. Brit. Med. J. 289:31-32. 1984.
  26. McCormick D, Rassin G, et al. Use of videotaping to evaluate pediatric resident performance of health supervision examinations of infants. Ped. J. 92:116-120. 1993.
  27. Kagan N. The physician as therapeutic agent: Innovations in training. Emotions in health and illness: Applications to clinical practice. 14:209-226. 1984.
  28. Kagan N, Schauble P. Affect simulation in interpersonal process recall. J. Counsel. Psych. 16:309-313. 1969.
Appendix A
Appendix B
Appendix C