May 2023

Hello COMSEP!

For those of you who attended the annual meeting last month, this is the moment where you might start to lose some of that momentum you gathered.  (And for those of you who couldn’t make it, we missed you!)

What a great time to keep that momentum by trying something new in your clerkship, rotation, or other pediatric educational setting!  Maybe you’ll get some ideas by reading the reviews attached and below.

It’s also time for our annual request/grovel for reviewers.  Writing a COMSEP Journal Club is fun, free, and easy!  It also looks great on your curriculum vitae.  We encourage collaboration with learners including fellows, residents and medical students.  To help you along, we’ve attached some instructions and a template.  Please reach out if you are interested, or if you have questions.

Enjoy,
Amit, Karen and Jon

Do What I Say and Then Do, Not Just What I Do.

Anderson M, Beltran C, Harnik V, et al. A multisite randomized trial of implicit versus explicit modeling in clinical teaching. Medical Teacher, 45(3), 299-306. https://dx.doi.org/10.1080/0142159X.2022.2133691

Reviewed by: Tracy Omoegbele and Caroline Roth

What was the study question?
Is explicit modeling (demonstration with narration) more effective than implicit modeling (demonstration without narration) in knowledge transfer of communication skills to students related to challenging patient conversations?

How was it done?
Fourth-year medical students recruited from six teaching institutions participated in a simulated patient encounter involving a frustrated patient. In the implicit version of the simulation, faculty members were instructed to model three specific communication behaviors: sitting down, identifying and reflecting emotions, and summarizing patient concerns. In the explicit modeling arm, faculty members performed the same behaviors and were also scripted to narrate what they were doing and explain why they were taking these steps. Data was collected through pre- and post-assessment surveys distributed online. Students were blinded to the research purpose of the simulation and to group assignment.

What were the results?
The study found that the explicit modeling group was more likely than the implicit modeling group to note body position and summarizing patient understanding as key strategies for communication with patients. Both groups reported reflecting patient emotions as a key strategy at rates which were not statistically different. The majority of students from both groups agreed or strongly agreed with statements regarding the usefulness of explicit modeling versus implicit modeling (86.4%), including specifically in reference to learning communication skills (84.1%) and procedural skills (95.5%). Participation also impacted faculty, with 70.6% noting a change in how they teach.

How can this be applied to my work in education?
The study demonstrated that explicit modeling was more effective than implicit modeling in a simulated environment for transfer of knowledge from faculty to medical students. In a busy clinical learning environment, faculty may find that explicit modeling is a great tool for providing medical education throughout the course of routine clinical practice.

Editor’s Note: Without specific direction, students may observe and learn things that are surprising and unintended. One of my colleagues tells a story in which students observed him in a clinical encounter and ‘learned’ that the proper way to wear a stethoscope was slung around the shoulders rather than hanging from the neck. 🙂(JG)


See 13, Do 129, Teach One.

Ryan MS, Khamishon R, Richards A, Perera R, Garber A, Santen SA.
A Question of Scale? Generalizability of the Ottawa and Chen Scales to Render Entrustment
Decisions for the Core EPAs in the Workplace. Academic Medicine, 2022;97(4):552-61.
doi:10.1097/ACM.0000000000004189

Reviewed by Kirstin Nackers, MD

What was the study question?
How do two scales for assessing entrustable professional activities (EPAs) – the Ottawa and Chen scales – compare in reliability and validity in a large cohort of clerkship students, and what is the impact of frequent assessors?

How was it done?
Workplace-based assessments for EPAs using both the Ottawa and Chen scales were analyzed from the eight core clerkships during the 2019-2020 academic year at one institution. Descriptive statistics were calculated for each EPA item. Generalizability (G) studies were performed to estimate the amount of variance in assessments attributable to the student, the rater, and other variables. Then, decision (D) studies estimated the number of assessments needed to achieve the desired level of reliability. Further exploration of challenges was done through root cause analysis.

What were the results?
G studies revealed that variance in the assessments attributable to the student ranged from 0.8-6.5% on the Ottawa scale and 1.8-7.1% on the Chen scale. Variance was primarily attributable to the rater (Ottawa 47.1%-61.3%; Chen 42.8%-55.2%), with the remainder due to other unexplained factors. The Chen scale’s slightly better performance in variance attributable to the learner was felt to be due to its prospective nature (what level of supervision would be needed in the future), as compared with the retrospective Ottawa scale (what level of supervision was provided). The D studies looking only at frequent assessors estimated that only two of the EPAs could be reliably assessed on the Chen scale with 5-13 total assessments; the others required 25-129.

How can this be applied to my work in education?
This study demonstrates that the ratings assigned to students had more to do with who rated them and other factors than with the students themselves. Further, a benefit was noted from using experienced raters with a higher volume of observations, though even with frequent assessors, many observations were needed for most items to achieve reliability. The authors suggest that the real challenge likely relates more to implementation than to the scales themselves.

Editor’s Comments: In addition to the findings noted above, the authors describe the root cause of the poor generalizability and reliability of these assessments as the educational program being NOT truly competency-based. Their five “whys” in the root cause analysis rang true to my own experiences with EPAs in UME. We have much work to do to create truly competency-based programs in the UME setting. (KFo)


Why Athletes Excel in Medical School

Strowd LC, Kelly K, Peters TR, Jackson JM. Student, Faculty, and Coach Perspectives on Why
Athletes Excel in Medical School: A Qualitative Analysis. Teach Learn Med. 2022;34(1):43-59.
doi:10.1080/10401334.2021.1921584
Reviewed by Tai Kyung Hairston

What was the study question?
What are medical student, physician faculty, and college athletic coach perspectives on why students with collegiate athletic experience are more successful in medical school and on medical licensing exams?

How was it done?
In 2019, the study team conducted interviews over six months with medical students with at least one year of college varsity-level athletic experience, their respective medical school faculty, and college athletic coaches from Wake Forest University. Demographic data such as sex, current year in medical school training, type of college sport, and NCAA Division were collected. The interviews were semi-structured, coded by three investigators, and analyzed for emergent themes.

What were the results?
Fifteen student athletes, five physician faculty, and three college coaches were interviewed. The student athletes were second, third, and fourth years. They represented all three NCAA Divisions, and most identified as female. The physicians were all male, teaching faculty from family medicine, and were either collegiate team physicians or had prior collegiate sports experience. Common themes in all interviews included goal-oriented behaviors, value of hard work, willingness to make sacrifices, desire to exceed preconceived performance limits, willingness to ask for help, comfort with receiving constructive feedback, and value of teamwork, which mirrored aspects of strong performance in medical school. Students, faculty, and coaches all agreed that time management skills were critical for success given the constant balance between academic and sports practice commitments. Students commented that participation in collegiate sports helped develop communication skills, strategies for resiliency, and prioritization of personal wellness that carried over well into medical school.

How can this be applied to my work in education?
This study adds to the body of literature that recognizes medical students as adult learners who build upon and schematize knowledge from previous experiences. It encourages faculty to support the transfer of previous life skills to foster self-directed, lifelong learning.

Editor’s Note: This was a follow-up to a previous study, which found that students with at least one year of collegiate sports experience outperformed their peers who did not play sports in medical school classes. This study adds that success in medical school may lie in the skills and habits that translate from other activities people pursue. (AKP)