February 2022

Hello COMSEP!

Today is National Cherry Pie Day.  Not sure why they would designate a late-winter day with no fresh cherries to be found for Cherry Pie Day, but who are we to argue?

Enjoy,

Amit, Karen and Jon


Evidence for teaching evidence-based medicine

Effectiveness of Modalities to Teach Evidence Based Medicine to Pediatric Clerkship Students: A Randomized Controlled Trial. Hadvani T, Dutta A, Choy E, et al. Academic Pediatrics; 2021 Mar; 21(2):375-383. https://dx.doi.org/10.1016/j.acap.2020.09.012

Reviewed by: Caroline Roth

What was the study question?

Is a self-paced, multimedia module (SPM) as effective as a traditional didactic session (TDS) in teaching skills required to apply Evidence Based Medicine (EBM) to a real patient encounter during the pediatric clerkship?

How was it done?

Medical students on their core pediatric clerkship rotation were cluster-randomized either to attend a didactic session (TDS) or to complete an asynchronous module (SPM) on EBM with identical content. The curriculum covered PICO questions, literature searches, study design, and critical appraisal concepts and was delivered during a 2-week Pediatric Hospital Medicine (PHM) block. Students were then required to complete a Critically Appraised Topic (CAT) form associated with a clinical encounter. The primary outcome was the comparison of students’ CAT scores between the two groups. Secondary outcomes included analysis of pre- and post-curriculum responses on an Evidence-Based Practice Knowledge, Attitudes, Access, and Confidence Evaluation (KACE) for the SPM and TDS groups. The KACE was administered prior to the curriculum, at the end of the PHM block, and at 3 months following the intervention.

What were the results?

The mean scores on the CAT assignments did not differ significantly between the TDS and SPM groups. Likewise, there was no significant difference between the two groups in knowledge, attitudes, confidence, or access to evidence. Both groups showed improvements in all areas of the KACE at the end of the PHM block, but these gains were sustained at 3 months post-intervention only for knowledge and confidence.

What are the implications?

At a time when many institutions have pivoted to asynchronous virtual learning because of various barriers, including the COVID pandemic, it is reassuring to have a randomized controlled trial showing that an SPM is as effective as a traditional didactic session in direct comparison. However, future studies should examine whether these findings extend to curricula covering other subject matter relevant to the pediatric clerkship.

Editor’s Comments: As with many educational interventions that have been studied, there were expected improvements in knowledge, attitudes and confidence, regardless of mode of delivery. The bigger challenge, however, is how to sustain those improvements over time. Isolated curricular innovations are unlikely to have lasting effects unless there is repeat exposure throughout other clerkships. (KFo)


It must be true–I watched it on YouTube…

The Content Quality of YouTube Videos for Professional Medical Education: A Systematic Review. Helming A, Adler D, Keltner C, Igelman A, Woodworth G. Acad Med. 2021;96(10):1484-1493. https://dx.doi.org/10.1097/ACM.0000000000004121

Reviewed by: Elizabeth Prabhu

What was the study question?

What is the content quality of YouTube videos intended for professional medical education and what are the video characteristics associated with quality?

How was it done?

A systematic review was conducted of all studies published from 2005-2019 that evaluated the quality of YouTube videos intended for professional medical education. Inclusion criteria were studies that specifically evaluated video content quality for medical students, residents, fellows, or practicing physicians. Studies were classified by the type of quality rating tool (QRT) used: externally validated, internally validated, or limited global. The externally validated QRTs were the JAMA score, the DISCERN score, and the Global Quality score, all of which assess authorship, attribution, disclosure, and the currency of the videos’ content. Internally validated QRTs were developed by the study authors using published guidelines or expert opinion. Limited global QRTs used a Likert or binary scale to assess global domains of video quality.

A total of 31 studies met inclusion criteria. Data were extracted to identify study information, video characteristics and engagement metrics (duration of video, number of likes, number of days posted, number of views). Videos were also classified based on author type.

What were the results?

Of the 31 studies reviewed, only 3 used an externally validated QRT. Twenty studies used internally validated QRTs and 13 used limited global QRTs. Among those using an internally validated QRT, the average score was 44%. Videos with academic physician authors had higher internally validated QRT mean scores (46%) than those with non-academic-physician authors (26%; p<.05). Video characteristics and engagement metrics were found to be unreliable measures of a video’s quality.

What are the implications?

Many medical students and doctors in training use the internet for self-guided study and instruction. There are many avenues for this type of professional medical education, and there is even a term for the movement: FOAM (Free Open Access Medical Education), which includes blogs, podcasts, tweets, online videos, Facebook groups, and more. YouTube videos fit into this category. I think many medical students use YouTube videos in their self-study, and many clinical educators use such videos to help teach and train students. As these authors found, there is huge variation in the quality of YouTube videos, many of which had low quality rating scores. The authors concluded that the low quality of YouTube videos used for medical education was due to a lack of unifying grading criteria for video content, poor searching algorithms, and insufficient peer review or controls.

Editor’s Note: With the advent of Free Open Access Medical Education (FOAM) through various media, we as educators have to be even more cognizant of the materials our students use. Learners may not know the quality of what they are watching. In my practice, before recommending a YouTube video for a class, I watch it for content. (AP)


Training students as interpreters

Overcoming the language barrier: a novel curriculum for training medical students as volunteer medical interpreters. Carlson ES, Barriga TM, Lobo D, et al. BMC Med Educ. 2022;22:27. https://doi.org/10.1186/s12909-021-03081-0

Reviewed by: Srividya Naganathan

What was the study question?

Does a training program for Spanish-fluent medical students improve their comfort level and skills in functioning as in-person interpreters in health care encounters?

How was the study done?

A three-step Medical Student Interpreter Training Program (MSITP) was created to engage medical students as interpreters for health care encounters with patients with limited English proficiency (LEP). The steps consisted of shadowing a licensed hospital interpreter for one hour, taking the Language Services Qualified Bilingual Staff Assessment via telephone, and attending a three-hour interpreter instruction session conducted by the department of interpreter services. First- and second-year students who self-identified as fluent in Spanish were recruited for the study. Students were administered a pre/post Interpreter Training Exercise Test to assess their understanding of the Interpreter Code of Ethics and appropriate techniques for interpreting for LEP patients.

What were the results?

Seventeen students successfully completed the MSITP and participated in the study. After training, 47% of students felt somewhat comfortable and 53% very comfortable with the Interpreter Code of Ethics. With respect to the concepts of intervening and transparency, 41% of students felt somewhat comfortable and 59% very comfortable. The training also increased the percentage of students who rated themselves very comfortable with their interpreter skills, from 41% at pre-test to 53% at post-test.

What are the implications?

This is a novel program to engage Spanish-fluent medical students as in-person interpreters in health care settings for Spanish-speaking patients with LEP. Ultimately, this can lead to better and safer care for patients with LEP and can be expanded to other languages.

Editor’s Note: Bilingual medical students are often put in the position of interpreting for patients and families in clinical settings. It certainly makes sense to ensure that they are trained to follow professional standards when doing so. The training appears to be brief, which I’m sure is appealing to busy students. I would love to know how often they are able to practice their skills after the training, and whether there is a perceived difference in the quality of interpretation. (JG)


Click here to view a PDF of the February Journal Club