July 2025

Hello COMSEP!

Searching “What happened today in medicine” and “July 28” turned up this interesting NEJM issue from 65 years ago, focusing on the development of a new live attenuated measles vaccine.

How far we’ve come, and how far we have left to go.   

Enjoy this summer edition of the COMSEP JC, and be the excellent educators you are.

Amit, Karen and Jon


This review gets a high pass...

Reviewed by Gary L. Beck Dallaghan

Iyer AA, Hayes C, Chang BS, Farrell SE, Fladger A, Hauer KE, Schwartzstein RM. Should Medical School Grading Be Tiered or Pass/Fail? A Scoping Review of Conceptual Arguments and Empirical Data. Acad Med 2025 May 12. doi: 10.1097/ACM.0000000000006085.

What was the study question?

Should medical school grading be multi-tiered (tiered) or pass/fail, particularly in core clinical clerkships?

How was the study done?
The authors conducted a scoping review of literature from 2000 to 2023, analyzing conceptual arguments and empirical data to clarify the implications of each grading system. They included English-language articles about schools in the United States and used meta-synthesis to analyze the data.

What were the results?
Forty articles were included, 22 of which were empirical studies. There were 72 conceptual arguments supporting pass/fail and 49 supporting tiered grading. Pass/fail grading in pre-clerkship courses was consistently associated with improved short-term student well-being and did not negatively affect academic performance. However, evidence for clerkship grading was limited and mixed. Although some students and faculty perceived reduced stress and increased collaboration, pass/fail systems redirected stress toward other avenues through which students sought to distinguish themselves.

Tiered grading was often defended for its role in distinguishing high-performing students and aiding residency applications. Yet, the review found that these grades—especially in clerkships—were often unreliable and subject to bias. Studies showed that only a small portion of grade variance reflected true differences in student performance, and disparities favored White and female students.

How can this be applied to my work in education?
Educators should be cautious in interpreting small numbers of tiered grades and consider the equity implications of their grading systems. Medical schools need a system-level approach to assessment that includes frequent, diverse observations and emphasizes transparency. Ultimately, neither tiered nor pass/fail systems are universally superior. Grading should align with institutional values and be informed by both theory and evidence, complemented by further research.

Editor’s note: This article does a good job of summarizing the literature on pass/fail versus tiered grading. Interestingly, though the focus was supposed to be clerkship grading, some of the included literature addressed preclerkship grading, which is a little different. (AP)


Listen to this!  

Park L, Yau T, van der List L, Li STT. Pediagogy: A Novel, Resident-Based Educational Podcast. Academic Pediatrics. 2025;25(1). https://dx.doi.org/10.1016/j.acap.2024.08.002

Reviewed by Kiara Smith

What was the study question?

Does a free resident-developed, resident-informed medical education podcast for pediatric residents and medical students improve learner confidence and knowledge on the covered pediatric topics?

How was the study done?

A needs assessment of pediatric residents at UC Davis was conducted to guide Pediagogy, a short educational podcast based on national guidelines and resident-selected topics. Two residents served as the primary hosts. Thirty-nine residents and 108 medical students were surveyed at baseline, 1 month, and 6 months, collecting data on training level, prior exposure to topics, and podcast usage. Participants self-selected whether to listen to the podcast. Participants then completed surveys, which included a report of listener status and measured knowledge (via eight board-style questions) and confidence (via Likert scales) in managing pediatric conditions. Surveys were incentivized with monthly raffle entries.

What are the results?

Of the 147 eligible participants, 61% completed the pre-test survey, 39% the 1-month post-test, and 23% the 6-month post-test. Listeners described Pediagogy as “short,” “concise,” and “high yield.” Compared to non-listeners, those who listened to the podcast showed a statistically significant increase in confidence on episode topics at 6 months and performed better on board-style knowledge questions at 1 month (both P < 0.05).

How can this be applied to my work in education?

I’ve worked with a lot of students who worry about the resource gap they face given the high cost of the most reliable resources for medical education, so I’m very interested in identifying and distributing free, accessible, and helpful resources for students. I would love to see more resources like this developed and distributed to students, and I think the encouraging results here indicate that they may help vulnerable student populations prepare for shelf or board exams.

Editor’s Comments: One of the challenges of open-access resources, including podcasts, is establishing robust evaluation data beyond whether users “liked” the resource or perceived a knowledge gain. The authors attempted to obtain such data, though knowledge items were based on a single “board-style” question per episode, and survey responses showed significant attrition over time. How do we, as a medical education community, establish higher-level evaluation data for open-access resources? Also, putting out a shameless plug for PedsCases.com, which includes many free resources in the form of podcasts and “notes” for medical students and residents. (KFO)


Incorporating AI into UME curricula

Liu DS, Sawyer J, Luna A, Aoun J, Wang J, Boachie L, Halabi S, Joe B. Perceptions of US Medical Students on Artificial Intelligence in Medicine: Mixed Methods Survey Study.

JMIR Med Educ. 2022 Oct 21;8(4):e38325. https://dx.doi.org/10.2196/38325

Reviewed by Lisa Cheng

What was the study question?

What are the attitudes, knowledge, and familiarity of US medical students regarding AI in medicine, and what are their preferred AI topics and methods of delivery?

How was it done?

A Qualtrics survey was sent to students at 17 US medical schools (14 allopathic, 3 osteopathic) addressing students’ familiarity with AI’s various uses in medicine and its impact on their future careers, along with their preferred methods and topics for learning more about AI in medicine. Questions were based on similar studies conducted outside the US.

What were the results?

Completion rate was approximately 3.5% (390/11,248 total enrolled students). The majority (~60.5%) were M1 and M2 students. The majority (90%) believed that AI would take on a significant role in medicine during their lifetime. The attitudes of the students toward AI combined excitement (79.4%) with worry about the ethics of using AI (61.3%). Only about 20% of students indicated any knowledge of core AI concepts, and less than half of students could differentiate “hype” AI articles from clinically relevant AI research. While 91.5% of students wanted to learn AI concepts and their use in medicine, less than 10% stated that their medical school provided any resources to explore the topic of AI in medicine. The top three topics that students were interested in learning were (1) fundamental concepts of AI, (2) when to use AI in medicine, and (3) strengths and weaknesses of using AI in medicine.

How can this be applied to my work in education?

At the most recent COMSEP meeting, many people shared that their school had either an ambiguous statement or no statement on the use of AI in medical school. Though this study captured a small percentage of students, it shows a gap between what is offered and what is needed regarding AI in medicine, and it can hopefully be used to advocate for a proactive approach to acknowledging and addressing AI in medicine. AI is a quickly evolving area and can feel overwhelming to understand while concurrently teaching its use and guardrails, but teaching students to recognize its strengths and weaknesses, and in particular the ethical questions to ask when using AI, can help guide future use.

Editor’s Note: This study was published in 2022. I suspect that the number of students who are familiar with (and regularly using) AI has skyrocketed since then, making it imperative that educators address this issue head-on. (JG)