October 2025

Hello COMSEP!

The fall season finally seems to be in full swing, which, depending on where you live, may or may not involve more and more elaborate decorations on front lawns and houses each year. The infographic below may shed some light. This and similar survey data can be found at this website, appropriately named Bloody Disgusting.

This month’s JC covers a range of topics, from AI (again) to the MSPE to TBL and clinical reasoning. The last one was co-authored by our own Karen Forbes, but no conflict of interest here: she was not involved in either the selection or the editing of the review.

And one more reminder that we are seeking new reviewers starting in December. Please participate and encourage your learners (students, residents, and fellows) to participate. It’s fun, easy, and a resume-builder, and it enrolls you in a raffle to win a free registration for the COMSEP Annual Meeting in 2027!

Enjoy,

Jon, Karen and Amit


AI outperforms humans again. 

Wang, Z., Fan, TT., Li, ML. et al. Feasibility study of using GPT for history-taking training in medical education: a randomized clinical trial. BMC Med Educ 25, 1030 (2025). https://doi.org/10.1186/s12909-025-07614-9

Reviewed by Jill Forbess

What was the study question?

How effective is a custom-designed GPT model for simulated patient encounters in enhancing medical students’ history-taking skills, compared to traditional standardized patient encounters?

How was the study done?

56 medical students with limited prior patient interview experience were randomized to either a GPT-simulated patient group or a traditional standardized patient (SP) group, with the SP portrayed by an instructor. Both groups completed 2 to 3 encounters three times weekly for four weeks using matching clinical scenarios. Both the GPT system and the instructors provided written structured feedback using a rubric centered on four key competencies: history-taking skills, clinical reasoning skills, communication skills, and professional behavior. Performance was measured by pre- and post-training objective structured clinical examinations (OSCEs) scored by blinded instructors. Student feedback regarding educational effectiveness and satisfaction was also assessed.

What were the results?

Although the pre-training OSCE scores showed no significant differences between groups, the GPT group showed significantly higher post-training OSCE scores (86.79 ± 5.46 vs. 73.64 ± 4.76, P < 0.001), demonstrating a strong educational benefit. Students rated the GPT as more effective in encouraging ongoing self-directed learning, enhancing enthusiasm for learning, and improving communication skills and logical reasoning ability, and those in the GPT group reported less interview anxiety.

What are the implications? 

This study indicates that GPT-generated simulated patient encounters are effective in enhancing medical students’ skills and can be a useful pedagogical tool to augment training. The authors acknowledge limitations of the GPT system, citing its inability to assess visual non-verbal behaviors, which are an important aspect of communication and trust-building with patients. Students’ overall satisfaction and engagement indicate this training method could improve skills attainment and augment self-directed learning.

Editor’s Note: Although this was a small sample, the robust response was impressive. Possible explanations include decreased student anxiety and the ability to receive feedback more immediately (and without resistance) using the GPT model. (JG)


Team-based clinical decision making

Mitchell, M. Rashid, J. Foulds, and K. Forbes, Thinking About the Why: A Qualitative Study on Students’ Perspectives of Paediatric Team-Based Learning Discussions, The Clinical Teacher 22, no. 5 (2025): e70161, https://doi.org/10.1111/tct.70161

Reviewed by Brittany Lissinna 

What was the study question? 

What can team-based learning discussions among pediatric clerkship students reveal about their thought processes in clinical decision-making?

How was the study done? 

The authors used a descriptive qualitative design to understand the core social processes and decision-making related to team-based learning (TBL) discussions. Six focus groups consisting of a total of 26 third-year medical students on their pediatrics clerkship rotation were held 1-2 weeks after a TBL session that focused on key-feature questions (KFQs). The focus groups were structured to ask questions about clinical reasoning and peer teaching.  Transcripts from the focus groups were analyzed in an iterative process with data collection and analysis happening simultaneously. Key conceptual areas and themes were identified, and connections were drawn between common thoughts, processes and conditions experienced. 

What were the results? 

Five themes were identified: self-confidence, co-learning with peers, trust, clinical reasoning strategies, and clinical application.

  • Enablers and deterrents to self-confidence were identified. One phenomenon that fit both was “calibration with peers,” in which students gauged their confidence against that of their peers. Alignment with peers could increase self-confidence, but discrepancy could lead to a breakdown of confidence.
  • Co-learning with peers allowed students to identify knowledge gaps and practice, especially in the setting of limited clinical experience. The concept of trust was explored, revealing that a peer’s position of authority, and the sense of authority a peer embodied, influenced how likely students were to follow their line of reasoning.
  • Participants discussed their use of clinical reasoning strategies, including pertinent positives and negatives, illness scripts, problem representations, and integrating peer feedback. They also reflected that the TBL sessions facilitated practice with the principles of choosing wisely and committing to a decision. Committing to a decision was both uncomfortable and recognized as important by participating students.

How can this be applied to my work in education? 

The factors that shaped clinical reasoning among medical students during the pediatric TBL sessions described in this paper reflect the complex personal and group identities that play a role in students’ learning. Developing trusting relationships and explicitly labelling clinical reasoning strategies as they are used, in both classroom and clinical settings, can facilitate focused practice of these skills.

Editor’s Note: Studies like this are always interesting because they give us more insight into how medical students learn during their sessions. Many of the themes here are related to social learning and probably apply to sessions on subjects other than clinical reasoning. (AP)


Towards a more useful MSPE

Mullikin, Dolores R. MD, MHPE; Pineda, Amy; Addams, Amy; Howley, Lisa Doyle PhD. Program Director Perspectives on the Utility of the Medical Student Performance Evaluation Shared During the Transition to Residency. Academic Medicine 100(9):1067-1073, September 2025. https://dx.doi.org/10.1097/ACM.0000000000006096

Reviewed by Natalie Dous and Srividya Naganathan

What was the study question?

What qualities of the Medical Student Performance Evaluation (MSPE) do program directors perceive as useful and what areas need improvement?

How was the study done?

This study analyzed feedback from the 2020 and 2021 Resident Readiness Surveys (RRS). The RRS is an annual survey administered by the Association of American Medical Colleges to program directors (PDs) to provide feedback to medical schools about graduates’ readiness for residency. The survey asked about the usefulness of the Medical Student Performance Evaluation (MSPE) and solicited open-ended comments. Responses from PDs who both recalled the MSPE of individual interns accepted to their programs and answered the usefulness question with an accompanying comment were included in the data analysis. Two investigators independently coded the comments and used thematic analysis.

What were the results?

The 2-year survey response rate was 62% (3,893 of 6,253 invited PDs), yielding 1,881 study-eligible learner surveys. Overall, 83% of PDs perceived the MSPE as useful or somewhat useful, particularly when it provided an accurate and holistic description of the learner. PDs especially valued narrative comments that provided context and specificity about clinical knowledge and skills (Academic Progress section), as well as descriptions of personal attributes that highlighted unique qualities (Noteworthy Characteristics section). However, the lack of standardized assessment methods, insufficient comparison with other learners, and inadequate emphasis on areas for professional development were identified as limitations. Many PDs expressed a desire for more standardized tools and clearer documentation of opportunities for growth.

How can I apply this to my teaching?

This study highlights the importance of standardized assessments in the MSPE that include high-quality narrative comments, descriptions of personal attributes, improved differentiation among learners, and a focus on areas for professional development, so the document provides meaningful information to PDs. This approach may enhance the MSPE’s value and support a smoother transition into residency.

Editor’s Comments: It is refreshing to see that PDs desired more emphasis on an individual’s areas for continued professional development which may help programs set up learners for success in their residency training. Not only would this increase the perceived authenticity of the MSPE, but it better aligns with competency-based medical education and with a growth mindset. (KFO)