Hello COMSEP!
Many of us are now facing the ‘post-conference blues’, that inevitable period in which energy, enthusiasm, and a basketful of new ideas run headlong into the quicksand of our obligations and daily work.
Don’t lose that momentum! Take out your notes from the annual meeting and remind yourself of what you wanted to accomplish. Pick one small project and take the first step to making it happen.
Read this edition of the COMSEP Journal Club to see what other medical educators are up to. And, if you are curious and motivated, volunteer to write a review for a future edition. We are now looking for reviewers for summer and fall. Reach out to COMSEP to volunteer or if you have any questions.
Enjoy!
Amit, Jon and Karen
Is there an “A” in team?
Sawicki JG, Sriram K, Hansen I, Good B. Association between inpatient team continuity and clerkship student academic performance. J Hosp Med. 2024;19(5):349-355. DOI: 10.1002/jhm.13273
Reviewed by Amit Pahwa
What was the study question?
Does increased continuity with residents and attendings on an inpatient team improve student performance on the clerkship?
How was it done?
The authors retrospectively identified students who completed the pediatric clerkship at one institution. Continuity with a resident, attending, or the overall team was defined as the maximum number of days the student was supervised by the same resident, attending, or team, respectively. The primary outcome was the student’s performance on an aggregated inpatient preceptor assessment, rated from 0 to 4. Secondary outcomes included OSCE scores, shelf exam scores, and final clerkship grade. The study team performed multiple linear regression using different models and confounders depending on the outcome.
What were the results?
The study included 227 students, who mostly identified as male (55%) and white, non-Hispanic (79%). Most students (68%) were on a team with only attendings and senior residents; the rest were on a team with one attending, one senior resident, and two interns. The maximum number of days a student spent with the same attending was 9.5, with the same resident 13.39, and with the same team 8.1. For every 1-day increase in student-resident, student-attending, or student-team continuity, there was a small but significant increase in preceptor rating (0.01 – 0.04) as well as in final clerkship grade (0.01 – 0.02). There was no significant correlation with OSCE or shelf exam scores. Students with the least continuity with their team were most likely to receive a pass, and those with the highest continuity were more likely to receive a high pass.
How can this be applied to my work in education?
While we cannot presume causation from increased continuity, there are some reasons continuity could improve student performance ratings. One possibility is that more direct observation allows a supervising resident or attending to develop a better sense of a student’s performance. Another is that continuity builds trust in students, allowing them to take on more tasks and demonstrate their performance. As course directors, striving for either could benefit our students.
Editor’s Note: It makes sense that more continuity would lead to higher quality feedback, which, if internalized, would improve performance. It would have been nice to see improvement in an external measure like an OSCE, which would relieve the concern mentioned by the authors that bias could lead to a higher rating by preceptors for students whom they know better. (JG)
Feedback is hard--with patients, too.
Sehlbach C, Bosveld MH, Romme S, Nijhuis MA, Govaerts MJ, Smeenk FW. Challenges in engaging patients in feedback conversations for health care professionals’ workplace learning. Medical Education 2024;1-10. DOI: 10.1111/medu.15313
Reviewed by Meera Ratani and Antoinette Spoto-Cannons
What was the study question?
What are patients' and health care professionals' perceptions regarding patient feedback for learning and integrating patient feedback into feedback conversations?
How was it done?
This qualitative study used semi-structured interviews with 12 healthcare providers and 10 patient consultants. The interview data were analyzed thematically using an inductive approach.
What were the results?
The study revealed that both patients and healthcare professionals acknowledge the importance and value of patient feedback for learning and improving healthcare. However, both groups noted challenges in effectively integrating patient feedback, including the need to train healthcare providers to invite patients into the feedback conversation. Additionally, role conflict emerged as a significant barrier, as patients had to take on the role of educator in addition to their patient role. At the same time, this shift in roles also empowered patients once they entered the role of educator, altering power dynamics within the feedback exchange.
How can this be applied to my work in education?
In healthcare education, these findings underscore the importance of training healthcare professionals to invite patient feedback and navigate feedback conversations while addressing power dynamics, determining whom to ask and when, and coping with vulnerability. By equipping professionals with the necessary skills and tools, healthcare education programs can foster a culture of continuous improvement and patient-centered care. Moreover, recognizing patients as valuable educators can lead to a more inclusive and collaborative learning environment that benefits both patients and professionals. Additionally, patient organizations could provide resources to empower patients to give feedback and participate in feedback conversations, further enhancing the learning experience for all stakeholders involved.
Editor’s Comments: This article describes the complexities of seeking feedback, including vulnerability, power dynamics and conflicting roles, from the perspectives and context of healthcare professionals and patients. As educators, we regularly provide feedback to medical learners, and some of those feedback conversations can be challenging indeed. I was struck by how valuable it could be for junior medical learners if clinician educators role model the seeking of patient feedback with the associated vulnerability. (KFo)
Dr. ChatGPT
Kung TH, Cheatham M, Medenilla CS, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health 2023;2(2). DOI: 10.1371/journal.pdig.0000198
Reviewed by Danny MacKenzie and Lisa Cheng
What was the study question?
Two questions: 1) How does ChatGPT perform on USMLE Steps 1, 2, and 3? 2) How well does it justify its answer choices?
How was it done?
ChatGPT was tested on 350 questions taken from the June 2022 USMLE-published practice Step 1, 2, and 3 exams, which were released after the model’s training data were collected. The authors tested 3 different inputs for each question: 1) open-ended (no answer choices), 2) multiple choice with no justification, and 3) multiple choice with forced justification. ChatGPT’s answers were judged by 2 blinded physician reviewers with appropriate inter-rater reliability.
What were the results?
For USMLE Steps 1, 2, and 3, ChatGPT scored 45%, 54%, and 62%, respectively, on the open-ended questions and 36%, 57%, and 56% on the multiple-choice questions. When assessing ChatGPT’s explanations for its answers, the blinded physician reviewers found that ChatGPT provided logically flowing arguments (internal concordance) with novel, non-obvious, and valid information not provided in the question stem (unique insights). When its answers were incorrect, its concordance and insight suffered.
How can I apply this to my work in education?
The passing USMLE score for test-takers is generally about 60%. Although the study reports a higher score for ChatGPT when “indeterminate responses” are excluded, when those responses are included, ChatGPT generally does not pass (as noted above). Even so, these scores are a promising accomplishment for an AI that was not explicitly trained on medical information.
From an educational perspective, ChatGPT did demonstrate elements of teaching: it presented new or unknown information and modeled logical arguments. However, given its inaccuracy, ChatGPT responses should be interpreted with caution and not as a replacement for standard educational tools. Because students will likely use ChatGPT as a study aid, strategies for recognizing when its answers lack concordance or insight could be a worthwhile next area of study.
Editor’s Note: Given the rate of improvement of these tools, I suspect some AI product could pass these exams today. Perhaps more importantly, AI has an increasing role in health care overall, raising the question of whether standardized exams are the best assessment tool for medical students in today’s world. The companion piece to this article raises some interesting questions. (JG)