February 2023

Hello COMSEP!

Today is Mardi Gras. For those of you who live in communities where that is celebrated, enjoy! For the rest of us, maybe we can take a lesson from the tradition and try to insert a little joy into our lives today.

The theme for this month’s journal club is feedback. In that spirit, please let us know how we’re doing. We welcome any suggestions to help improve the journal club for COMSEP’s membership.

Amit, Karen and Jon

 

Who Gives the Best Feedback?

Mooney CJ, Pascoe JM, Blatt AE, et al. Predictors of faculty narrative evaluation quality in medical school clerkships. Med Educ. 2022;56(12):1223-1231. https://dx.doi.org/10.1111/medu.14911

Reviewed by Daniel Herchline

What was the study question?

How are various process-, faculty-, and student-level characteristics associated with the quality of faculty members’ narrative evaluations of clerkship students?

How was the study done?

Drawing from a theoretical framework based in situated cognition theory, the authors analyzed narrative feedback data from evaluations of medical students across two inpatient rotations on the internal medicine and neurology clerkships. The authors measured narrative quality using the Narrative Evaluation Quality Instrument (NEQI). They also assessed whether NEQI scores were affected by time to evaluation completion, evaluator years of experience, evaluator on-service teaching weeks per year, time evaluators spent with students, student gender, and evaluator gender.

What were the results?

The authors included 247 narrative evaluations representing 50 unique medical students. Scores for students across both clerkships were similar, with an average NEQI of 6.2 for males and 6.6 for females. Narrative quality decreased as time to completion of the evaluation increased (a 0.3-point drop in NEQI every 10 days). Narratives written by female faculty were of higher quality than those written by male faculty. There was no significant difference in total NEQI based on weeks spent on teaching service, time spent with students, or the interaction between evaluator gender and student gender. The usefulness score was positively associated with the time the evaluator spent with the student.

What are the implications?

While narrative comments are important in the assessment of learners in medical education, they remain inconsistent in quality. It is widely accepted that a variety of contextual factors influence the narrative feedback given to learners across the UME and GME spectrum. By contextualizing narrative comments, more accurate assessments of learners may be attainable. Additionally, exploring the individual factors that contribute to higher-quality narrative feedback may provide insights into how to improve narrative feedback on a broader scale.

Editor’s note: Although from a single site, this is another study showing how the quality of narrative evaluations can be influenced by factors outside of the student’s performance. However, we do not know whether female faculty had students who performed better. Also, it may not be the time to complete the evaluation but rather the difficulty of writing a quality narrative for students who did not perform well (AP).

 

Tell Me How You Really Feel…

Robb KA, Rosenbaum ME, Peters MA, Lenoch S, Lancianese D, Miller JL. Self-Assessment in Feedback Conversations: A Complicated Balance. Acad Med. 2023;98:248-254. https://dx.doi.org/10.1097/ACM.0000000000004917

Reviewed by Karen Forbes

What was the study question?

During feedback conversations, how do medical students perceive and respond to self-assessment prompts? What are their perceptions of and approaches to self-assessment in this context?

How was it done?

All rising second-year, third-year, and fourth-year students from one institution were invited to participate in this qualitative study; 25 students distributed across the years participated. Using an interview guide developed from relevant medical education literature, one-on-one interviews were conducted. Open-ended questions were used to explore experiences with feedback and self-assessment during medical school. Audio recordings of interviews were transcribed verbatim, and thematic analysis was conducted, developing codes and themes to identify recurrent ideas and patterns in the data.

What were the results?

There were no significant differences among students based on level of training or other demographic factors. Some students reported that their self-assessment ability increased with experience, but this was not universal. Students could identify benefits to self-assessment before receiving feedback, including directing the feedback conversation, determining alignment with the feedback-giver’s perceptions, encouraging self-reflection, and preparing for self-monitoring as a future physician. However, some students failed to identify any personal benefit to the process and reported censoring their responses to avoid seeming overconfident, or inventing shortcomings, particularly if they could not identify areas of weakness. Others reported fears about being honest due to concerns about the impact on grades and future evaluations.

What are the implications?

Asking students to self-reflect on their performance is complex. While students acknowledge the benefits of self-assessment, they also describe censoring their self-assessments due to concerns about image and implications for evaluation. Furthermore, students may not know what they need feedback on and rely on preceptors to identify areas for improvement. Feedback-givers should not limit the scope of their discussions to only those areas raised by students. This study’s findings underscore the importance of psychological safety in the student-preceptor relationship and learning environment, to encourage students to share a genuine analysis of their performance and to foster a growth mindset in which they understand that feedback is intended for personal improvement.

Editor’s Note: It should be no surprise that students don’t feel safe sharing their self-assessments with faculty. Most of us serve dual roles with students, providing both formative feedback and summative evaluations. It can be confusing for students (and faculty!) to know which role we are playing at any given time. Providing signposts for students that identify the purpose of the conversation may help. (JG)

 

The Power of Why

Berens M, Becker T, Anders S, Sam AH, Raupach T. Effects of Elaboration and Instructor Feedback on Retention of Clinical Reasoning Competence Among Undergraduate Medical Students: A Randomized Crossover Trial. JAMA Netw Open. 2022;5(12):e2245491. https://dx.doi.org/10.1001/jamanetworkopen.2022.45491

Reviewed by Victoria Robinson and Maggie Costich

What was the study question?

Does prompting medical students to elaborate on their clinical reasoning and providing instructor feedback enhance learning and retention?

How was the study done?

In this randomized prospective crossover study, fourth-year medical students attended 10 weekly internal medicine e-seminars focused on developing clinical reasoning skills. In the first student group, half of the study questions asked students to identify the most likely diagnosis (control items), while the other half asked students to both identify the most likely diagnosis and elaborate on how to differentiate the correct diagnosis from the most common incorrect diagnoses (intervention items). Intervention and control items were switched in the second student group. Students then received long-form expert comments that elaborated on the correct answer. The authors assessed differences in scores on control versus intervention items within participants, both on an exit exam immediately following the 10-week course and on a retention test 6 months later.

What were the results?

Students scored significantly better on intervention items than on control items on both the exit exam and the retention test 6 months later, with fewer “common clinical reasoning errors.” Completion of the elaboration question was more highly correlated with retention test performance than prior exam performance.

What are the implications?

Clinician educators often ask medical students questions with discrete answers, e.g., “What is the diagnosis here?” This paper indicates that time spent elucidating students’ clinical reasoning processes (e.g., how to eliminate other possible diagnoses and support their reasoning) may help with both short-term understanding and long-term retention. Much of the iterative learning about forming a differential diagnosis happens during the clinical years when students are seeing patients, building and reinforcing their clinical models. However, during clinical rotations, most feedback about the differential diagnosis happens verbally during medical student presentations on rounds or at the bedside. Further research could explore whether these same effects on understanding and retention hold true with verbal feedback. If so, this could serve as a model to structure teaching and feedback during the clinical years as well.

Editor’s Note: Anyone who works with students who use USMLE question banks like UWorld or AMBOSS will be familiar with the “instructor feedback” portion of this study, which occurs after every response. It would be interesting to tease apart the relative contributions of the elaboration (asking students to explain their reasoning) versus the instructor feedback in improving learning. Either way, this is a seemingly powerful intervention that requires little in the way of faculty resources. (JG)