Giving Thanks for Wellness
McGinness A, Raman M, Stallworth D, Natesan S. App-Based Three Good Things and Gratitude Journaling Incentive Program for Burnout in Pediatric Residents: A Nonrandomized Controlled Pilot. Acad Pediatr. 2022 Nov-Dec;22(8):1532-1535. https://dx.doi.org/10.1016/j.acap.2022.05.009
Reviewed by Hillary Anderson and Molly Rideout
What was the study question?
What is the impact of a low-cost, individual-level intervention like Gratitude and 3 Good Things (G/3GT) journaling on resident burnout, gratitude, and life satisfaction?
How was the study done?
This was a nonrandomized controlled pilot study open to all pediatric residents (n = 83) at University of California San Francisco Benioff Children’s Hospital Oakland during the 2020-2021 academic year. Participants were given the opportunity to opt into the intervention – G/3GT journaling (listing 3 things they were grateful for and 3 good things daily for 4 weeks) – or the control group (no specific wellness activities). Intervention (n = 20) and control group (n = 21) enrollment was closed once the researchers reached the limits of their gift card budget. Participants completed several validated surveys at 5 points: before, during, and up to 6 months after the intervention. Surveys included the Oldenburg Burnout Inventory (OLBI), Physician Work Life Study, Six-Item Gratitude Questionnaire, and Satisfaction with Life Scale. All participants were compensated with $5 each time they completed surveys. In addition, the intervention group received $50 for their participation in G/3GT journaling.
What were the results?
The intervention group had a significant decrease in OLBI-Exhaustion subtotal score (p=0.025) compared to the control group, lasting up to 6 months post-intervention. Additional analyses did not show a statistically significant difference between the intervention and control groups, although the OLBI-Disengagement subtotal approached significance (p=0.055).
What are the implications?
Residents (and potentially medical students) may benefit from Gratitude/3 Good Things journaling incentive programs to mitigate burnout, and the effects may last well beyond the program. One interesting limitation identified by the authors is that while prior studies observed that PGY-3s had lower or similar levels of burnout compared with PGY-1s, in this study the intervention group (with a higher proportion of PGY-3s) had a higher level of burnout at baseline. As residency and medical education leaders consider the implementation of wellness interventions, the timing of these interventions for specific levels of trainees is worth further consideration.
Editor’s Note: I commend these authors for trying a simple way to combat burnout when so many other approaches have been tried. It was interesting that only one OLBI subscore was significantly different. The other caution is that this study was nonrandomized, so it is unclear whether the intervention group, having signed up for the program, was simply more likely to respond to positive thinking. (AKP)
Say My Name, Say My Name…Practical Tips for Promoting Psychological Safety During Clinical Clerkships
McClintock AH, Fainstad TL, Jauregui J. Clinician Teacher as Leader: Creating Psychological Safety in the Clinical Learning Environment for Medical Students. Acad Med. 2022;97(11S):S46-S53. https://dx.doi.org/10.1097/ACM.0000000000004913
Reviewed by Lauren K. Kahl, MD
What was the study question?
During clinical rotations, how do leaders promote psychological safety for medical students?
How was it done?
From October 2020 to February 2021, the authors conducted semi-structured interviews of fourth-year medical students who had completed their core clerkships at two separate medical schools. The authors used the psychological safety framework to initially code the interviews, constructivist grounded theory for data outside of this framework, and self-determination theory and critical theory as sensitizing concepts. Students were asked to reflect on their clinical rotations and what impacted the psychological safety of their learning environments. Students were also asked about changing environments, with follow-up questions used to probe interesting findings.
What were the results?
Eighteen students (9 from each school) were interviewed. Sixty-six percent were female, and 27% self-identified as having a racial or ethnic background underrepresented in medicine. Students defined safety as the ability to ask questions without judgment, a flattened hierarchy, and support within the team. Many students noted the simple act of being addressed by name promoted a safe environment. Students also identified several leadership behaviors that promoted psychological safety: setting clear expectations, explicitly expressing a focus on learning, humility, encouraging student participation/autonomy in patient care, acknowledging student effort, and providing learning-centered feedback.
Students also identified features of unsafe environments, and the traditional learner hierarchy was a common theme. Unkindness, avoidance, not inviting students to participate in patient care, asking repeated questions about an already identified knowledge gap, and promoting competition between students were described as components of an unsafe environment. Students also described that this increased their cognitive load, with a shift in focus from active learning to managing their image. Students were quick to “diagnose” their learning environments, and these impressions rarely changed.
What are the implications?
This study identifies simple, concrete ways to promote psychological safety in the clinical learning environment. Interestingly, students described the impact of unsafe environments on their overall learning and cognitive load. A focused effort on promoting psychological safety may also impact student assessment and feedback. Future studies could further describe these relationships.
Editor’s Note: As educators, we are always trying to promote a positive learning environment to improve learning. This study not only reaffirms behaviors that we know contribute to an unsafe learning environment but also adds insights in other areas. The finding about managing image makes me realize that unsafe environments may lead to narrative comments that are not related to a student’s performance. (AKP)
Early Practice with Clinical Reasoning
Waechter J, Allen J, Lee CH, Zwaan L. Development and Pilot Testing of a Data-Rich Clinical Reasoning Training and Assessment Tool. Acad Med. 2022;97:1484–1488. https://dx.doi.org/10.1097/ACM.0000000000004758
Reviewed by Susan Washburn and Melanie Rudnick
What was the study question?
Can online simulation cases with an assessment tool be a useful and feasible way to teach clinical reasoning skills in the pre-clinical years of medical school?
How was it done?
An online digital library of clinical reasoning cases was created. Cases were completed in five stages: introduction, history, physical exam, investigations, and final ranking. Students were assigned points based on identifying information that supported or refuted their leading differential diagnosis, ordering required or inappropriate tests, and having the correct final diagnosis. Students were given access to their scorecards after completing the cases. Two pilot studies were performed.
What were the results?
In the first pilot study, 2 cases were assigned to 74 second-year pre-clerkship students at one school. The researchers found that creating a scorecard for these cases took too long for instructors and that the scorecard was not helpful to students. A case template was created and the software was upgraded to address these issues. In the second pilot, 1 case was administered to 75 second-year students to learn the software, and then 6 additional cases were made available. Students were required to complete 3 cases, with the others being optional. Students completed 376 cases, an average of 5 cases per student. Two experienced clinician instructors created and provided formative feedback to students via the scorecards. Instructors reported that the workload was feasible.
What are the implications?
The learning curve is steep in the transition from the pre-clinical to the clinical environment. Students frequently have trouble applying their vast fund of knowledge and receive little training in clinical reasoning during the pre-clerkship years. This study focused on the development of the software and scorecards and on the feasibility of implementation; more implementation detail would help others gauge whether they could adopt the approach. It will be interesting to know how clinical reasoning improves over time for learners, either by self-report or by faculty assessment of learners in the pre-clinical and clinical environments. The authors augmented the scorecard reports described in the study by adding review sessions in a classroom setting, highlighting the importance of in-person feedback in making simulation cases effective educational tools.
Editor’s Note: As more and more schools implement earlier clinical experiences, case-based learning, and simulation, the line between ‘preclinical’ and ‘clinical’ students becomes more and more blurry. While early and unproven, the virtual cases in this study and others like it may be able to make that transition even smoother and the learning curve less steep. (JG)