December 2021


COMSEP Journal Club
December 2021

Editors: Jon Gold, Amit Pahwa and Karen Forbes

PNAPE: New tool on the block

Evaluation and Improvement of Intern Progress Note Assessments and Plans. Kelly MM, Sklansky DJ, Nackers KA et al. Hospital Pediatrics. 2021;11(4):401-405. https://doi.org/10.1542/hpeds.2020-003244

Reviewed by Jareetha Abdul-Raheem

What was the study question?  

The study team sought both to develop a tool with high inter-rater reliability for evaluating assessments and plans and to use that tool to evaluate the efficacy of an education bundle on intern note quality.

How was this study done?

An 8-member stakeholder team consisting of pediatric residents, a hospitalist, an intensivist, residency program directors, and a chief medical informatics officer convened to develop a 19-item Progress Note Assessment and Plan Evaluation (PNAPE) tool and a bundled intervention. The tool and bundled intervention were generated using published literature, institutional best-practice guidelines, and consultation with a departmental billing expert. The bundled intervention included a new electronic health record (EHR) note template and a 2.5-hour educational workshop. Team members delivered the educational workshop to all pediatric interns in August 2018. The workshop provided a didactic overview of the purpose of progress notes, note-writing best practices, and the new template, along with an opportunity for interns to practice writing and assessing notes using the tool. Blinded assessors then used the PNAPE tool to assess 39 progress notes written by 13 interns in each of the pre- and post-intervention periods (fall 2017 and fall 2018).

What were the results?

PNAPE had high inter-rater reliability between assessors, with an intraclass correlation coefficient of 0.86 (95% confidence interval: 0.66–0.95). Following implementation of the intervention bundle, assessment and plan quality improved significantly, from a score of 13 (interquartile range [IQR]: 12–15) to 15 (IQR: 14–17; P = .008). Most of the score improvement was attributed to a higher proportion of assessments and plans identifying the primary problem requiring ongoing hospitalization and the progress of that problem (P = .0016 and P = .001, respectively). The median note file time also decreased, from 4:30 PM (IQR: 2:33 PM–6:20 PM) to 1:13 PM (IQR: 12:05 PM–3:59 PM; P = .001), following implementation of the bundle.

What are the implications?

The assessment and plan is one of the most important parts of the medical note, yet few tools or resources exist that either evaluate assessment and plan quality or offer guidance on how to create a high-quality assessment and plan. The PNAPE tool and intervention bundle offer a potential solution, providing both a standard methodology for evaluating the assessment and plan and a way to improve its quality.

Editor’s Note: The quality of notes in the electronic health record is something we all feel is lacking. The study team found a simple way to improve the quality of the assessment and plan, as well as an objective method for assessing note quality rather than relying on subjective judgments. One criticism, though, is that the curriculum was designed to show the study participants what PNAPE is. (AP)

 

Computers grading students

 

Machine Scoring of Medical Students’ Written Clinical Reasoning: Initial Validity Evidence. Cianciolo AT, LaVoie N, Parker J. Acad Med. 2021;96:1026-1035. https://dx.doi.org/10.1097/ACM.0000000000004010

 

Reviewed by Gary L. Beck Dallaghan

 

What was the study question?

Can machine learning be used to score diagnostic justification essays (a previously validated tool to assess clinical reasoning) in place of faculty raters?

 

How was the study done?

Machine learning applies computer algorithms that “learn” by identifying patterns in datasets and using those patterns to make inferences about new data. The authors developed machine-scoring algorithms to score a sample of 700 diagnostic justification essays written by 414 third-year medical students as part of their summative clinical competency exam between 2012 and 2017. They compared the machine learning scores with research assistants’ ratings, original faculty ratings, and archival academic performance data for the same students who wrote the essays.

 

What were the results?

Response-process validity was established by correlating machine scores with student research assistant and faculty ratings. Internal-structure validity was established by examining replicability of scores within and across cases. Validity evidence for associations with other variables compared machine scores and faculty ratings with measures of medical knowledge (e.g., USMLE exam scores), clinical cognition (performance on this and earlier summative clinical examinations), and clinical communication (faculty ratings from clerkships). All three sources of validity evidence showed moderate to strong associations, although some case specificity was noted.

 

What are the implications?

The authors demonstrated that machine learning algorithms can accurately grade clinical reasoning essays. However, the difficulty of developing such an algorithm, particularly if the technical expertise is not available at your institution, makes this approach impractical for most.

 

Editor’s Note: It’s not quite time to put away your favorite red marker just yet. But it is interesting to note that machine scores correlated better with average faculty ratings than faculty ratings did with one another; in other words, faculty ratings are more inconsistent. As we think about reliable assessment data at educational transition points that go beyond multiple-choice questions, machine learning may be a useful tool. (JG)


Early clinical electives to help students in residency planning

 

How Do Clinical Electives during the Clerkship Year Influence Career Exploration? A Qualitative Study. Sheu L, et al. Teaching and Learning in Medicine. Published online April 1, 2021. https://dx.doi.org/10.1080/10401334.2021.1891545

 

Reviewed by Zakary Woods and Elizabeth Van Opstal

 

What was the study question?

How do clinical electives during the clerkship year influence career exploration?

 

How was the study done? 

The University of California, San Francisco established a new curriculum that included a Clinical Immersive Experiences (CIEx) Program during the clerkship year, developed to promote career exploration. Students were required to complete three two-week, pass/fail CIEx electives to graduate. Eighteen of 132 eligible fourth-year medical students who had recently completed the program were randomly selected and invited to participate in individual semi-structured interviews. Social Cognitive Career Theory (SCCT) served as the framework to guide interpretation of the interview data. For each CIEx, students were asked their reasons for choosing it, how it aided their career development, and how it affected their future plans. The interviewer included probes that mapped to the three variables of SCCT: personal goals, self-efficacy, and outcome expectations. Interview transcripts were analyzed using the template analysis method.

 

What were the results?

Fifteen (83%) of the 18 invited students participated, and the interviews covered 27 different CIEx electives. Three major themes were identified: CIEx electives (1) facilitate personalized career exploration, (2) promote focused learning and skills development, and (3) foster a positive learning environment. Within the first theme, the authors focused on how the curriculum allowed students to set personal goals and develop outcome expectations; students commented that the electives helped them gain more information about specialties, making it easier to choose for or against a specialty. For the second and third themes, interviews focused on self-efficacy and personal goals. Students described the positive learning environment of the CIEx electives in terms of wellbeing: one called the experience a “breath of fresh air” between core rotations, and several commented positively on the electives being pass/fail.

 

What are the implications?

The inclusion of clinical electives during the clerkship year benefited students’ career exploration by helping them develop and adjust their career goals through self-selected experiences. Allowing medical students to participate in electives earlier in medical school may help ease the anxiety and difficulty many students face when deciding on a career. These electives also helped improve medical student wellbeing.

 

Editor’s Note: One thing that stood out to me was that although “personal goals” was highlighted as a benefit of the clinical immersive experiences, there was no mention of “getting a reference letter” as a goal of these electives. Students, at least at my institution, are very focused on the Match, and an important purpose of elective experiences is to secure letters of recommendation. Asking for LORs can be very stressful, even if the elective is non-evaluative; it was therefore surprising to me that this did not come up in the study. (KFo)