December 2019


You must remember this!

Cognitive learning theory for clinical teaching.

McSparron JI, Vanka A, Smith CC. Clin Teach. 2018;15:1-5. https://doi.org/10.1111/tct.12781

Reviewed by Gary Beck Dallaghan


What was the study question? Cognitive learning theory (CLT) is applied in medical school classrooms, focusing on engaged learning activities to help medical students retain material.  How does that work in a clinical environment?  Can science-of-learning principles be employed on the clerkships?

What is cognitive learning theory? CLT is a learning theory that focuses on how information is perceived and processed.  The six key aspects of CLT are retrieval practice, spaced learning, interleaving, generation, deliberate practice/reflection, and elaboration.  The authors provide detail and examples of how each can be applied in clinical medical education.

What were the results? Socrates mastered the art of retrieval practice, using questioning to revisit key learning points.  You can also ask learners to identify their own key learning points.  Spaced learning, returning to those points over time, enhances long-term retention.  Interleaving transfers knowledge from one situation to another, for example considering how management of asthma differs from one patient to the next.  Generating an answer or solving a problem fosters learning, so consider case-based learning rather than lectures on the clerkship.  Deliberate practice is a structured approach designed to improve performance, which is why specific, actionable feedback is so important.  Consider carving out time for students to reflect on their patients and identify gaps in their knowledge.  Use elaboration, asking “what if” questions, to get medical students to relate prior knowledge to new situations.

What are the implications of these findings? This clinical teacher toolbox article offers practical tips for applying CLT in a clinical environment.  The authors provide a table with several examples for each of the key aspects.  For many a busy clinician, this article will further reinforce teaching approaches already being practiced, but may also provide some new ideas to add to your teaching portfolio.

Editor’s Note:  It is nice to see that many of the time-tested strategies we use in clinical teaching align well with evidence-based educational practice.  If nothing else, being deliberate about including such strategies will ensure that our students get the most out of our limited time with them. (JG)

Do students think clerkship grading is fair and accurate?

In Pursuit of Honors: A Multi-Institutional Study of Students' Perceptions of Clerkship Evaluation and Grading.

Bullock JL, et al. Acad Med. 2019;94:S48-S56. https://dx.doi.org/10.1097/ACM.0000000000002905

Reviewed by Srividya Naganathan

What was the study question? What are medical students' perceptions of the fairness and accuracy of core clerkship assessment, the clerkship learning environment, and contributors to students' achievement?

How was the study done? This was a multi-institutional, cross-sectional survey of fourth-year medical students at six U.S. schools, performed at the end of core clerkships. The survey included 106 items addressing participant demographics, self-reported number of honors earned, number of clerkships taken, intended specialty, perceived impact of various domains on their final grade, and perceptions of grading (fairness, accuracy) and the clerkship learning environment (motivation, stereotype threat).  Descriptive statistics were used for demographics, chi-square tests for subgroup comparisons, and multivariable regression analysis to explore the relationship between student demographics and perceptions and the number of honors earned.

What were the results? The survey response rate was 71.1%. Only 44.4% of students agreed that grading was fair. Fewer than two-thirds of students felt that clerkship evaluations were accurate or that the feedback they received was useful (60.8% and 61.7%, respectively). Resident evaluation procedures were considered fair by 70.0% of students, while only 41.7% agreed that attending evaluation procedures were fair. One-third of students (33.6%) felt grading was biased, with women more likely than men to perceive bias (64.4% vs 25.2%, P < .0005), as were under-represented minority students compared with non-under-represented students (48.1% vs 31.4%, P = .0001). About 18.3% of student responses indicated vulnerability to stereotype threat based on race. Not surprisingly, earning honors was positively associated with applying into a more competitive specialty (beta = 0.18, P < .0005) and with perceiving evaluations as more accurate (beta = 0.29, P < .0005), and negatively associated with stereotype threat (beta = -0.162, P < .0005).

What are the implications of these findings?  This study demonstrates that many medical students perceive the evaluation and grading process during core clerkships as unfair and potentially biased.

Editor’s note: I am confident that all clerkship directors could have predicted the results of this survey. We know that students’ perceptions of clerkship grading are not favorable unless the student receives the highest grade. That’s when they’re happy. The obvious next task is to strive to make clerkship evaluation as transparent and accurate as possible. We have a long way to go to achieve that result. (RR)

Strategies to improve diagnostic reasoning

Interactive whiteboard use in clinical reasoning sessions to teach diagnostic test ordering and interpretation to undergraduate medical students

Gouzi et al. BMC Med Educ. 2019;19:424. https://doi.org/10.1186/s12909-019-1834-1

Reviewed by Aleisha Nabower

What was the study question?  Does the use of interactive whiteboards (IWBs) in clinical reasoning learning (CRL) sessions improve medical student education about diagnostic test ordering and interpretation compared with traditional courses?

How was the study done?  Third-year medical students in a 6-year system were randomly assigned to the traditional curriculum (CM) or to interactive whiteboard teaching in addition to the traditional curriculum (IWB/CRL + CM). All students participating in a respiratory medicine rotation were mentored by senior physicians and participated in five 1-hour courses on test ordering and interpretation. Students in the IWB/CRL + CM group also participated in four 90-minute CRL/IWB small group sessions where they used an IWB to gather clinical information, discuss the pertinence of that information, generate clinical hypotheses, and order tests to explore those hypotheses. Results of diagnostic tests appeared on the IWB for discussion. Pre- and post-rotation assessments on the appropriateness of test ordering and interpretation were conducted for both groups. Additionally, students in years 3 to 6 of the program were invited to complete an online questionnaire that addressed their self-confidence in diagnostic test ordering and interpretation and, for those who had participated, their perceptions of the IWB/CRL sessions.

What were the results?  Twenty-three students participated in both the pre- and post-training assessments (11/40 in CM and 12/40 in IWB/CRL + CM). All students improved in testing parameters over time. The number of diagnostic tests ordered increased over time in the CM group but did not change in the IWB/CRL + CM group. Test interpretability increased only in the IWB/CRL + CM group.  A total of 233 students completed the questionnaire (206 CM and 27 IWB/CRL + CM). Over 90% of students expressed a need for more training sessions on diagnostic test ordering and interpretation. Self-reported understanding of indications for testing increased as students progressed through the program. Students who completed the IWB/CRL sessions were more likely than peers at the same level of training to indicate testing a hypothesis as their reason for ordering a diagnostic test, and they reported practices similar to those of more advanced students.

What are the implications?  Incorporating IWBs into problem-based learning methods seems to improve student participation in, and understanding of, ordering and interpreting diagnostic tests. While this study did not directly compare CRL with and without IWBs, the authors propose that several of the learning objectives (verifying interpretability, identifying nonclinical signs) would not have been addressed in CRL sessions without the IWB.

Editor’s Comments: It is hard to make broad generalizations about the efficacy of the interactive whiteboard from such a small study group. However, it makes sense that there would be a real and perceived benefit to having students actively discuss the rationale for testing as they articulate their clinical decision-making processes. (KFo)