Get by with a little help from your friends
Khalil MK. Weekly near-peer tutoring sessions improve students’ performance on basic medical sciences and USMLE Step 1 examinations. Med Teach. 2022;44(7):752-757. https://dx.doi.org/10.1080/0142159X.2022.2027901
Reviewed by: Amit Pahwa
What was the study question?
Do near-peer tutoring sessions improve performance in basic science courses and on USMLE Step 1?
How was it done?
Self-selected fourth-year medical students at one school were reviewed and approved as near-peer tutors to conduct 24 weekly sessions. Each 90-minute session consisted of 30 USMLE-style questions on material covered in the second-year courses the previous week. Second-year student participants were categorized as no-low, moderate, or high attendance based on the number of sessions attended. Correlation of attendance with performance on the second-year basic science exams, the Comprehensive Basic Science Examination (CBSE), and Step 1 was assessed using Pearson correlation. Participants who attended more than 8 sessions received a previously validated survey on the effectiveness of the program.
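For readers less familiar with the statistics, here is a minimal sketch of the kind of Pearson correlation the authors describe, using entirely hypothetical attendance counts and exam scores rather than the study’s data:

```python
from scipy.stats import pearsonr

# Hypothetical data: number of weekly sessions attended (0-24) and a
# basic science exam score for eight illustrative students.
sessions_attended = [0, 2, 5, 9, 12, 16, 20, 24]
exam_scores = [68, 70, 72, 75, 78, 80, 84, 88]

# Pearson r measures the strength of the linear association.
r, p_value = pearsonr(sessions_attended, exam_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```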
What were the results?
A total of 388 students were invited to participate, with full data available for 368. Most students (267) had no-low attendance, 73 had moderate attendance, and 27 had high attendance. There was a significant difference in exam performance among the three groups. The high-attendance group performed better on all exams than the no-low attendance group, and better on the basic science exams and CBSE than the moderate-attendance group. Participant evaluations of the sessions were high for the overall session, understanding difficult concepts, monitoring learning progress, and preparation for the M2 year and USMLE. The sessions were rated less helpful in reducing stress and increasing confidence for the USMLE.
What are the implications?
Near-peer sessions are becoming increasingly popular in undergraduate medical education. This study supports the use of near-peer tutors to prepare students for exams in the nonclinical years. Utilizing near-peer tutors may free faculty time to concentrate on other aspects of students’ progress for which near-peer teaching may not be effective. These programs also allow students to develop as educators.
Editor’s Comments: The results over 4 years show a clear correlation between attendance at near-peer sessions and exam performance; however, overall rates of attendance at these optional sessions were strikingly low, with almost three-quarters of students having no or low attendance. This likely speaks to the challenges students face in balancing educational and personal demands and how those demands affect their ability to attend optional sessions. (KFo)
Failing to Fail
Swails JL, Gadgil MA, Goodrum H, Gupta R, Rahbar MH, Bernstam EV. Role of faculty characteristics in failing to fail in clinical clerkships. Med Educ. 2022;56:634-640. https://doi.org/10.1111/medu.14725
Reviewed by: Michele Haight and Cliff Lee
What was the study question?
Is there an association between faculty rank and the likelihood of submitting a low performance evaluation (LPE) for medical students during third-year clerkships?
How was it done?
Individual faculty evaluations of supervised medical students who completed their third-year clerkships were analyzed over a 15-year period (January 2007-April 2021) at a single institution. All midpoint and final medical student evaluations for the Family and Community Medicine, Internal Medicine, Pediatrics, Psychiatry, and Neurology clerkships were included. The study authors utilized a generalized mixed regression model. In addition to rank, other available faculty factors were age, race, ethnicity, and gender.
What were the results?
A total of 50,120 evaluations were identified (32,024 final evaluations [64%] and 18,096 midpoint evaluations [36%]), completed by 585 faculty evaluators on 3,447 medical students. A total of 1,418 (2.8%) LPEs were given, with female evaluators accounting for 63% of the LPEs. Full professors were more likely than assistant professors to submit summative LPEs (OR=1.62, 95% CI [1.08, 2.43]; p=0.02). There was no significant difference between associate professors and assistant professors. LPEs were more common at midpoint (4.9%) than at the final evaluation (1.6%) (OR=4.004, 95% CI [3.59, 4.53]; p<0.001). Women were more likely than men to give LPEs (OR=1.88, 95% CI [1.37, 2.58]; p=0.01). The likelihood of an LPE decreased significantly over the 15-year study period (OR=0.94, 95% CI [0.90, 0.97]; p<0.01). Age, race, ethnicity, and interaction effects were dropped from the final model because they were not statistically significant. Faculty experience was not associated with LPEs.
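As a refresher on how such odds ratios relate to the underlying regression output, here is a minimal sketch; the coefficient and standard error below are hypothetical values, chosen only because they roughly reproduce the reported full-professor OR of 1.62 [1.08, 2.43]:

```python
import math

# Hypothetical log-odds coefficient (beta) and standard error (se) for a
# binary predictor such as "full professor vs assistant professor".
beta, se = 0.48, 0.21

odds_ratio = math.exp(beta)          # OR = e^beta
ci_low = math.exp(beta - 1.96 * se)  # lower bound of 95% CI
ci_high = math.exp(beta + 1.96 * se) # upper bound of 95% CI
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```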
What are the implications?
While LPEs are relatively rare, this study does show a difference in which faculty are likely to submit them. The authors hypothesized that junior faculty may hesitate to document an LPE for fear of negative consequences, particularly poor teaching evaluations, as these are an important component of assessment for promotion, selection for teaching awards, and other professional advancement opportunities. Accurately identifying low-performing students is not only necessary to help students improve; it is also an essential ethical responsibility to maintain the public good. A more standardized framework for giving and documenting LPEs needs to be established.
Editor’s Note: This study continues to highlight how our assessment methods may lack standardization, and as medical school course directors we need to continue to improve how we assess students. However, the design of this study has limitations: it may simply be that assistant professors and women do the bulk of the teaching and are therefore more likely to encounter low-performing students. (AKP)
An argument for the value of MCAT scores
Hanson JT, Busche K, Elks ML, Jackson-Williams LE, et al. The validity of MCAT scores in predicting students’ performance and progress in medical school: results from a multisite study. Acad Med. 2022;97(9):1374-1384. https://dx.doi.org/10.1097/ACM.0000000000004754
Reviewed by: Srividya Naganathan
What was the study question?
Do Medical College Admission Test (MCAT) scores and undergraduate grade point averages (UGPAs) predict students’ performance in preclerkship and clerkship courses and on the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 CK exams?
How was the study done?
Two groups of medical schools were included in the study: Group 1 (validity schools) comprised 15 US and 2 Canadian schools; Group 2 comprised all 148 US medical schools with regular MD programs. Data collected from Group 1 included MCAT scores, UGPAs, preclerkship and clerkship performance, and USMLE Step 1 and Step 2 scores; data from Group 2 included school performance and USMLE Step 1 and Step 2 scores. MCAT scores were obtained from the AAMC database. UGPAs were extracted from a centralized service or standardized to a 0-4 scale. Preclerkship and clerkship scores were calculated as means based on a combination of written exams, practical exams, and evaluations. Step 1 and Step 2 were assessed as pass/fail outcomes. Data were analyzed separately by school and cohort using linear and logistic regression, with MCAT scores alone, UGPAs alone, or both combined as predictors, and preclerkship/clerkship performance and Step 1 and Step 2 results as outcomes. In addition, the predictions were analyzed with respect to race, ethnicity, socioeconomic status, and gender.
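To make the single-predictor versus combined-predictor comparison concrete, here is a minimal sketch using simulated, entirely hypothetical data (not the study’s data), comparing how well each predictor, alone and combined, tracks an outcome via the multiple correlation R from ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
mcat = rng.normal(510, 6, n)    # hypothetical MCAT total scores
ugpa = rng.normal(3.6, 0.3, n)  # hypothetical undergraduate GPAs
# Simulated preclerkship score influenced by both predictors plus noise.
score = 0.5 * mcat + 8 * ugpa + rng.normal(0, 5, n)

def multiple_r(X, y):
    """Correlation between observed y and OLS-fitted values for predictors X."""
    X = np.column_stack([np.ones(len(y)), X])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.corrcoef(X @ beta, y)[0, 1]

print("MCAT alone:", round(multiple_r(mcat[:, None], score), 2))
print("UGPA alone:", round(multiple_r(ugpa[:, None], score), 2))
print("Combined:  ", round(multiple_r(np.column_stack([mcat, ugpa]), score), 2))
```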
What were the results?
Median correlations between MCAT scores and preclerkship performance, Step 1 results, and Step 2 results were 0.59, 0.59, and 0.53, respectively, compared with UGPA correlations of 0.55, 0.50, and 0.47. When MCAT scores and UGPAs were combined, the median correlations increased to 0.66, 0.62, and 0.57, respectively. The results were comparable across students of different backgrounds and genders.
How can I apply this to my teaching?
The use of standardized test scores as a screening tool in the admissions process is a debated topic. This multicenter study validates that MCAT scores and UGPAs combined reliably predict performance in preclerkship and clerkship courses as well as on standardized exams. Furthermore, the predictive validity of MCAT scores and UGPAs held among students from different ethnic groups and different socioeconomic backgrounds.
Editor’s Note: It is not surprising that MCAT scores (based on MCQs) and UGPAs (often based on MCQs for science majors) predict performance in preclerkship/clerkship courses and on USMLE exams, all of which are largely based on MCQs. We have trained a generation of students to crush MCQ exams, largely by doing hundreds of them in their free time. It would be nice to reward them for doing something else. (JG)