Predicting and enhancing American Board of Surgery In-Training Examination performance: Does writing questions really help?

Ross E. Willis, Daniel L. Dent, Joseph D. Love, Jason W. Kempenich, John Uecker, Kimberly M. Brown, J. Scott Thomas, Pedro P. Gomez, Andrew J. Adams, John R. Admire, Julie M. Sprunt, Kristen M. Kahrig, Katie Wiggins-Dohlvik

Research output: Contribution to journal › Article › peer-review


Abstract

Background: The generative learning model posits that individuals remember content they have generated better than materials created by others. The goals of this study were to evaluate question generation as a study method for the American Board of Surgery In-Training Examination (ABSITE) and to determine whether practice test scores and other data predict ABSITE performance.

Methods: Residents (n = 206) from 6 general surgery programs were randomly assigned to one of two study conditions. One group wrote questions for practice examinations. All residents took 2 practice examinations.

Results: There was no significant effect of writing questions on ABSITE score. Practice test scores, United States Medical Licensing Examination Step 1 scores, and previous ABSITE scores were significantly correlated with ABSITE performance.

Conclusions: The generative learning model was not supported. Performance on practice tests and other data can be used for early identification of residents at risk of performing poorly on the ABSITE.

Original language: English (US)
Pages (from-to): 361-368
Number of pages: 8
Journal: American journal of surgery
Volume: 211
Issue number: 2
DOIs
State: Published - Feb 1 2016

Keywords

  • ABSITE
  • Medical knowledge
  • Surgical education

ASJC Scopus subject areas

  • Surgery
