GPT-4 Artificial Intelligence Model Outperforms ChatGPT, Medical Students, and Neurosurgery Residents on Neurosurgery Written Board-Like Questions

Bibliographic Details
Published in World Neurosurgery Vol. 179; pp. e160 - e165
Main Authors Guerra, Gage A., Hofmann, Hayden, Sobhani, Sina, Hofmann, Grady, Gomez, David, Soroudi, Daniel, Hopkins, Benjamin S., Dallas, Jonathan, Pangal, Dhiraj J., Cheok, Stephanie, Nguyen, Vincent N., Mack, William J., Zada, Gabriel
Format Journal Article
Language English
Published United States: Elsevier Inc., 01.11.2023
Summary: Artificial intelligence (AI) and machine learning have transformed health care with applications in various specialized fields. Neurosurgery can benefit from artificial intelligence in surgical planning, predicting patient outcomes, and analyzing neuroimaging data. GPT-4, an updated language model with additional training parameters, has exhibited exceptional performance on standardized exams. This study examines GPT-4’s competence on neurosurgical board-style questions, comparing its performance with that of medical students and residents, to explore its potential in medical education and clinical decision-making. GPT-4’s performance was examined on 643 Congress of Neurological Surgeons Self-Assessment Neurosurgery Exam (SANS) board-style questions from various neurosurgery subspecialties. Of these, 477 were text-based and 166 contained images. GPT-4 refused to answer 52 questions that contained no text. The remaining 591 questions were input into GPT-4, and its performance was evaluated based on first-time responses. Raw scores were analyzed across subspecialties and question types and then compared with previously reported ChatGPT performance against SANS users, medical students, and neurosurgery residents. GPT-4 attempted 91.9% of SANS questions and achieved 76.6% accuracy. The model’s accuracy increased to 79.0% for text-only questions. GPT-4 outperformed ChatGPT (P < 0.001) and scored highest in the pain/peripheral nerve category (84%) and lowest in the spine category (73%). It exceeded the performance of medical students (26.3%), neurosurgery residents (61.5%), and the national average of SANS users (69.3%) across all categories. GPT-4 significantly outperformed medical students, neurosurgery residents, and the national average of SANS users. The model’s accuracy suggests potential applications in educational settings and clinical decision-making, enhancing provider efficiency and improving patient care.
ISSN:1878-8750
1878-8769
DOI:10.1016/j.wneu.2023.08.042