Accuracy of Commercial Large Language Model (ChatGPT) to Predict the Diagnosis for Prehospital Patients Suitable for Ambulance Transport Decisions: Diagnostic Accuracy Study

Bibliographic Details
Published in: Prehospital Emergency Care, Vol. 29, No. 3, p. 238
Main Authors: Miller, Eric D; Franc, Jeffrey Michael; Hertelendy, Attila J; Issa, Fadi; Hart, Alexander; Woodward, Christina A; Newbury, Bradford; Newbury, Kiera; Mathew, Dana; Whitten-Chung, Kimberly; Bauer, Eric; Voskanyan, Amalia; Ciottone, Gregory R
Format: Journal Article
Language: English
Published: England, 2025
Summary: While ambulance transport decisions guided by artificial intelligence (AI) could be useful, little is known about the accuracy of AI in making patient diagnoses from the prehospital patient care report (PCR). The primary objective of this study was to assess the accuracy of ChatGPT (OpenAI, Inc., San Francisco, CA, USA) in predicting a patient's diagnosis from the PCR, compared with a reference standard assigned by experienced paramedics. The secondary objective was to classify cases in which the AI diagnosis disagreed with the reference standard as paramedic correct, ChatGPT correct, or equally correct. This diagnostic accuracy study used zero-shot prompting and greedy decoding. A convenience sample of PCRs written by paramedic students was analyzed by an untrained ChatGPT-4 model to determine the single most likely diagnosis. The reference standard was provided by an experienced paramedic who reviewed each PCR and gave a three-item differential diagnosis. A trained prehospital professional assessed the ChatGPT diagnosis as concordant or non-concordant with one of the three paramedic diagnoses. If non-concordant, two board-certified emergency physicians independently decided whether the ChatGPT or the paramedic diagnosis was more likely to be correct. ChatGPT-4 diagnosed 78/104 (75.0%) of PCRs correctly (95% confidence interval: 65.3-82.7%). Among the 26 cases of disagreement, the emergency physicians judged the paramedic diagnosis more likely to be correct in 6/26 (23.1%). There was only one case of the 104 (0.96%) in which a transport decision based on the AI-guided diagnosis would have been potentially dangerous to the patient (under-triage). In this study, the overall accuracy of ChatGPT in diagnosing patients from their emergency medical services PCR was 75.0%. In cases where the ChatGPT diagnosis was considered less likely than the paramedic diagnosis, the AI diagnosis was most commonly more critical than the paramedic diagnosis, potentially leading to over-triage. The under-triage rate was less than 1%.
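The headline figures (78/104 correct, with a 95% confidence interval) can be sanity-checked with a short script. The abstract does not state which interval method the authors used, so the Wilson score interval below is an assumption; it produces a range close to, but not identical with, the reported 65.3-82.7%.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion.

    Note: the CI method used in the paper is not stated; Wilson is
    a common choice and is shown here only for illustration.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

correct, total = 78, 104            # concordant ChatGPT-4 diagnoses (from the abstract)
accuracy = correct / total          # 0.75
low, high = wilson_ci(correct, total)
print(f"accuracy = {accuracy:.1%}, 95% CI {low:.1%}-{high:.1%}")
```

Running this gives an accuracy of 75.0% with a Wilson interval of roughly 65.9-82.3%, consistent with the magnitude of the interval reported in the abstract.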
ISSN: 1545-0066
DOI: 10.1080/10903127.2025.2460775