ChatGPT-Generated Differential Diagnosis Lists for Complex Case–Derived Clinical Vignettes: Diagnostic Accuracy Evaluation
Published in | JMIR Medical Informatics, Vol. 11, p. e48808 |
Main Authors | Takanobu Hirosawa, Ren Kawamura, Yukinori Harada, Kazuya Mizuta, Kazuki Tokumasu, Yuki Kaji, Tomoharu Suzuki, Taro Shimizu |
Format | Journal Article |
Language | English |
Published | Toronto: JMIR Publications, 09.10.2023 |
Abstract |
Background: The diagnostic accuracy of differential diagnoses generated by artificial intelligence chatbots, including ChatGPT models, for complex clinical vignettes derived from general internal medicine (GIM) department case reports is unknown.
Objective: This study aims to evaluate the accuracy of the differential diagnosis lists generated by both third-generation ChatGPT (ChatGPT-3.5) and fourth-generation ChatGPT (ChatGPT-4) using case vignettes from case reports published by the Department of GIM of Dokkyo Medical University Hospital, Japan.
Methods: We searched PubMed for case reports. Upon identification, physicians selected diagnostic cases, determined the final diagnosis, and transformed them into clinical vignettes. Physicians entered the clinical vignettes into the ChatGPT-3.5 and ChatGPT-4 prompts to generate the top 10 differential diagnoses. The ChatGPT models were not specially trained or further reinforced for this task. Three GIM physicians from other medical institutions created differential diagnosis lists by reading the same clinical vignettes. We measured the rate of correct diagnosis within the top 10 differential diagnosis lists, within the top 5 differential diagnosis lists, and as the top diagnosis.
Results: In total, 52 case reports were analyzed. The rates of correct diagnosis by ChatGPT-4 within the top 10 differential diagnosis lists, top 5 differential diagnosis lists, and top diagnosis were 83% (43/52), 81% (42/52), and 60% (31/52), respectively. The corresponding rates for ChatGPT-3.5 were 73% (38/52), 65% (34/52), and 42% (22/52). The rates of correct diagnosis by ChatGPT-4 were comparable to those by physicians within the top 10 differential diagnosis lists (43/52, 83% vs 39/52, 75%; P=.47), within the top 5 differential diagnosis lists (42/52, 81% vs 35/52, 67%; P=.18), and for the top diagnosis (31/52, 60% vs 26/52, 50%; P=.43); none of the differences was significant. The ChatGPT models' diagnostic accuracy did not vary significantly by open access status or publication date (before 2011 vs 2022).
Conclusions: This study demonstrates the potential diagnostic accuracy of differential diagnosis lists generated using ChatGPT-3.5 and ChatGPT-4 for complex clinical vignettes from case reports published by the GIM department. The rates of correct diagnoses within the top 10 and top 5 differential diagnosis lists generated by ChatGPT-4 exceeded 80%. Although derived from a limited data set of case reports from a single department, our findings highlight the potential utility of ChatGPT-4 as a supplementary tool for physicians, particularly those affiliated with a GIM department. Further investigations should explore the diagnostic accuracy of ChatGPT using distinct case materials beyond its training data. Such efforts will provide a comprehensive insight into the role of artificial intelligence in enhancing clinical decision-making. |
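The study's headline metric is a top-k hit rate: the fraction of vignettes whose final diagnosis appears among the first k entries of a ranked differential diagnosis list. The sketch below illustrates that computation only; the function name, vignettes, and diagnosis lists are hypothetical examples, not the authors' data or code.

```python
def top_k_hit_rate(diagnosis_lists, correct_diagnoses, k):
    """Fraction of cases whose correct diagnosis appears in the top k
    entries of the corresponding ranked differential diagnosis list."""
    hits = sum(
        correct in ranked[:k]
        for ranked, correct in zip(diagnosis_lists, correct_diagnoses)
    )
    return hits / len(correct_diagnoses)

# Hypothetical example with 4 vignettes (ranked lists truncated to 3 entries):
lists = [
    ["giant cell arteritis", "Takayasu arteritis", "polymyalgia rheumatica"],
    ["miliary tuberculosis", "sarcoidosis", "lymphoma"],
    ["adult-onset Still disease", "sepsis", "lymphoma"],
    ["IgG4-related disease", "pancreatic cancer", "autoimmune pancreatitis"],
]
truth = ["giant cell arteritis", "lymphoma", "endocarditis", "autoimmune pancreatitis"]

print(top_k_hit_rate(lists, truth, 1))  # 0.25 (only case 1 hits at rank 1)
print(top_k_hit_rate(lists, truth, 3))  # 0.75 (cases 1, 2, and 4)
```

In the study itself, k was 10, 5, and 1, over 52 vignettes (e.g., 43/52 ≈ 83% for ChatGPT-4 at k=10).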
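The reported P values compare two proportions on the same 52 vignettes (e.g., 43/52 for ChatGPT-4 vs 39/52 for physicians within the top 10). The abstract does not name the statistical test, so the following is only an illustrative sketch: a two-sided Fisher exact test on the corresponding 2×2 table, implemented from the hypergeometric distribution. (A paired test such as McNemar's would additionally account for both raters seeing the same cases.)

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p value for the 2x2 table [[a, b], [c, d]]:
    the sum of hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def prob(x):
        # P(top-left cell = x) under the hypergeometric null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Small tolerance guards against float round-off at the boundary.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Top-10 hit counts from the abstract: ChatGPT-4 43/52, physicians 39/52.
# Rows = rater; columns = (correct diagnosis in top 10, not in top 10).
p_value = fisher_exact_two_sided(43, 52 - 43, 39, 52 - 39)
print(p_value)  # well above .05, consistent with "not significant"
```

Whether this reproduces the paper's exact P=.47 depends on which test the authors actually used.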
AuthorAffiliation |
1. Department of Diagnostic and Generalist Medicine, Dokkyo Medical University, Tochigi, Japan
2. Department of General Medicine, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
3. Department of General Medicine, International University of Health and Welfare Narita Hospital, Chiba, Japan
4. Department of Hospital Medicine, Urasoe General Hospital, Okinawa, Japan |
Authors (with ORCID iDs) |
1. Takanobu Hirosawa (0000-0002-3573-8203)
2. Ren Kawamura (0000-0002-5632-3218)
3. Yukinori Harada (0000-0001-6042-7397)
4. Kazuya Mizuta (0009-0000-8822-7127)
5. Kazuki Tokumasu (0000-0001-9513-6864)
6. Yuki Kaji (0000-0002-0267-9876)
7. Tomoharu Suzuki (0000-0002-5557-0516)
8. Taro Shimizu (0000-0002-3788-487X) |
Copyright | 2023. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. Takanobu Hirosawa, Ren Kawamura, Yukinori Harada, Kazuya Mizuta, Kazuki Tokumasu, Yuki Kaji, Tomoharu Suzuki, Taro Shimizu. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 09.10.2023. |
DOI | 10.2196/48808 |
EISSN | 2291-9694 |
ExternalDocumentID | PMC10594139 |
Open Access | Yes |
Peer Reviewed | Yes |
License | This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included. |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.2196/48808 |
PMID | 37812468 |
Subjects | Accuracy; Artificial intelligence; Case reports; Chatbots; Multimedia; Original Paper; Patients; Physicians |
URI |
https://www.proquest.com/docview/2918508072
https://www.proquest.com/docview/2874834664
https://pubmed.ncbi.nlm.nih.gov/PMC10594139
https://doaj.org/article/0c0c9ce78576474486693324942b0aa4 |