Performance of ChatGPT on Stage 1 of the Taiwanese medical licensing exam

Bibliographic Details
Published in: DIGITAL HEALTH, Vol. 10; p. 20552076241233144
Main Authors: Huang, Chao-Hsiung; Hsiao, Han-Jung; Yeh, Pei-Chun; Wu, Kuo-Chen; Kao, Chia-Hung (ORCID 0000-0002-6368-3676; email: dr.kaochiahung@gmail.com)
Format: Journal Article
Language: English
Published: London, England: SAGE Publications, 01.01.2024
Abstract
Introduction: Since its release by OpenAI in November 2022, numerous studies have subjected ChatGPT to various tests to evaluate its performance on medical exams. The objective of this study is to evaluate ChatGPT's accuracy and logical reasoning across all 10 subjects featured in Stage 1 of the Senior Professional and Technical Examinations for Medical Doctors (SPTEMD) in Taiwan, with questions in both Chinese and English.
Methods: We tested ChatGPT-4 on SPTEMD Stage 1, presenting it with multiple-choice questions extracted from three separate tests conducted in February 2022, July 2022, and February 2023. These questions cover 10 subjects: biochemistry and molecular biology, anatomy, embryology and developmental biology, histology, physiology, microbiology and immunology, parasitology, pharmacology, pathology, and public health. We then analyzed the model's accuracy for each subject.
Results: In all three tests, ChatGPT achieved scores surpassing the 60% passing threshold, for an overall average score of 87.8%. Its best performance was in biochemistry, with an average score of 93.8%. Its performance in anatomy, parasitology, and embryology was weaker, and its scores in embryology and parasitology were highly variable.
Conclusion: ChatGPT has the potential not only to facilitate exam preparation but also to improve the accessibility of medical education and to support continuing education for medical professionals. This study has demonstrated ChatGPT's potential competence across the subjects of SPTEMD Stage 1 and suggests that it could be a helpful tool for learning and exam preparation for medical students and professionals.
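The Methods and Results above reduce to a simple computation: per-subject accuracy over graded multiple-choice responses, averaged into an overall score that is compared against the 60% passing threshold. Below is a minimal Python sketch of that computation, assuming a flat record per question; the field names are hypothetical, the paper does not publish its analysis code, and the exam's exact score weighting is not stated here, so the unweighted mean is an assumption.

```python
from collections import defaultdict

def score_by_subject(items):
    """Per-subject accuracy (%) from graded multiple-choice responses."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        total[item["subject"]] += 1
        if item["model_answer"] == item["correct_answer"]:
            correct[item["subject"]] += 1
    return {s: 100.0 * correct[s] / total[s] for s in total}

def overall_average(per_subject):
    """Unweighted mean across subjects; equal weighting is an assumption,
    since the record does not state how subject scores are combined."""
    return sum(per_subject.values()) / len(per_subject)

if __name__ == "__main__":
    # Hypothetical graded items; field names are illustrative only.
    items = [
        {"subject": "biochemistry", "model_answer": "A", "correct_answer": "A"},
        {"subject": "anatomy",      "model_answer": "B", "correct_answer": "C"},
        {"subject": "anatomy",      "model_answer": "D", "correct_answer": "D"},
    ]
    scores = score_by_subject(items)
    print(scores)                           # e.g. {'biochemistry': 100.0, 'anatomy': 50.0}
    print(overall_average(scores) >= 60.0)  # passing-threshold check
```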
Copyright: The Author(s) 2024. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits any use, reproduction, and distribution of the work as published, without adaptation or alteration, provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
DOI: 10.1177/20552076241233144
Discipline: Medicine
EISSN: 2055-2076
Funding: This study is supported in part by China Medical University Hospital (DMR-112-072, DMR-112-073).
Keywords: Taiwanese medical licensing exam; OpenAI; ChatGPT; artificial intelligence; educational measurement
Open Access Link: https://journals.sagepub.com/doi/full/10.1177/20552076241233144
PMID: 38371244
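For readers who want to cross-check the identifiers above programmatically, here is a small sketch using Crossref's public REST API (the /works/{doi} endpoint, standard library only; network access is assumed, and this is an illustration for working with the record, not part of the study):

```python
import json
import urllib.request

DOI = "10.1177/20552076241233144"

def fetch_crossref(doi):
    """Fetch Crossref metadata for a DOI and return the 'message' payload."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

if __name__ == "__main__":
    work = fetch_crossref(DOI)
    print(work["title"][0])                       # article title
    print(work["container-title"][0])             # journal name
    print([a["family"] for a in work["author"]])  # author surnames
```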
Subject Terms: Biochemistry; Chatbots; Cognition & reasoning; Computer applications to medicine. Medical informatics; Embryology; Multiple choice; Original Research Article; Parasitology; Professionals; R858-859.7; Test preparation