Integrating ChatGPT in Orthopedic Education for Medical Undergraduates: Randomized Controlled Trial
Published in | Journal of Medical Internet Research, Vol. 26, No. 6, p. e57037 |
Main Authors | Gan, Wenyi; Ouyang, Jianfeng; Li, Hua; Xue, Zhaowen; Zhang, Yiming; Dong, Qiu; Huang, Jiadong; Zheng, Xiaofei; Zhang, Yiyi |
Format | Journal Article |
Language | English |
Published | Canada: Journal of Medical Internet Research (JMIR Publications), 20.08.2024 |
ISSN | 1439-4456 (print); 1438-8871 (electronic) |
DOI | 10.2196/57037 |
Abstract | Background: ChatGPT is a natural language processing model developed by OpenAI that can be iteratively updated and optimized to accommodate the changing and complex requirements of human verbal communication.
Objective: The study aimed to evaluate ChatGPT's accuracy in answering orthopedics-related multiple-choice questions (MCQs) and to assess its short-term effects as a learning aid through a randomized controlled trial. In addition, long-term effects on student performance in other subjects were measured using final examination results.
Methods: We first evaluated ChatGPT's accuracy in answering MCQs pertaining to orthopedics across various question formats. Then, 129 undergraduate medical students participated in a randomized controlled study in which the ChatGPT group used ChatGPT as a learning tool, while the control group was prohibited from using artificial intelligence software to support learning. Following a 2-week intervention, the 2 groups' understanding of orthopedics was assessed by an orthopedics test, and variations in the 2 groups' performance in other disciplines were noted through a follow-up at the end of the semester.
Results: ChatGPT-4.0 answered 1051 orthopedics-related MCQs with a 70.60% (742/1051) accuracy rate, including 71.8% (237/330) accuracy for A1 MCQs, 73.7% (330/448) for A2 MCQs, 70.2% (92/131) for A3/4 MCQs, and 58.5% (83/142) for case analysis MCQs. As of April 7, 2023, a total of 129 individuals had enrolled in the experiment; 19 withdrew at various phases, so by July 1, 2023, 110 individuals had completed the trial and all follow-up work. After the short-term intervention in the students' learning approach, the ChatGPT group answered more questions correctly than the control group on the orthopedics test (ChatGPT group: mean 141.20, SD 26.68; control group: mean 130.80, SD 25.56; P=.04), particularly on A1 (ChatGPT group: mean 46.57, SD 8.52; control group: mean 42.18, SD 9.43; P=.01), A2 (ChatGPT group: mean 60.59, SD 10.58; control group: mean 56.66, SD 9.91; P=.047), and A3/4 MCQs (ChatGPT group: mean 19.57, SD 5.48; control group: mean 16.46, SD 4.58; P=.002). At the end of the semester, the ChatGPT group also performed better than the control group on final examinations in surgery (ChatGPT group: mean 76.54, SD 9.79; control group: mean 72.54, SD 8.11; P=.02) and obstetrics and gynecology (ChatGPT group: mean 75.98, SD 8.94; control group: mean 72.54, SD 8.66; P=.04). (These figures are spot-checked in the sketch following the abstract.)
Conclusions: ChatGPT answers orthopedics-related MCQs accurately, and students using it excel in both short-term and long-term assessments. Our findings strongly support ChatGPT's integration into medical education, enhancing contemporary instructional methods.
Trial Registration: Chinese Clinical Trial Registry ChiCTR2300071774; https://www.chictr.org.cn/hvshowproject.html?id=225740&v=1.0 |
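The arithmetic in the Results above can be spot-checked from the reported summary statistics alone. A minimal sketch, assuming SciPy is available, an even 55/55 split of the 110 completers between arms (per-group sizes are not reported in the abstract), and a two-sided Student t-test with equal variances (the test used is likewise not stated):

```python
# Spot-check of the summary statistics reported in the Results section.
# ASSUMPTIONS (not stated in the abstract): 55 students per arm among the
# 110 completers, and a two-sided equal-variance Student t-test.
from scipy.stats import ttest_ind_from_stats

# Reported MCQ accuracy counts as (correct, total).
accuracy = {
    "overall": (742, 1051),
    "A1": (237, 330),
    "A2": (330, 448),
    "A3/4": (92, 131),
    "case analysis": (83, 142),
}
for fmt, (correct, total) in accuracy.items():
    print(f"{fmt}: {correct}/{total} = {100 * correct / total:.2f}%")

# Reported group summaries as ((mean, SD) ChatGPT, (mean, SD) control).
comparisons = {
    "orthopedics test": ((141.20, 26.68), (130.80, 25.56)),
    "A1 MCQs": ((46.57, 8.52), (42.18, 9.43)),
    "A2 MCQs": ((60.59, 10.58), (56.66, 9.91)),
    "A3/4 MCQs": ((19.57, 5.48), (16.46, 4.58)),
    "surgery final": ((76.54, 9.79), (72.54, 8.11)),
    "ob/gyn final": ((75.98, 8.94), (72.54, 8.66)),
}
n_chatgpt = n_control = 55  # assumed equal arms among 110 completers
for outcome, ((m1, s1), (m2, s2)) in comparisons.items():
    t, p = ttest_ind_from_stats(m1, s1, n_chatgpt, m2, s2, n_control)
    print(f"{outcome}: t = {t:.2f}, P = {p:.3f}")
```

Under these assumptions the subgroup counts sum correctly to 742/1051, and the overall orthopedics-test comparison gives t of about 2.09 and P of about .04, consistent with the reported value; the exact subscale P values shift slightly with the true group sizes.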
Audience | Academic |
Author | Zhang, Yiyi; Li, Hua; Xue, Zhaowen; Zheng, Xiaofei; Dong, Qiu; Huang, Jiadong; Zhang, Yiming; Ouyang, Jianfeng; Gan, Wenyi |
AuthorAffiliation | 1. The First Clinical Medical College of Jinan University, The First Affiliated Hospital of Jinan University, Guangzhou, China; 2. Department of Joint Surgery and Sports Medicine, Zhuhai People's Hospital (Zhuhai Hospital Affiliated With Jinan University), Zhuhai, Guangdong, China; 3. Department of Orthopaedics, Beijing Jishuitan Hospital, Beijing, China; 4. Jinan University-University of Birmingham Joint Institute, Jinan University, Guangzhou, China |
Author (ORCID) | 1. Gan, Wenyi (0000-0003-1886-8062); 2. Ouyang, Jianfeng (0000-0003-2708-8500); 3. Li, Hua (0000-0001-8481-0235); 4. Xue, Zhaowen (0009-0001-5807-9810); 5. Zhang, Yiming (0000-0003-3366-9790); 6. Dong, Qiu (0000-0002-8904-627X); 7. Huang, Jiadong (0009-0002-9914-9667); 8. Zheng, Xiaofei (0000-0001-7502-6131); 9. Zhang, Yiyi (0009-0001-8507-5794) |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/39163598 (MEDLINE/PubMed) |
CitedBy_id | 10.1007/s11695-025-07794-9; 10.1097/JS9.0000000000002223; 10.1016/j.compedu.2024.105224; 10.1097/JS9.0000000000002130 |
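These CitedBy DOIs were recorded in Summon's flattened form (e.g. crossref_primary_10_1016_j_compedu_2024_105224), where ".", "-", and "/" all collapse to "_", so the inverse mapping is not unique. A heuristic sketch of the recovery, assuming the third-party `requests` package, network access, and Python 3.10+; illustration only, not the provider's actual pipeline:

```python
# Heuristic recovery of DOIs from Summon-flattened tokens such as
# "crossref_primary_10_1016_j_compedu_2024_105224". Each underscore in the
# suffix may have been ".", "-", or "/", so candidates are enumerated and
# doi.org is asked which one actually resolves.
from itertools import product

import requests

def candidates(token: str):
    parts = token.removeprefix("crossref_primary_").split("_")
    prefix = f"{parts[0]}.{parts[1]}"  # DOI registrant, e.g. "10.1016"
    suffix_parts = parts[2:]
    # Try every combination of separators for the remaining underscores.
    for seps in product("./-", repeat=len(suffix_parts) - 1):
        suffix = suffix_parts[0]
        for sep, part in zip(seps, suffix_parts[1:]):
            suffix += sep + part
        yield f"{prefix}/{suffix}"

def resolve(token: str) -> str | None:
    for doi in candidates(token):
        # doi.org redirects (30x) for registered DOIs and 404s otherwise.
        r = requests.head(f"https://doi.org/{doi}", allow_redirects=False)
        if r.status_code in (301, 302, 303):
            return doi
    return None

print(resolve("crossref_primary_10_1016_j_compedu_2024_105224"))
# expected output: 10.1016/j.compedu.2024.105224
```

The candidate count grows as 3^k in the number of suffix underscores, which is harmless at this scale; if more than one candidate resolves, the token is genuinely ambiguous and needs manual checking.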
ContentType | Journal Article |
Copyright | Wenyi Gan, Jianfeng Ouyang, Hua Li, Zhaowen Xue, Yiming Zhang, Qiu Dong, Jiadong Huang, Xiaofei Zheng, Yiyi Zhang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 20.08.2024. COPYRIGHT 2024 Journal of Medical Internet Research |
DOI | 10.2196/57037 |
Discipline | Medicine; Library & Information Science; Education |
EISSN | 1438-8871 |
ExternalDocumentID | DOAJ: a915d4bc318c4338b90c87107266efbe; PMCID: PMC11372336; Gale: A805528002; PMID: 39163598; Crossref: 10.2196/57037 |
Genre | Randomized Controlled Trial; Journal Article |
GeographicLocations | China |
ISSN | 1439-4456 (print); 1438-8871 (electronic) |
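Both ISSNs retained above carry valid check digits, which is how the duplicated values in the raw record were confirmed to be the same two identifiers. A minimal validation sketch using the standard ISO 3297 mod-11 weighting (plain Python, no external packages):

```python
# ISSN check-digit validation (ISO 3297): weight the first seven digits
# 8, 7, ..., 2, sum them; the check digit is (11 - sum % 11) % 11, with
# a value of 10 written as "X".
def issn_valid(issn: str) -> bool:
    digits = issn.replace("-", "")
    if len(digits) != 8:
        return False
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1)))
    check = (11 - total % 11) % 11
    return digits[7].upper() == ("X" if check == 10 else str(check))

for issn in ("1439-4456", "1438-8871"):
    print(issn, issn_valid(issn))  # both print True
```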
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
Keywords | learning aid; medical education; natural language processing; ChatGPT; orthopedics; large language model; artificial intelligence; randomized controlled trial |
Language | English |
License | Wenyi Gan, Jianfeng Ouyang, Hua Li, Zhaowen Xue, Yiming Zhang, Qiu Dong, Jiadong Huang, Xiaofei Zheng, Yiyi Zhang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 20.08.2024. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included. |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.2196/57037 |
PMID | 39163598 |
PublicationDate | 2024-08-20 |
PublicationPlace | Toronto, Canada |
PublicationTitle | Journal of medical Internet research |
PublicationTitleAlternate | J Med Internet Res |
PublicationYear | 2024 |
Publisher | Journal of Medical Internet Research; JMIR Publications |
SourceID | doaj; pubmedcentral; proquest; gale; pubmed; crossref |
SourceType | Open Website; Open Access Repository; Aggregation Database; Index Database; Enrichment Source |
StartPage | e57037 |
SubjectTerms | Academic achievement; Analysis; Clinical trials; College students; Computational linguistics; Data mining; Education; Education, Medical, Undergraduate - methods; Educational Measurement - methods; Female; Generative Artificial Intelligence; Humans; Language processing; Machine learning; Male; Medical colleges; Medical personnel; Medical students; Methods; Natural language interfaces; Natural Language Processing; Original Paper; Orthopedics - education; Students, Medical - statistics & numerical data; Training; Young Adult |
Title | Integrating ChatGPT in Orthopedic Education for Medical Undergraduates: Randomized Controlled Trial |
URI | https://www.ncbi.nlm.nih.gov/pubmed/39163598 https://www.proquest.com/docview/3095175330 https://pubmed.ncbi.nlm.nih.gov/PMC11372336 https://doaj.org/article/a915d4bc318c4338b90c87107266efbe |
Volume | 26 |
hasFullText | 1 |
inHoldings | 1 |