The Potential of ChatGPT as a Self-Diagnostic Tool in Common Orthopedic Diseases: Exploratory Study
Artificial intelligence (AI) has gained tremendous popularity recently, especially the use of natural language processing (NLP). ChatGPT is a state-of-the-art chatbot capable of creating natural conversations using NLP. The use of AI in medicine can have a tremendous impact on health care delivery....
Published in | Journal of Medical Internet Research Vol. 25; no. 11; p. e47621 |
---|---|
Main Authors | Tomoyuki Kuroiwa, Aida Sarcon, Takuya Ibara, Eriku Yamada, Akiko Yamamoto, Kazuya Tsukamoto, Koji Fujita |
Format | Journal Article |
Language | English |
Published | Canada: JMIR Publications Inc, 15.09.2023 (Journal of Medical Internet Research; editor: Gunther Eysenbach, MD, MPH) |
Abstract | Artificial intelligence (AI) has gained tremendous popularity recently, especially the use of natural language processing (NLP). ChatGPT is a state-of-the-art chatbot capable of creating natural conversations using NLP. The use of AI in medicine can have a tremendous impact on health care delivery. Although some studies have evaluated ChatGPT's accuracy in self-diagnosis, there is no research regarding its precision and the degree to which it recommends medical consultations.
The aim of this study was to evaluate ChatGPT's ability to accurately and precisely self-diagnose common orthopedic diseases, as well as the degree of recommendation it provides for medical consultations.
Over a 5-day course, each of the study authors submitted the same questions to ChatGPT. The conditions evaluated were carpal tunnel syndrome (CTS), cervical myelopathy (CM), lumbar spinal stenosis (LSS), knee osteoarthritis (KOA), and hip osteoarthritis (HOA). Answers were categorized as correct, partially correct, incorrect, or a differential diagnosis. The percentage of correct answers and the reproducibility were calculated. The reproducibility between days and between raters was calculated using the Fleiss κ coefficient. Answers that recommended that the patient seek medical attention were recategorized according to the strength of the recommendation as defined by the study.
The ratios of correct answers were 25/25, 1/25, 24/25, 16/25, and 17/25 for CTS, CM, LSS, KOA, and HOA, respectively. The ratios of incorrect answers were 23/25 for CM and 0/25 for all other conditions. The reproducibility between days was 1.0, 0.15, 0.7, 0.6, and 0.6 for CTS, CM, LSS, KOA, and HOA, respectively. The reproducibility between raters was 1.0, 0.1, 0.64, -0.12, and 0.04 for CTS, CM, LSS, KOA, and HOA, respectively. Among the answers recommending medical attention, the phrases "essential," "recommended," "best," and "important" were used. Specifically, "essential" occurred in 4 out of 125, "recommended" in 12 out of 125, "best" in 6 out of 125, and "important" in 94 out of 125 answers. Additionally, 7 out of the 125 answers did not include a recommendation to seek medical attention.
The accuracy and reproducibility of ChatGPT to self-diagnose five common orthopedic conditions were inconsistent. The accuracy could potentially be improved by adding symptoms that could easily identify a specific location. Only a few answers were accompanied by a strong recommendation to seek medical attention according to our study standards. Although ChatGPT could serve as a potential first step in accessing care, we found variability in accurate self-diagnosis. Given the risk of harm with self-diagnosis without medical follow-up, it would be prudent for an NLP to include clear language alerting patients to seek expert medical opinions. We hope to shed further light on the use of AI in a future clinical study. |
---|---|
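The between-day and between-rater reproducibility figures in the abstract are Fleiss κ coefficients. As a minimal illustrative sketch (not the authors' analysis code; the category labels and helper name are hypothetical), κ for a set of items each rated by the same number of raters can be computed as:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, where each item is a list of
    category labels, one per rater. Every item must have the same number
    of raters (here, e.g., one ChatGPT answer category per day or rater)."""
    n_raters = len(ratings[0])
    categories = sorted({c for item in ratings for c in item})
    # Count matrix: one row per item, one column per category.
    counts = [[Counter(item)[c] for c in categories] for item in ratings]
    n_items = len(counts)
    # Observed per-item agreement P_i, averaged into P_bar.
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items
    # Chance agreement P_e from overall category proportions.
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement across raters yields κ = 1.0, while systematic disagreement drives κ toward or below 0, which is why the CM condition (κ between raters of 0.1) indicates near-chance consistency.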
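The recommendation-strength counts in the results ("essential" in 4/125 answers, "recommended" in 12/125, "best" in 6/125, "important" in 94/125, no recommendation in 7/125) amount to a keyword tally over the answer texts. A hedged sketch of that kind of tally, assuming a simple whole-word match (the function name and sample answers are hypothetical, and the study's exact recategorization rules may differ):

```python
import re

# Phrases mirroring those tallied in the study, strongest first.
STRENGTH_PHRASES = ["essential", "recommended", "best", "important"]

def tally_recommendation_strength(answers):
    """Count, for each strength phrase, how many answers contain it as a
    whole word, plus how many answers contain none of the phrases."""
    counts = {p: 0 for p in STRENGTH_PHRASES}
    no_recommendation = 0
    for text in answers:
        lowered = text.lower()
        hits = [p for p in STRENGTH_PHRASES if re.search(rf"\b{p}\b", lowered)]
        if hits:
            for p in hits:
                counts[p] += 1
        else:
            no_recommendation += 1
    return counts, no_recommendation
```

For example, an answer reading "It is essential to see a doctor" increments the "essential" count, while an answer with none of the phrases increments the no-recommendation tally.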
Audience | Academic |
Author | Takuya Ibara Kazuya Tsukamoto Tomoyuki Kuroiwa Akiko Yamamoto Aida Sarcon Eriku Yamada Koji Fujita |
AuthorAffiliation | 3 Department of Surgery Mayo Clinic Rochester, MN United States 2 Division of Orthopedic Surgery Research Mayo Clinic Rochester, MN United States 1 Department of Orthopaedic and Spinal Surgery Graduate School of Medical and Dental Sciences Tokyo Medical and Dental University Tokyo Japan 4 Department of Functional Joint Anatomy Graduate School of Medical and Dental Sciences Tokyo Medical and Dental University Tokyo Japan 5 Division of Medical Design Innovations Open Innovation Center, Institute of Research Innovation Tokyo Medical and Dental University Tokyo Japan |
Author_xml | – sequence: 1 givenname: Tomoyuki orcidid: 0000-0002-9942-1811 surname: Kuroiwa fullname: Kuroiwa, Tomoyuki – sequence: 2 givenname: Aida orcidid: 0000-0002-2763-878X surname: Sarcon fullname: Sarcon, Aida – sequence: 3 givenname: Takuya orcidid: 0000-0002-0518-1918 surname: Ibara fullname: Ibara, Takuya – sequence: 4 givenname: Eriku orcidid: 0000-0001-8777-9552 surname: Yamada fullname: Yamada, Eriku – sequence: 5 givenname: Akiko orcidid: 0000-0003-3639-8201 surname: Yamamoto fullname: Yamamoto, Akiko – sequence: 6 givenname: Kazuya orcidid: 0000-0003-4927-2149 surname: Tsukamoto fullname: Tsukamoto, Kazuya – sequence: 7 givenname: Koji orcidid: 0000-0003-3733-0188 surname: Fujita fullname: Fujita, Koji |
BackLink | https://cir.nii.ac.jp/crid/1871709542412627200 (view record in CiNii) https://www.ncbi.nlm.nih.gov/pubmed/37713254 (view record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | Tomoyuki Kuroiwa, Aida Sarcon, Takuya Ibara, Eriku Yamada, Akiko Yamamoto, Kazuya Tsukamoto, Koji Fujita. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.09.2023. COPYRIGHT 2023 Journal of Medical Internet Research. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
DBID | RYH AAYXX CITATION CGR CUY CVF ECM EIF NPM ISN 3V. 7QJ 7RV 7X7 7XB 8FI 8FJ 8FK ABUWG AFKRA ALSLI AZQEC BENPR CCPQU CNYFK DWQXO E3H F2A FYUFA GHDGH K9. KB0 M0S M1O NAPCQ PHGZM PHGZT PIMPY PKEHL PPXIY PQEST PQQKQ PQUKI PRINS PRQQA 7X8 5PM DOA |
DOI | 10.2196/47621 |
DatabaseName | CrossRef; MEDLINE (incl. Ovid and full-text editions); PubMed; PubMed Central; DOAJ Directory of Open Access Journals; CiNii Complete; Gale In Context: Canada; Applied Social Sciences Index & Abstracts (ASSIA); Library & Information Science Abstracts (LISA); ProQuest Central and affiliated ProQuest health, nursing, and library science collections
Discipline | Medicine; Library & Information Science
EISSN | 1438-8871 |
ExternalDocumentID | oai_doaj_org_article_e6865865349142d45c1bd6b6876578c71be (DOAJ); PMC10541638 (PubMed Central); A767014945 (Gale); 37713254 (PMID); 10_2196_47621 (DOI 10.2196/47621)
Genre | Journal Article |
ISSN | 1439-4456 (print); 1438-8871 (electronic)
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 11 |
Keywords | chatbot; natural language processing; precision; accuracy; diagnosis; language model; health information; orthopedic disease; artificial intelligence; generative pretrained transformer; AI model; ChatGPT; self-diagnosis
Language | English |
License | Tomoyuki Kuroiwa, Aida Sarcon, Takuya Ibara, Eriku Yamada, Akiko Yamamoto, Kazuya Tsukamoto, Koji Fujita. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.09.2023. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included. |
ORCID | 0000-0003-3639-8201 0000-0002-9942-1811 0000-0003-4927-2149 0000-0001-8777-9552 0000-0003-3733-0188 0000-0002-2763-878X 0000-0002-0518-1918 |
OpenAccessLink | http://journals.scholarsportal.info/openUrl.xqy?doi=10.2196/47621 |
PMID | 37713254 |
PQID | 2917629668 |
PQPubID | 2033121 |
PublicationDate | 2023-09-15 |
PublicationPlace | Toronto, Canada
PublicationTitle | Journal of Medical Internet Research |
PublicationTitleAlternate | J Med Internet Res |
PublicationYear | 2023 |
Publisher | JMIR Publications Inc
StartPage | e47621 |
SubjectTerms | Accuracy; Answers; Arthritis; Artificial Intelligence; Attention; Chatbots; Clinical standards; Communication; Computational linguistics; Computer applications to medicine. Medical informatics; Diagnostic tests; Disease; Health care; Health care delivery; Humans; Knee; Language processing; Lumbar spinal stenosis; Medical care; Medical diagnosis; Medical research; Medicine, Experimental; Multimedia; Musculoskeletal Diseases; Natural language interfaces; Natural Language Processing; Original Paper; Orthopedics; Osteoarthritis; Osteoarthritis, Knee; Pain; Patients; Popularity; Public aspects of medicine; Quality management; R858-859.7; RA1-1270; Repetitive strain injuries; Reproducibility; Reproducibility of Results; Self-diagnosis; Spinal Cord Diseases
Title | The Potential of ChatGPT as a Self-Diagnostic Tool in Common Orthopedic Diseases: Exploratory Study |
URI | https://cir.nii.ac.jp/crid/1871709542412627200 https://www.ncbi.nlm.nih.gov/pubmed/37713254 https://www.proquest.com/docview/2917629668 https://www.proquest.com/docview/2865781941 https://pubmed.ncbi.nlm.nih.gov/PMC10541638 https://doaj.org/article/e6865349142d45c1bd6b6876578c71be |
Volume | 25 |