A comparative study of English and Japanese ChatGPT responses to anaesthesia-related medical questions

Bibliographic Details
Published in: BJA open, Vol. 10, p. 100296
Main Authors: Ando, Kazuo, Sato, Masaki, Wakatsuki, Shin, Nagai, Ryotaro, Chino, Kumiko, Kai, Hinata, Sasaki, Tomomi, Kato, Rie, Nguyen, Teresa Phuongtram, Guo, Nan, Sultan, Pervez
Format: Journal Article
Language: English
Published: England: Elsevier Ltd, 01.06.2024

More Information
Summary: The expansion of artificial intelligence (AI) within large language models (LLMs) has the potential to streamline healthcare delivery. Despite the increased use of LLMs, disparities in their performance, particularly across different languages, remain underexplored. This study examines the quality of ChatGPT responses in English and Japanese, specifically to questions related to anaesthesiology. Anaesthesiologists proficient in both languages were recruited as experts in this study. Ten frequently asked questions in anaesthesia were selected and translated for evaluation. Three non-sequential responses from ChatGPT were assessed for content quality (accuracy, comprehensiveness, and safety) and communication quality (understanding, empathy/tone, and ethics) by expert evaluators. Eight anaesthesiologists evaluated the English and Japanese LLM responses. The overall quality for all questions combined was higher for English than for Japanese responses. Content and communication quality were significantly higher in English than in Japanese LLM responses (both P<0.001) across all three responses. Comprehensiveness, safety, and understanding also scored higher in English LLM responses. In all three responses, more than half of the evaluators rated the overall English responses as better than the Japanese responses. English LLM responses to anaesthesia-related frequently asked questions were superior in quality to Japanese responses when assessed by bilingual anaesthesia experts in this report. This study highlights the potential for language-related disparities in healthcare information and the need to improve the quality of AI responses in underrepresented languages. Future studies are needed to explore these disparities in other commonly spoken languages and to compare the performance of different LLMs.
ISSN: 2772-6096
DOI: 10.1016/j.bjao.2024.100296