Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard

Bibliographic Details
Published in European Archives of Oto-Rhino-Laryngology, Vol. 281, No. 2, pp. 985-993
Main Authors Cheong, Ryan Chin Taw, Unadkat, Samit, McNeillis, Venkata, Williamson, Andrew, Joseph, Jonathan, Randhawa, Premjit, Andrews, Peter, Paleri, Vinidh
Format Journal Article
Language English
Published Berlin/Heidelberg: Springer Berlin Heidelberg, 01.02.2024
Summary:
Purpose: To perform the first head-to-head comparative evaluation of patient education material for obstructive sleep apnoea (OSA) generated by two artificial intelligence chatbots, ChatGPT and its primary rival, Google Bard.
Methods: Fifty frequently asked questions on obstructive sleep apnoea in English were extracted from the patient information webpages of four major sleep organizations and categorized as input prompts. ChatGPT and Google Bard responses were independently rated using the Patient Education Materials Assessment Tool-Printable (PEMAT-P) Auto-Scoring Form by two otolaryngologists, each holding a Fellowship of the Royal College of Surgeons (FRCS) and with a special interest in sleep medicine and surgery. Responses were subjectively screened for any incorrect or dangerous information as a secondary outcome. The Flesch-Kincaid Calculator was used to evaluate the readability of responses from both ChatGPT and Google Bard.
Results: A total of 46 questions were curated and categorized into three domains: condition (n = 14), investigation (n = 9) and treatment (n = 23). Understandability scores for ChatGPT versus Google Bard were: condition 90.86% vs. 76.32% (p < 0.001); investigation 89.94% vs. 71.67% (p < 0.001); treatment 90.78% vs. 73.74% (p < 0.001). Actionability scores for ChatGPT versus Google Bard were: condition 77.14% vs. 51.43% (p < 0.001); investigation 72.22% vs. 54.44% (p = 0.05); treatment 73.04% vs. 54.78% (p = 0.002). The mean Flesch-Kincaid Grade Level was 9.0 for ChatGPT and 5.9 for Google Bard. No incorrect or dangerous information was identified in any of the generated responses from either ChatGPT or Google Bard.
Conclusion: Evaluation of ChatGPT and Google Bard patient education material for OSA indicates that the former offers superior information across several domains.
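For readers curious about the two metrics above: the Flesch-Kincaid Grade Level is a fixed formula over word, sentence, and syllable counts (FKGL = 0.39 * words/sentence + 11.8 * syllables/word - 15.59), and a PEMAT-P score is simply the percentage of applicable checklist items rated "Agree". The sketch below is a minimal Python illustration of both calculations, not the Flesch-Kincaid Calculator or scoring form the authors used; the function names and the regex-based syllable heuristic are assumptions for illustration only.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per run of consecutive vowels.
    # Real readability tools use dictionary lookups; this is an assumption.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

def pemat_percentage(agree_items: int, applicable_items: int) -> float:
    # PEMAT scoring: items rated "Agree" divided by all applicable items
    # (N/A items excluded), expressed as a percentage.
    return 100.0 * agree_items / applicable_items

# Hypothetical chatbot-style answer about OSA, for demonstration only.
answer = ("Obstructive sleep apnoea happens when the airway narrows during sleep. "
          "A CPAP machine keeps the airway open with gentle air pressure.")
print(f"FKGL: {flesch_kincaid_grade(answer):.1f}")
print(f"PEMAT-P understandability: {pemat_percentage(12, 14):.1f}%")
```

A higher FKGL corresponds to a higher US school grade, so the reported means (9.0 for ChatGPT vs. 5.9 for Google Bard) mean Bard's responses were easier to read even though ChatGPT's scored higher on PEMAT-P understandability and actionability.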
ISSN: 0937-4477
eISSN: 1434-4726
DOI: 10.1007/s00405-023-08319-9