Artificial Intelligence‐Generated Patient Education Materials for Helicobacter pylori Infection: A Comparative Analysis

Bibliographic Details
Published in Helicobacter (Cambridge, Mass.) Vol. 29; no. 4; pp. e13115
Main Authors Zeng, Shuyan, Kong, Qingzhou, Wu, Xiaoqi, Ma, Tian, Wang, Limei, Xu, Leiqi, Kou, Guanjun, Zhang, Mingming, Yang, Xiaoyun, Zuo, Xiuli, Li, Yueyue, Li, Yanqing
Format Journal Article
Language English
Published Oxford: Wiley Subscription Services, Inc., 01.07.2024

Summary:
Background: Patient education contributes to improving public awareness of Helicobacter pylori. Large language models (LLMs) offer opportunities to transform patient education. This study aimed to assess the quality of patient education materials (PEMs) generated by LLMs and to compare them with a physician-sourced PEM.
Materials and Methods: A unified instruction to compose a PEM about H. pylori at a sixth-grade reading level, in both English and Chinese, was given to a physician and to five LLMs (Bing Copilot, Claude 3 Opus, Gemini Pro, ChatGPT-4, and ERNIE Bot 4.0). Five gastroenterologists and 50 patients assessed the completeness and comprehensibility of the Chinese PEMs on a three-point Likert scale. Gastroenterologists also evaluated both the English and Chinese PEMs for accuracy and safety, with accuracy rated on a six-point Likert scale. The minimum acceptable scores were 4, 2, and 2 for accuracy, completeness, and comprehensibility, respectively. The Flesch–Kincaid and Simple Measure of Gobbledygook (SMOG) scoring systems were used to assess readability.
Results: Accuracy and comprehensibility were acceptable for English PEMs from all sources, while completeness was not satisfactory. The physician-sourced PEM had the highest mean accuracy score (5.60); LLM-generated English PEMs scored between 4.00 and 5.40. Completeness scores were comparable between the physician-sourced PEM and the LLM-generated English PEMs. Chinese PEMs from LLMs tended to score lower on accuracy and completeness than the English PEMs. From the patients' perspective, the mean completeness scores of the five LLM-generated Chinese PEMs ranged from 1.82 to 2.70, higher than the gastroenterologists' assessment. Comprehensibility was satisfactory for all PEMs. No PEM met the recommended sixth-grade reading level.
Conclusion: LLMs have potential to assist patient education. The accuracy and comprehensibility of LLM-generated PEMs were acceptable, but further optimization to improve completeness and to account for a variety of linguistic contexts is essential to enhance their feasibility.
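For reference, the two readability tools named in the abstract, the Flesch–Kincaid Grade Level and the SMOG index, are computed from sentence, word, and syllable counts. The sketch below shows only the standard published formulas; the study does not state which implementation it used, and the syllable counter here is a rough vowel-group heuristic added purely for illustration.

```python
import math
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels (at least 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch-Kincaid Grade Level formula."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables: int, sentences: int) -> float:
    """Standard SMOG formula; polysyllables are words with 3+ syllables."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

if __name__ == "__main__":
    # Hypothetical sixth-grade-style snippet, not text taken from the study.
    text = ("Helicobacter pylori is a germ that lives in the stomach. "
            "It can cause ulcers. A breath test can find it. "
            "Antibiotics usually cure the infection.")
    sentences = len(re.findall(r"[.!?]+", text))
    word_list = re.findall(r"[A-Za-z]+", text)
    syllable_counts = [count_syllables(w) for w in word_list]
    polysyllables = sum(1 for s in syllable_counts if s >= 3)
    fk = flesch_kincaid_grade(len(word_list), sentences, sum(syllable_counts))
    print(f"Flesch-Kincaid grade: {fk:.1f}")
    print(f"SMOG grade: {smog_grade(polysyllables, sentences):.1f}")
```

Both formulas map text statistics onto an approximate US school grade level, which is what allows every PEM to be compared against the sixth-grade target mentioned in the abstract.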
Bibliography: This study was supported by the Key R&D Program of Shandong Province, China (Major Scientific and Technological Innovation Project) (No. 2021CXGC010506).
Yueyue Li, Xiuli Zuo, and Xiaoyun Yang were co‐corresponding authors for this study.
Shuyan Zeng and Qingzhou Kong contributed equally to this study.
ISSN: 1083-4389
EISSN: 1523-5378
DOI: 10.1111/hel.13115