Chatbot Performance in Defining and Differentiating Palliative Care, Supportive Care, Hospice Care


Bibliographic Details
Published in: Journal of Pain and Symptom Management, Vol. 67, No. 5, pp. e381–e391
Main Authors: Kim, Min Ji; Admane, Sonal; Chang, Yuchieh Kathryn; Shih, Kao-swi Karina; Reddy, Akhila; Tang, Michael; De La Cruz, Maxine; Taylor, Terry Pham; Bruera, Eduardo; Hui, David
Format: Journal Article
Language: English
Published: United States: Elsevier Inc., 01.05.2024

Summary: Artificial intelligence (AI) chatbot platforms are increasingly used by patients as sources of information. However, there are limited data on the performance of these platforms, especially regarding palliative care terms. We evaluated the accuracy, comprehensiveness, reliability, and readability of three AI platforms in defining and differentiating "palliative care," "supportive care," and "hospice care." We asked ChatGPT, Microsoft Bing Chat, and Google Bard to define and differentiate "palliative care," "supportive care," and "hospice care" and to provide three references. Outputs were randomized and assessed by six blinded palliative care physicians using 0–10 scales (10 = best) for accuracy, comprehensiveness, and reliability. Readability was assessed using Flesch-Kincaid Grade Level and Flesch Reading Ease scores. The mean (SD) accuracy scores for ChatGPT, Bard, and Bing Chat were 9.1 (1.3), 8.7 (1.5), and 8.2 (1.7), respectively; for comprehensiveness, the scores for the three platforms were 8.7 (1.5), 8.1 (1.9), and 5.6 (2.0), respectively; for reliability, the scores were 6.3 (2.5), 3.2 (3.1), and 7.1 (2.4), respectively. Despite generally high accuracy, we identified some major errors (e.g., Bard stated that supportive care had "the goal of prolonging life or even achieving a cure"). We found several major omissions, particularly with Bing Chat (e.g., no mention of interdisciplinary teams in palliative care or hospice care). References were often unreliable. Readability scores did not meet recommended levels for patient educational materials. We identified important concerns regarding the accuracy, comprehensiveness, reliability, and readability of outputs from AI platforms. Further research is needed to improve their performance.
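The readability metrics named in the summary are computed from three text counts: words, sentences, and syllables. As a minimal sketch of how such scores are derived (using the standard published Flesch formulas; the study does not describe its counting tooling, and syllable counting in particular is assumed to be handled upstream):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher is easier; ~60-70 is plain English.

    Standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    """
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade required.

    Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    """
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


# Illustrative counts (hypothetical, not taken from the study):
# 100 words, 5 sentences, 150 syllables.
fre = flesch_reading_ease(100, 5, 150)    # 59.635 -> fairly difficult
fkgl = flesch_kincaid_grade(100, 5, 150)  # 9.91 -> ~10th-grade level
```

Patient-education guidance commonly targets roughly a 6th-grade reading level, which is the benchmark the chatbot outputs failed to meet.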
ISSN: 0885-3924, 1873-6513
DOI: 10.1016/j.jpainsymman.2024.01.008