Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation

Bibliographic Details
Published in: npj Mental Health Research, Vol. 3, No. 1, p. 12
Main Authors: Stade, Elizabeth C.; Stirman, Shannon Wiltsey; Ungar, Lyle H.; Boland, Cody L.; Schwartz, H. Andrew; Yaden, David B.; Sedoc, João; DeRubeis, Robert J.; Willer, Robb; Eichstaedt, Johannes C.
Format: Journal Article
Language: English
Published: England: Springer Nature B.V., 02.04.2024 (Nature Publishing Group UK; Nature Portfolio)
Summary: Large language models (LLMs) such as OpenAI's GPT-4 (which powers ChatGPT) and Google's Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed while highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.
ISSN: 2731-4251
DOI: 10.1038/s44184-024-00056-z