Use of a large language model with instruction‐tuning for reliable clinical frailty scoring

Bibliographic Details
Published in: Journal of the American Geriatrics Society (JAGS), Vol. 72, no. 12, pp. 3849–3854
Main Authors: Kee, Xiang Lee Jamie; Sng, Gerald Gui Ren; Lim, Daniel Yan Zheng; Tung, Joshua Yi Min; Abdullah, Hairil Rizal; Chowdury, Anupama Roy
Format: Journal Article
Language: English
Published: Hoboken, USA: John Wiley & Sons, Inc, 01.12.2024
Wiley Subscription Services, Inc

Summary:
Background: Frailty is an important predictor of health outcomes, characterized by increased vulnerability due to physiological decline. The Clinical Frailty Scale (CFS) is commonly used for frailty assessment but may be influenced by rater bias. Artificial intelligence (AI), particularly large language models (LLMs), offers a promising method for efficient and reliable frailty scoring.
Methods: The study used seven standardized patient scenarios to evaluate the consistency and reliability of CFS scoring by OpenAI's GPT-3.5-turbo model. Two methods were tested: a basic prompt and an instruction-tuned prompt incorporating the CFS definition, a directive for accurate responses, and temperature control. Score distributions were compared using the Mann–Whitney U test, inter-rater reliability was assessed with Fleiss' Kappa, and the outputs were compared with historic human scores of the same scenarios.
Results: The LLM's median scores were similar to those of human raters, differing by no more than one point. Significant differences in score distributions between the basic and instruction-tuned prompts were observed in five of seven scenarios. The instruction-tuned prompt showed high inter-rater reliability (Fleiss' Kappa of 0.887) and produced consistent responses in all scenarios. Scoring was more difficult in scenarios with less explicit information on activities of daily living (ADLs).
Conclusions: This study demonstrates the potential of LLMs to score clinical frailty consistently and with high reliability, and shows that prompt engineering via instruction-tuning can be a simple but effective approach to optimizing LLMs for healthcare applications. The LLM may overestimate frailty scores when less information about ADLs is provided, possibly because it is less prone to implicit assumptions and extrapolation than human raters. Future research could explore the integration of LLMs into clinical research and frailty-related outcome prediction.
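The reliability result reported above (Fleiss' Kappa of 0.887 for the instruction-tuned prompt) treats repeated LLM runs as independent raters. As an illustration of the statistic only, here is a minimal pure-Python sketch of Fleiss' Kappa; the paper does not publish its analysis code, and the rating table in the usage example is hypothetical:

```python
from typing import Sequence

def fleiss_kappa(table: Sequence[Sequence[int]]) -> float:
    """Fleiss' Kappa for a subjects-by-categories count table.

    table[i][j] = number of raters assigning subject i to category j.
    Every subject must be rated by the same number of raters.
    """
    n_subjects = len(table)
    n_raters = sum(table[0])
    # Per-subject agreement: proportion of rater pairs that agree.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in table
    ]
    p_bar = sum(p_i) / n_subjects
    # Chance agreement from the marginal category proportions.
    n_categories = len(table[0])
    totals = [sum(row[j] for row in table) for j in range(n_categories)]
    p_j = [t / (n_subjects * n_raters) for t in totals]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: five scenarios, three repeated LLM runs as raters,
# CFS scores collapsed into three categories for illustration.
ratings = [
    [3, 0, 0],  # all three runs chose category 1
    [0, 3, 0],
    [0, 3, 0],
    [0, 0, 3],
    [1, 2, 0],  # one run disagreed
]
print(round(fleiss_kappa(ratings), 3))  # → 0.779
```

Perfect agreement across all subjects yields a kappa of 1.0, which is the regime the instruction-tuned prompt approached in the study.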
Bibliography: Xiang Lee Jamie Kee and Gerald Gui Ren Sng contributed equally to this work.
ISSN: 0002-8614
1532-5415
DOI: 10.1111/jgs.19114