Large Language Models to Help Appeal Denied Radiotherapy Services

Bibliographic Details
Published in: JCO Clinical Cancer Informatics, Vol. 8, p. e2400129
Main Authors: Kiser, Kendall J; Waters, Michael; Reckford, Jocelyn; Lundeberg, Christopher; Abraham, Christopher D
Format: Journal Article
Language: English
Published: United States, 01.09.2024
Summary: Large language model (LLM) artificial intelligences may help physicians appeal insurer denials of prescribed medical services, a task that delays patient care and contributes to burnout. We evaluated LLM performance at this task for denials of radiotherapy services. We evaluated generative pretrained transformer 3.5 (GPT-3.5; OpenAI, San Francisco, CA), GPT-4, GPT-4 with internet search functionality (GPT-4web), and GPT-3.5ft. The latter was developed by fine-tuning GPT-3.5 via an OpenAI application programming interface with 53 examples of appeal letters written by radiation oncologists. Twenty test prompts with simulated patient histories were programmatically presented to the LLMs, and output appeal letters were scored by three blinded radiation oncologists for language representation, clinical detail inclusion, clinical reasoning validity, literature citations, and overall readiness for insurer submission. Interobserver agreement between radiation oncologists' scores was moderate or better for all domains (Cohen's kappa coefficients: 0.41-0.91). GPT-3.5, GPT-4, and GPT-4web wrote letters that were on average linguistically clear, summarized provided clinical histories without confabulation, reasoned appropriately, and were scored as useful to expedite the insurance appeal process. GPT-4 and GPT-4web letters demonstrated superior clinical reasoning and were readier for submission than GPT-3.5 letters (P < .001). Fine-tuning increased GPT-3.5ft confabulation and compromised performance compared with other LLMs across all domains (P < .001). All LLMs, including GPT-4web, were poor at supporting clinical assertions with existing, relevant, and appropriately cited primary literature. When prompted appropriately, three commercially available LLMs drafted letters that physicians deemed would expedite appealing insurer denials of radiotherapy services. LLMs may decrease this task's clerical workload on providers. However, LLM performance worsened when fine-tuned with a task-specific, small training data set.
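The interobserver agreement statistic reported in the abstract (Cohen's kappa, 0.41-0.91) measures how often two raters assign the same score after correcting for chance agreement. A minimal sketch of that computation for two raters over the same items is shown below; this is an illustration of the metric itself, not the authors' analysis code, and the function name is hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    rate and p_e is the agreement expected by chance from each rater's
    marginal label frequencies. Assumes the raters do not agree purely by
    chance on every item (p_e < 1).
    """
    assert len(rater_a) == len(rater_b) and rater_a, "paired, nonempty ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the two raters' label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Values of 0.41-0.60 are conventionally read as moderate agreement and 0.61-0.80 as substantial, which is why the reported range of 0.41-0.91 is described as "moderate or better."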
ISSN: 2473-4276
DOI: 10.1200/CCI.24.00129