A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models


Bibliographic Details
Published in arXiv.org
Main Authors Pfohl, Stephen R, Cole-Lewis, Heather, Sayres, Rory, Neal, Darlene, Asiedu, Mercy, Dieng, Awa, Tomasev, Nenad, Rashid, Qazi Mamunur, Azizi, Shekoofeh, Rostamzadeh, Negar, McCoy, Liam G, Celi, Leo Anthony, Liu, Yun, Schaekermann, Mike, Walton, Alanna, Parrish, Alicia, Nagpal, Chirag, Singh, Preeti, Dewitt, Akeiylah, Mansfield, Philip, Prakash, Sushant, Heller, Katherine, Karthikesalingam, Alan, Semturs, Christopher, Barral, Joelle, Corrado, Greg, Matias, Yossi, Smith-Loud, Jamila, Horn, Ivor, Singhal, Karan
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 18.03.2024

Summary: Large language models (LLMs) hold immense promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. In this work, we present resources and methodologies for surfacing biases with the potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions, and we then conduct an empirical case study with Med-PaLM 2, resulting in the largest human evaluation study in this area to date. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases, and EquityMedQA, a collection of seven newly released datasets comprising both manually curated and LLM-generated questions enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and a review of possible biases in Med-PaLM 2 answers to adversarial queries. Through our empirical study, we find that using a collection of datasets curated through a variety of methodologies, coupled with a thorough evaluation protocol that leverages multiple assessment rubric designs and diverse rater groups, surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. We emphasize that while our framework can identify specific forms of bias, it is not sufficient to holistically assess whether the deployment of an AI system promotes equitable health outcomes. We hope the broader community leverages and builds on these tools and methods toward realizing a shared goal of LLMs that promote accessible and equitable healthcare for all.
ISSN: 2331-8422
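
To make the summary's multi-rubric, multi-rater evaluation protocol concrete, the sketch below shows one minimal way such ratings could be recorded and aggregated. This is an illustrative assumption, not the paper's implementation: the rubric names, rater groups, bias dimensions, and aggregation rule are all hypothetical.

```python
# Illustrative sketch only: a minimal data model for ratings collected under
# multiple rubric designs and from multiple rater groups. All names below
# (rubrics, rater groups, dimensions) are hypothetical placeholders.
from dataclasses import dataclass
from collections import defaultdict


@dataclass
class Rating:
    question_id: str
    rater_group: str   # e.g. "physician", "health_equity_expert", "consumer"
    rubric: str        # e.g. "independent", "pairwise", "counterfactual"
    dimension: str     # e.g. "stereotyping", "inaccuracy_for_some_groups"
    bias_present: bool


def flag_rate(ratings):
    """Fraction of ratings flagging bias, keyed by (rubric, rater_group, dimension)."""
    totals, flags = defaultdict(int), defaultdict(int)
    for r in ratings:
        key = (r.rubric, r.rater_group, r.dimension)
        totals[key] += 1
        flags[key] += int(r.bias_present)
    return {k: flags[k] / totals[k] for k in totals}


if __name__ == "__main__":
    ratings = [
        Rating("q1", "physician", "independent", "stereotyping", False),
        Rating("q1", "health_equity_expert", "independent", "stereotyping", True),
        Rating("q1", "consumer", "counterfactual", "inaccuracy_for_some_groups", True),
    ]
    print(flag_rate(ratings))
```

Comparing flag rates across rater groups and rubric designs, as in this toy aggregation, reflects the summary's point that narrower evaluation setups (a single rubric or a single rater pool) may miss biases that other configurations surface.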