Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph
Published in | Transactions of the Association for Computational Linguistics, Vol. 13, pp. 220–248
Main Authors |
Format | Journal Article
Language | English
Published | MIT Press (255 Main Street, 9th Floor, Cambridge, Massachusetts 02142, USA), 19.03.2025
Summary | The rapid proliferation of large language models (LLMs) has stimulated researchers to seek effective and efficient approaches to deal with LLM hallucinations and low-quality outputs. Uncertainty quantification (UQ) is a key element of machine learning applications in dealing with such challenges. However, research to date on UQ for LLMs has been fragmented in terms of techniques and evaluation methodologies. In this work, we address this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and offers an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
Bibliography | 2025
ISSN | 2307-387X
DOI | 10.1162/tacl_a_00737
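
The summary above refers to whitebox UQ baselines and confidence normalization. As a purely illustrative sketch (not LM-Polygraph's own API), the snippet below computes two commonly used whitebox baselines for a single generation with a Hugging Face causal LM: length-normalized sequence log-probability and mean token entropy. The model name `gpt2`, the prompt, and the generation settings are placeholder assumptions.

```python
# Minimal sketch of two common whitebox UQ baselines for a causal LM.
# This is illustrative only and does not use LM-Polygraph's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of Kazakhstan is"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=8,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,               # keep per-step logits
        pad_token_id=tokenizer.eos_token_id,
    )

# One logits tensor per generated step, each of shape (batch, vocab).
step_logits = torch.stack(out.scores, dim=0)        # (steps, 1, vocab)
log_probs = torch.log_softmax(step_logits, dim=-1)  # per-step log-probabilities

# Token ids actually generated (sequences include the prompt for decoder-only models).
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
token_log_probs = log_probs[torch.arange(len(gen_tokens)), 0, gen_tokens]

# Baseline 1: length-normalized sequence log-probability (higher = more confident).
seq_log_prob = token_log_probs.mean().item()

# Baseline 2: mean token entropy over generated steps (higher = more uncertain).
entropies = -(log_probs.exp() * log_probs).sum(dim=-1)  # (steps, 1)
mean_entropy = entropies.mean().item()

print(tokenizer.decode(gen_tokens, skip_special_tokens=True))
print(f"Length-normalized log-prob: {seq_log_prob:.3f}")
print(f"Mean token entropy:         {mean_entropy:.3f}")
```

In the framing of the summary, raw scores such as these would additionally be passed through a confidence normalization step to obtain interpretable values; that step is omitted from this sketch.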