Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, Vol. 13, pp. 220–248
Main Authors: Vashurin, Roman; Fadeeva, Ekaterina; Vazhentsev, Artem; Rvanova, Lyudmila; Vasilev, Daniil; Tsvigun, Akim; Petrakov, Sergey; Xing, Rui; Sadallah, Abdelrahman; Grishchenkov, Kirill; Panchenko, Alexander; Baldwin, Timothy; Nakov, Preslav; Panov, Maxim; Shelmanov, Artem
Format: Journal Article
Language: English
Published: MIT Press, 255 Main Street, 9th Floor, Cambridge, Massachusetts 02142, USA, 19.03.2025

Summary: The rapid proliferation of large language models (LLMs) has stimulated researchers to seek effective and efficient approaches to deal with LLM hallucinations and low-quality outputs. Uncertainty quantification (UQ) is a key element of machine learning applications in dealing with such challenges. However, research to date on UQ for LLMs has been fragmented in terms of techniques and evaluation methodologies. In this work, we address this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines and offers an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
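To make the idea of a UQ baseline concrete, here is a minimal sketch of one of the simplest white-box measures of the kind such benchmarks compare: mean token entropy, which averages the entropy of the model's next-token distribution over the generated sequence. The function names and the NumPy implementation below are illustrative only; they do not reflect LM-Polygraph's actual API.

```python
import numpy as np

def token_entropies(logits: np.ndarray) -> np.ndarray:
    """Per-position entropy of the next-token distribution.

    logits: array of shape (seq_len, vocab_size), one row per generated token.
    """
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # H = -sum p * log p, clipping probabilities to avoid log(0).
    return -(probs * np.log(np.clip(probs, 1e-12, None))).sum(axis=-1)

def mean_token_entropy(logits: np.ndarray) -> float:
    """Sequence-level uncertainty score: average entropy over all tokens.

    Higher values indicate the model was, on average, less confident
    while generating, which correlates with lower output quality.
    """
    return float(token_entropies(logits).mean())
```

A near-uniform next-token distribution yields entropy close to log(vocab_size), while a sharply peaked one yields entropy near zero; a benchmark like the one described above would evaluate how well such scores rank low-quality generations across tasks.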
Bibliography: 2025
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00737