xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection

Bibliographic Details
Published in: Transactions of the Association for Computational Linguistics, Vol. 12, pp. 979-995
Main Authors: Guerreiro, Nuno M.; Rei, Ricardo; van Stigt, Daan; Coheur, Luísa; Colombo, Pierre; Martins, André F. T.
Format: Journal Article
Language: English
Published: Cambridge, Massachusetts: MIT Press, 04.09.2024

More Information
Summary: Widely used learned metrics for machine translation evaluation, such as COMET and BLEURT, estimate the quality of a translation hypothesis by providing a single sentence-level score. As such, they offer little insight into translation errors (e.g., what the errors are and what their severity is). On the other hand, generative large language models (LLMs) are amplifying the adoption of more granular strategies to evaluation, attempting to detail and categorize translation errors. In this work, we introduce xCOMET, an open-source learned metric designed to bridge the gap between these approaches. xCOMET integrates both sentence-level evaluation and error span detection capabilities, exhibiting state-of-the-art performance across all types of evaluation (sentence-level, system-level, and error span detection). Moreover, it does so while highlighting and categorizing error spans, thus enriching the quality assessment. We also provide a robustness analysis with stress tests, and show that xCOMET is largely capable of identifying localized critical errors and hallucinations.
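
The metric described in the summary is distributed through Unbabel's open-source COMET toolkit. As a minimal sketch of how one might score translations and inspect detected error spans (assuming the unbabel-comet Python package and the Unbabel/XCOMET-XL checkpoint on Hugging Face; the exact output fields may differ between releases):

    from comet import download_model, load_from_checkpoint

    # Download the xCOMET checkpoint (assumed model ID; the Hugging Face
    # model license may need to be accepted first) and load it for inference.
    model_path = download_model("Unbabel/XCOMET-XL")
    model = load_from_checkpoint(model_path)

    # Each sample pairs a source, a machine translation, and (optionally)
    # a reference translation.
    data = [{
        "src": "Elle a posé le livre sur la table.",
        "mt": "She put the book in the table.",
        "ref": "She placed the book on the table.",
    }]

    output = model.predict(data, batch_size=8, gpus=0)
    print(output.scores)  # sentence-level quality scores
    # For xCOMET models, detected error spans with severity labels (e.g.,
    # "minor", "major", "critical") are reported alongside the scores; here
    # they are assumed to be exposed via the prediction metadata.
    print(output.metadata.error_spans)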
Bibliography: 2024
ISSN: 2307-387X
DOI: 10.1162/tacl_a_00683