Trained MT Metrics Learn to Cope with Machine-translated References

Bibliographic Details
Published in arXiv.org
Main Authors Vamvas, Jannis, Domhan, Tobias, Trenous, Sony, Sennrich, Rico, Hasler, Eva
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 01.12.2023
Summary: Neural metrics trained on human evaluations of MT tend to correlate well with human judgments, but their behavior is not fully understood. In this paper, we perform a controlled experiment and compare a baseline metric that has not been trained on human evaluations (Prism) to a trained version of the same metric (Prism+FT). Surprisingly, we find that Prism+FT becomes more robust to machine-translated references, which are a notorious problem in MT evaluation. This suggests that the effects of metric training go beyond the intended effect of improving overall correlation with human judgments.
ISSN: 2331-8422