Understanding and Detecting Hallucinations in Neural Machine Translation via Model Introspection
Neural sequence generation models are known to “hallucinate” by producing outputs that are unrelated to the source text. These hallucinations are potentially harmful, yet it remains unclear under what conditions they arise and how to mitigate their impact. In this work, we first identify internal mode...
Published in | Transactions of the Association for Computational Linguistics, Vol. 11, pp. 546–564
---|---
Main Authors | |
Format | Journal Article
Language | English
Published | MIT Press, One Broadway, 12th Floor, Cambridge, Massachusetts 02142, USA, 12.06.2023