Dialect-robust Evaluation of Generated Text
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 02.11.2022 |
Summary: Evaluation metrics that are not robust to dialect variation make it impossible to tell how well systems perform for many groups of users, and can even penalize systems for producing text in lower-resource dialects. However, there currently exists no way to quantify how metrics respond to a change in the dialect of a generated utterance. We therefore formalize dialect robustness and dialect awareness as goals for NLG evaluation metrics. We introduce a suite of methods and corresponding statistical tests that can be used to assess metrics in light of these two goals. Applying the suite to current state-of-the-art metrics, we demonstrate that they are not dialect-robust and that semantic perturbations frequently lead to smaller decreases in a metric than the introduction of dialect features. As a first step toward overcoming this limitation, we propose a training schema, NANO, which introduces regional and language information into a metric's pretraining process. We demonstrate that NANO provides a size-efficient way for models to improve dialect robustness while simultaneously improving performance on the standard metric benchmark.
DOI: 10.48550/arxiv.2211.00922
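
The robustness comparison described in the summary is easy to illustrate. Below is a minimal sketch that uses sentence-level BLEU (via sacrebleu) as a stand-in for the learned metrics the paper actually evaluates; the example sentences and the pairwise pass/fail check are illustrative assumptions, not the paper's data or exact protocol. A dialect-robust metric should score a meaning-preserving dialect rewrite above a meaning-changing semantic perturbation; surface-overlap metrics often do the opposite, which is the failure mode the paper documents.

```python
# Minimal sketch of the dialect-robustness comparison, assuming
# sacrebleu's sentence-level BLEU as a stand-in metric. The sentences
# below are illustrative, not the paper's test data.
import sacrebleu

reference = "She was not doing anything wrong."

# Meaning-preserving dialect variant (negative concord, a feature of
# several English dialects).
dialect_variant = "She wasn't doing nothing wrong."

# Semantic perturbation: small surface change, opposite meaning.
semantic_perturbation = "She was not doing anything right."

def metric(hypothesis: str, ref: str) -> float:
    """Score a hypothesis against a single reference (higher is better)."""
    return sacrebleu.sentence_bleu(hypothesis, [ref]).score

dialect_score = metric(dialect_variant, reference)
perturbed_score = metric(semantic_perturbation, reference)

print(f"dialect variant:       {dialect_score:.1f}")
print(f"semantic perturbation: {perturbed_score:.1f}")

# A dialect-robust metric should satisfy this inequality; BLEU, which
# rewards surface n-gram overlap, typically does not on pairs like this.
print("dialect-robust on this pair:", dialect_score > perturbed_score)
```

On this pair, the semantic perturbation shares more n-grams with the reference than the dialect rewrite does, so BLEU scores the meaning-changing sentence higher, reproducing in miniature the failure the paper reports for state-of-the-art metrics.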