Self-Induced Curriculum Learning in Self-Supervised Neural Machine Translation
Main Authors | Ruiter, Dana; van Genabith, Josef; España-Bonet, Cristina |
Format | Journal Article |
Language | English |
Published | 07.04.2020 |
Summary | Self-supervised neural machine translation (SSNMT) jointly learns to identify and select suitable training data from comparable (rather than parallel) corpora and to translate, in a way that the two tasks support each other in a virtuous circle. In this study, we provide an in-depth analysis of the sampling choices the SSNMT model makes during training. We show how, without having been told to do so, the model self-selects samples of increasing (i) complexity and (ii) task relevance, in combination with (iii) performing a denoising curriculum. We observe that the dynamics of the mutual-supervision signals between the system's two internal representation types are vital for extraction and translation performance. We show that, in terms of the Gunning-Fog readability index, SSNMT starts by extracting and learning from Wikipedia data suitable for high-school students and quickly moves towards content suitable for first-year undergraduate students. |
DOI | 10.48550/arxiv.2004.03151 |
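
A note on the extraction mechanism described in the summary: the "mutual-supervision signals of the system's two internal representation types" refer to SSNMT scoring candidate sentence pairs with two views computed from the model itself, and accepting a pair only when both views agree. Below is a minimal, hypothetical Python sketch of such an agreement filter; the function names, the plain cosine scoring, and the exact acceptance rule are illustrative assumptions based on the summary, and the paper's actual scoring and acceptance criteria may differ.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(u @ v) / (float(np.linalg.norm(u)) * float(np.linalg.norm(v)) + 1e-9)

def select_pairs(src_w, tgt_w, src_h, tgt_h):
    """Accept a candidate (src, tgt) sentence pair only when both internal
    representation types independently pick the same target sentence as the
    best match for a given source sentence.

    src_w, tgt_w: first representation type, one row per sentence (n x d arrays)
    src_h, tgt_h: second representation type, one row per sentence (n x d arrays)
    Returns a list of (src_index, tgt_index) pairs accepted for training.
    """
    accepted = []
    for i in range(src_w.shape[0]):
        best_w = max(range(tgt_w.shape[0]), key=lambda j: cosine(src_w[i], tgt_w[j]))
        best_h = max(range(tgt_h.shape[0]), key=lambda j: cosine(src_h[i], tgt_h[j]))
        if best_w == best_h:  # the two internal views agree -> keep the pair
            accepted.append((i, best_w))
    return accepted

# Toy usage with random vectors standing in for model-internal representations.
rng = np.random.default_rng(0)
src_w, tgt_w = rng.normal(size=(5, 8)), rng.normal(size=(6, 8))
src_h, tgt_h = rng.normal(size=(5, 8)), rng.normal(size=(6, 8))
print(select_pairs(src_w, tgt_w, src_h, tgt_h))
```

Because the representations come from the model being trained, the filter tightens as translation quality improves, which is the "virtuous circle" the summary describes.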
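
The readability claim at the end of the summary can be made concrete with the standard Gunning-Fog index, a general-purpose measure (not specific to this paper) that estimates the years of formal education needed to understand a text on first reading; scores of roughly 10-12 correspond to high-school level and 13-16 to undergraduate level. The formula, where "complex words" are those of three or more syllables:

```latex
\mathrm{GFI} = 0.4 \left( \frac{\text{words}}{\text{sentences}}
             + 100 \cdot \frac{\text{complex words}}{\text{words}} \right)
```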