Tricks for Training Sparse Translation Models
Main Authors | |
Format | Journal Article |
Language | English |
Published | 15.10.2021 |
Summary | Multi-task learning with an unbalanced data distribution skews model learning towards high-resource tasks, especially when model capacity is fixed and fully shared across all tasks. Sparse scaling architectures, such as BASELayers, provide flexible mechanisms for different tasks to have a variable number of parameters, which can be useful to counterbalance skewed data distributions. We find that sparse architectures for multilingual machine translation can perform poorly out of the box, and propose two straightforward techniques to mitigate this: a temperature heating mechanism and dense pre-training. Overall, these methods improve performance on two multilingual translation benchmarks compared to standard BASELayers and Dense scaling baselines and, in combination, more than double model convergence speed. |
DOI | 10.48550/arxiv.2110.08246 |
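
For readers unfamiliar with temperature-scaled expert routing, the sketch below illustrates the general idea behind the "temperature heating" mechanism mentioned in the summary: token-to-expert routing scores are divided by a temperature that is gradually changed over training. This is a minimal illustration, not the authors' implementation; the function names, the softmax-style routing, and the linearly increasing schedule are assumptions.

```python
# Hypothetical sketch of temperature-scaled expert routing (not the paper's code).
import torch
import torch.nn.functional as F

def route_with_temperature(token_reprs: torch.Tensor,
                           expert_embeddings: torch.Tensor,
                           temperature: float) -> torch.Tensor:
    """Return a routing distribution over experts for each token.

    token_reprs:       (num_tokens, d_model)
    expert_embeddings: (num_experts, d_model)
    temperature:       higher values flatten the routing distribution
                       (more exploration); 1.0 recovers a plain softmax.
    """
    logits = token_reprs @ expert_embeddings.T          # (num_tokens, num_experts)
    return F.softmax(logits / temperature, dim=-1)      # temperature-scaled routing probs

def temperature_at_step(step: int, total_steps: int,
                        start: float = 1.0, end: float = 5.0) -> float:
    # Assumed linear "heating" schedule (temperature increases over training);
    # the paper's exact schedule and direction may differ.
    frac = min(step / max(total_steps, 1), 1.0)
    return start + frac * (end - start)

if __name__ == "__main__":
    tokens = torch.randn(8, 16)      # 8 tokens, hidden size 16
    experts = torch.randn(4, 16)     # 4 expert embeddings
    probs = route_with_temperature(tokens, experts,
                                   temperature_at_step(step=100, total_steps=1000))
    print(probs.shape, probs.sum(dim=-1))  # torch.Size([8, 4]); each row sums to 1
```

Note that BASELayers itself pairs such token-expert affinity scores with a balanced assignment step during training; the snippet only shows how a temperature could rescale the routing scores, not the full assignment procedure.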