How does the task complexity of masked pretraining objectives affect downstream performance?
Format: Journal Article
Language: English
Published: 18.05.2023
Summary: Masked language modeling (MLM) is a widely used self-supervised pretraining objective in which a model must predict the original token that has been replaced with a mask, given its context. Although simpler and computationally more efficient pretraining objectives, e.g., predicting the first character of a masked token, have recently shown results comparable to MLM, no objective with a masking scheme actually outperforms it on downstream tasks. Motivated by the assumption that their lack of complexity plays a vital role in this degradation, we validate whether more complex masked objectives can achieve better results and investigate how much complexity they need to perform comparably to MLM. Our results on the GLUE, SQuAD, and Universal Dependencies benchmarks demonstrate that more complex objectives tend to yield better downstream results, and that at least half of the complexity of MLM is needed to perform comparably to it. Finally, we discuss how a model should be pretrained with a masked objective from the task-complexity perspective.
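
For intuition only, here is a minimal Python sketch (not taken from the paper) of the idea behind the comparison in the abstract: the same masking scheme can produce targets of different complexity, from predicting the full original token (MLM, label space = vocabulary size) down to predicting only its first character (a much smaller label space). The function name, masking rate, and toy sentence are illustrative assumptions.

```python
import random

MASK = "[MASK]"
MASK_RATE = 0.4  # inflated above the usual 15% so this toy example visibly masks tokens


def make_masked_examples(tokens, objective="mlm", seed=1):
    """Replace ~MASK_RATE of tokens with [MASK] and build prediction targets.

    objective="mlm"        -> target is the full original token (label space = vocabulary size)
    objective="first_char" -> target is only the token's first character (label space ~ alphabet size),
                              i.e. a simpler, lower-complexity masked objective like the one
                              mentioned in the abstract.
    """
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < MASK_RATE:
            inputs.append(MASK)
            targets.append(tok if objective == "mlm" else tok[0])
        else:
            inputs.append(tok)
            targets.append(None)  # no loss is computed on unmasked positions
    return inputs, targets


if __name__ == "__main__":
    sentence = ["the", "cat", "sat", "on", "the", "mat"]
    for obj in ("mlm", "first_char"):
        masked, labels = make_masked_examples(sentence, objective=obj)
        print(obj, masked, labels)
```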
DOI: 10.48550/arxiv.2305.10992