Transformers can optimally learn regression mixture models
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 14.11.2023 |
Subjects | |
Online Access | Get full text |
Summary: | Mixture models arise in many regression problems, but most methods have seen limited adoption, partly due to these algorithms' highly tailored and model-specific nature. Transformers, on the other hand, are flexible neural sequence models that present the intriguing possibility of providing general-purpose prediction methods, even in this mixture setting. In this work, we investigate the hypothesis that transformers can learn an optimal predictor for mixtures of regressions. We construct a generative process for a mixture of linear regressions for which the decision-theoretic optimal procedure is given by data-driven exponential weights on a finite set of parameters. We observe that transformers achieve low mean-squared error on data generated via this process. By probing the transformer's output at inference time, we also show that transformers typically make predictions that are close to the optimal predictor. Our experiments further demonstrate that transformers can learn mixtures of regressions in a sample-efficient fashion and are somewhat robust to distribution shifts. We complement our experimental observations by proving constructively that the decision-theoretic optimal procedure is indeed implementable by a transformer. |
DOI: | 10.48550/arxiv.2311.08362 |
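The summary describes the decision-theoretic optimal predictor for this generative process as data-driven exponential weights over a finite set of candidate parameters. The snippet below is a minimal NumPy sketch of such an exponential-weights predictor, assuming Gaussian noise and a uniform prior over candidates; the function name, the `sigma` and `prior` arguments, and the demo setup are illustrative choices, not details taken from the paper.

```python
import numpy as np

def exponential_weights_predictor(X, y, betas, x_query, sigma=1.0, prior=None):
    """Posterior-mean (exponential-weights) prediction for a mixture of linear
    regressions over a finite set of candidate parameter vectors.

    X       : (n, d) observed covariates
    y       : (n,)   observed responses
    betas   : (K, d) candidate regression vectors (the finite parameter set)
    x_query : (d,)   covariate at which to predict
    sigma   : assumed Gaussian noise standard deviation (illustrative default)
    prior   : (K,)   prior weights over candidates (uniform if None)
    """
    K = betas.shape[0]
    prior = np.full(K, 1.0 / K) if prior is None else np.asarray(prior)

    # Squared-error loss of each candidate on the observed data.
    resid = y[None, :] - betas @ X.T                 # shape (K, n)
    neg_log_lik = 0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2

    # Exponential weights: softmax of prior log-weight minus cumulative loss.
    logw = np.log(prior) - neg_log_lik
    logw -= logw.max()                               # numerical stability
    w = np.exp(logw)
    w /= w.sum()

    # Weighted average of each candidate's prediction at the query point.
    return w @ (betas @ x_query)

if __name__ == "__main__":
    # Hypothetical demo: one mixture component generates the observed data.
    rng = np.random.default_rng(0)
    d, K, n = 5, 4, 20
    betas = rng.normal(size=(K, d))                  # finite candidate set
    true_beta = betas[rng.integers(K)]               # component for this task
    X = rng.normal(size=(n, d))
    y = X @ true_beta + 0.5 * rng.normal(size=n)
    x_query = rng.normal(size=d)
    print(exponential_weights_predictor(X, y, betas, x_query, sigma=0.5))
```

As more observations accumulate, the weights concentrate on the candidate that generated the data, so the prediction approaches that component's linear response; this is the behavior the paper's probing experiments compare the transformer's output against.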