In-Context Learning through the Bayesian Prism
Format | Journal Article
Language | English
Published | 07.06.2023
DOI | 10.48550/arxiv.2306.04891
Summary: In-context learning (ICL) is one of the surprising and useful features of large language models and a subject of intense research. Recently, stylized meta-learning-like ICL setups have been devised that train transformers on sequences of input-output pairs $(x, f(x))$. The function $f$ comes from a function class, and generalization is checked by evaluating on sequences generated from unseen functions from the same class. One of the main discoveries in this line of research has been that for several function classes, such as linear regression, transformers successfully generalize to new functions in the class. However, the inductive biases of these models that give rise to this behavior are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. In this paper we empirically examine how far this Bayesian perspective can help us understand ICL. To this end, we generalize the previous meta-ICL setup to a hierarchical meta-ICL setup that involves unions of multiple task families. We instantiate this setup on a diverse range of linear and nonlinear function families and find that transformers can do ICL in this setting as well. Where Bayesian inference is tractable, we find evidence that high-capacity transformers mimic the Bayesian predictor. The Bayesian perspective provides insights into the inductive bias of ICL and into how transformers perform a particular task when they are trained on multiple tasks. We also find that transformers can learn to generalize to new function classes that were not seen during pretraining. This involves deviation from the Bayesian predictor. We examine these deviations in more depth, offering new insights and hypotheses.
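The meta-ICL data-generation procedure and the Bayesian predictor the abstract refers to can be made concrete. The sketch below is illustrative, not the paper's actual code: the choice of function families, dimensions, and the noiseless-label assumption are assumptions for this example. For the linear family with a Gaussian prior on the weights, the Bayesian (posterior-mean) predictor on the in-context examples reduces to ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sequence(dim=8, n_points=16, task_family="linear"):
    """Sample one meta-ICL sequence (x_1, f(x_1), ..., x_n, f(x_n)).

    Hypothetical hierarchical setup: pick a task family, draw a function f
    from that family, then evaluate it on fresh Gaussian inputs.
    """
    xs = rng.standard_normal((n_points, dim))
    w = rng.standard_normal(dim)               # weights w ~ N(0, I)
    if task_family == "linear":
        ys = xs @ w                            # f(x) = w . x
    else:
        ys = np.maximum(xs @ w, 0.0)           # illustrative nonlinear family: relu(w . x)
    return xs, ys

def bayes_predictor_linear(xs, ys, x_query, noise_var=0.0, prior_var=1.0):
    """Posterior-mean prediction for the linear family with w ~ N(0, prior_var * I).

    With a Gaussian prior and Gaussian label noise, the Bayesian predictor
    is ridge regression: w_hat = (X^T X + (noise_var/prior_var) I)^{-1} X^T y.
    """
    dim = xs.shape[1]
    lam = noise_var / prior_var
    w_hat = np.linalg.solve(xs.T @ xs + (lam + 1e-9) * np.eye(dim), xs.T @ ys)
    return x_query @ w_hat

# With enough noiseless in-context examples, the posterior mean pins down
# the underlying linear function, so predictions match the true f.
xs, ys = sample_sequence(dim=8, n_points=32, task_family="linear")
x_q = rng.standard_normal(8)
w_true = np.linalg.lstsq(xs, ys, rcond=None)[0]  # recoverable: labels are noiseless
print(abs(bayes_predictor_linear(xs, ys, x_q) - x_q @ w_true) < 1e-6)
```

A trained transformer's ICL predictions can then be compared point-by-point against `bayes_predictor_linear` on the same prompts, which is the kind of comparison the paper uses where Bayesian inference is tractable.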