In-context Learning with Transformer Is Really Equivalent to a Contrastive Learning Pattern
Main Authors | , |
---|---|
Format | Journal Article |
Language | English |
Published | 19.10.2023 |
Summary: | Pre-trained large language models based on Transformers have demonstrated remarkable in-context learning (ICL) abilities: given several demonstration examples, the models can perform new tasks without any parameter updates. However, the mechanism underlying ICL remains an open question. In this paper, we interpret the inference process of ICL as a gradient descent process in a contrastive learning pattern. First, leveraging kernel methods, we establish the relationship between gradient descent and the self-attention mechanism under the commonly used softmax attention setting rather than the linear attention setting. Then, we analyze the corresponding gradient descent process of ICL from the perspective of contrastive learning without negative samples and discuss possible improvements of this contrastive learning pattern, based on which the self-attention layer can be further modified. Finally, we design experiments to support our analysis. To the best of our knowledge, our work is the first to provide an understanding of ICL from the perspective of contrastive learning, and it has the potential to facilitate future model design by referring to related works on contrastive learning. |
---|---|
DOI: | 10.48550/arxiv.2310.13220 |
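The abstract refers to a kernel-method view of softmax attention as the bridge between self-attention and gradient descent. The sketch below is not the paper's construction; it only illustrates the standard identity that softmax attention for a single query is a kernel-weighted (Nadaraya–Watson style) average over the demonstration tokens, with the exponential of the dot product playing the role of the kernel. The function names and the toy data are illustrative assumptions.

```python
# Minimal sketch (assumed illustration, not the authors' derivation):
# softmax attention rewritten as a kernel-weighted average over keys/values.
import numpy as np

def exp_kernel(q, k):
    # exp(<q, k>) serves as an unnormalized kernel similarity.
    return np.exp(q @ k)

def softmax_attention(q, K, V):
    # Standard softmax attention for one query q:
    #   output = sum_i softmax(q . k_i) * v_i
    scores = K @ q                      # (n,) dot products with each key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()            # softmax normalization
    return weights @ V                  # weighted average of values

def kernel_smoother(q, K, V):
    # The same computation phrased as a kernel estimator:
    #   output = sum_i kappa(q, k_i) * v_i / sum_j kappa(q, k_j)
    kappa = np.array([exp_kernel(q, k) for k in K])
    return (kappa @ V) / kappa.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 4, 6
    q = rng.normal(size=d)
    K = rng.normal(size=(n, d))
    V = rng.normal(size=(n, d))
    # Both formulations agree up to floating-point error.
    print(np.allclose(softmax_attention(q, K, V), kernel_smoother(q, K, V)))
```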