In-context Learning with Transformer Is Really Equivalent to a Contrastive Learning Pattern
Pre-trained large language models based on Transformers have demonstrated remarkable in-context learning (ICL) abilities. Given several demonstration examples, the models can perform new tasks without any parameter updates. However, the mechanism underlying ICL remains an open question. In...
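The few-shot setup the abstract describes (demonstrations in the prompt, no parameter updates) can be sketched as simple prompt construction. This is an illustrative sketch only; the function name and the `Input:`/`Output:` template are assumptions, not taken from the paper.

```python
def build_icl_prompt(demonstrations, query):
    """Assemble an in-context learning prompt from (input, output) pairs.

    The model sees the solved examples followed by the unsolved query;
    learning happens purely through the forward pass, with no weight updates.
    """
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    blocks.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(blocks)


# Hypothetical sentiment-labeling task used as demonstrations.
demos = [("great movie", "positive"), ("boring plot", "negative")]
prompt = build_icl_prompt(demos, "loved the soundtrack")
print(prompt)
```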
Main Authors: ,
Format: Journal Article
Language: English
Published: 19.10.2023