In-context Learning with Transformer Is Really Equivalent to a Contrastive Learning Pattern

Pre-trained large language models based on Transformers have demonstrated remarkable in-context learning (ICL) abilities: given a few demonstration examples, the models can perform new tasks without any parameter updates. However, the mechanism underlying ICL remains an open question. In...
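As a loose illustration of the ICL setup the abstract describes, the sketch below assembles a few-shot prompt from demonstration pairs; the sentiment task, the demonstrations, and the commented-out `model.generate` call are hypothetical stand-ins for any frozen pre-trained model, not the paper's method.

```python
# Minimal sketch of in-context learning (ICL): the model's weights are
# never updated; all "learning" happens through the prompt itself.

def build_icl_prompt(demonstrations, query):
    """Format (input, output) demonstration pairs followed by a new query."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical sentiment task: two demonstrations, one unseen query.
demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise from start to finish.")
print(prompt)

# A frozen pre-trained model would then complete the prompt, e.g.:
# completion = model.generate(prompt)  # no gradient steps, no fine-tuning
```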

Bibliographic Details
Main Authors: Ren, Ruifeng; Liu, Yong
Format: Journal Article
Language: English
Published: 19.10.2023