Large Language Models can Contrastively Refine their Generation for Better Sentence Representation Learning
Format | Journal Article |
---|---|
Language | English |
Published | 16.10.2023 |
Summary: Recently, large language models (LLMs) have emerged as a groundbreaking technology, and their unparalleled text generation capabilities have sparked interest in applying them to the fundamental task of sentence representation learning. Existing methods have explored using LLMs as data annotators to generate synthesized data for training contrastive-learning-based sentence embedding models such as SimCSE. However, since contrastive learning models are sensitive to the quality of sentence pairs, the effectiveness of these methods is largely determined by the content the LLM generates, highlighting the need for more refined generation in the context of sentence representation learning. Building upon this premise, we propose MultiCSR, a multi-level contrastive sentence representation learning framework. It decomposes the process of prompting LLMs to generate a corpus for training base sentence embedding models into three stages (i.e., sentence generation, sentence pair construction, and in-batch training) and refines the generated content at each of these stages, ensuring that only high-quality sentence pairs are used to train the base contrastive learning model. Our extensive experiments reveal that MultiCSR enables a less advanced LLM to surpass the performance of ChatGPT, while applying it to ChatGPT yields even better, state-of-the-art results. Comprehensive analyses further underscore the potential of our framework in various application scenarios and for achieving better sentence representation learning with LLMs.
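The summary names the three stages but not their refinement criteria. As a rough illustration only, the Python sketch below wires hypothetical stage functions into that pipeline; `generate_sentences`, `construct_positive`, `score_pair_quality`, and the quality threshold are all assumptions for illustration, not the paper's actual API.

```python
from typing import Callable, Iterable

def build_contrastive_corpus(
    generate_sentences: Callable[[Iterable[str]], list[str]],  # stage 1: prompt the LLM
    construct_positive: Callable[[str], str],                  # stage 2: LLM writes a positive
    score_pair_quality: Callable[[str, str], float],           # stage 2: quality filter
    seed_corpus: Iterable[str],
    quality_threshold: float = 0.8,  # hypothetical cutoff, not from the paper
) -> list[tuple[str, str]]:
    """Hypothetical three-stage refinement pipeline suggested by the summary.

    Stage 1: sentence generation -- produce candidate sentences with an LLM.
    Stage 2: sentence pair construction -- build (anchor, positive) pairs and
             discard pairs whose quality score falls below a threshold.
    Stage 3: in-batch training -- the surviving pairs feed a SimCSE-style
             contrastive objective (see the loss sketch after the DOI).
    """
    candidates = generate_sentences(seed_corpus)
    pairs: list[tuple[str, str]] = []
    for sentence in candidates:
        positive = construct_positive(sentence)
        if score_pair_quality(sentence, positive) >= quality_threshold:
            pairs.append((sentence, positive))
    return pairs
```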
DOI: 10.48550/arxiv.2310.10962
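For reference, the "in-batch training" stage mentioned in the summary presumably builds on the standard SimCSE-style InfoNCE objective, in which the other sentences in a batch serve as negatives. A minimal PyTorch sketch follows; the temperature value is a common default, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb: torch.Tensor,
                              positive_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """SimCSE-style InfoNCE loss over a batch of (anchor, positive) pairs.

    Row i of positive_emb is the positive for row i of anchor_emb; every
    other row in the batch acts as an in-batch negative.
    """
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)
    # Cosine similarities between every anchor and every candidate, scaled
    # by the temperature; the diagonal holds the true pairs.
    sim = anchor @ positive.T / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```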