Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models
Format | Journal Article
---|---
Language | English
Published | 28.05.2024
Summary: Long-context modeling capabilities are important for large language models (LLMs) in various applications. However, directly training LLMs with long context windows is insufficient to enhance this capability, since some training samples do not exhibit strong semantic dependencies across long contexts. In this study, we propose a data mining framework **ProLong** that assigns each training sample a long-dependency score, which can be used to rank and filter samples that are more advantageous for enhancing long-context modeling abilities in LLM training. Specifically, we first use delta perplexity scores to measure the *Dependency Strength* between text segments in a given document. Then we refine this metric based on the *Dependency Distance* of these segments to incorporate spatial relationships across long contexts. Final results are calibrated with a *Dependency Specificity* metric to prevent trivial dependencies introduced by repetitive patterns. Moreover, a random sampling approach is proposed to optimize the computational efficiency of ProLong. Comprehensive experiments on multiple benchmarks indicate that ProLong effectively identifies documents that carry long dependencies, and LLMs trained on these documents exhibit significantly enhanced long-context modeling capabilities.
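
For concreteness, the pipeline the abstract describes can be sketched roughly as below. This is a minimal illustration assuming segment-pair scoring with an off-the-shelf causal LM ("gpt2" as a stand-in); the function names, the linear distance weighting, and the token-overlap proxy for Dependency Specificity are assumptions, since the abstract does not give the exact formulas.

```python
# Hedged sketch of a ProLong-style long-dependency score, reconstructed from
# the abstract alone. Weighting choices and the specificity proxy are
# illustrative assumptions, not the paper's exact method.
import math
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def segment_perplexity(target: str, context: str = "") -> float:
    """Perplexity of `target`, optionally conditioned on a preceding `context`."""
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    if context:
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)  # pair must fit the context window
        labels = input_ids.clone()
        labels[:, : ctx_ids.size(1)] = -100  # mask context so only target tokens are scored
    else:
        input_ids, labels = tgt_ids, tgt_ids.clone()
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss  # mean NLL over unmasked tokens
    return math.exp(loss.item())

def dependency_strength(context_seg: str, target_seg: str) -> float:
    """Delta perplexity: how much a distant segment helps predict the target."""
    gain = segment_perplexity(target_seg) - segment_perplexity(target_seg, context_seg)
    return max(gain, 0.0)

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of token sets, used here as a crude repetition signal."""
    sa, sb = set(tokenizer.tokenize(a)), set(tokenizer.tokenize(b))
    return len(sa & sb) / max(len(sa | sb), 1)

def long_dependency_score(segments: list[str], n_pairs: int = 32) -> float:
    """Distance-weighted, specificity-calibrated strength over sampled segment pairs."""
    pairs = [(i, j) for i in range(len(segments)) for j in range(i + 1, len(segments))]
    if not pairs:
        return 0.0
    sampled = random.sample(pairs, min(n_pairs, len(pairs)))  # random sampling for efficiency
    total = 0.0
    for i, j in sampled:
        strength = dependency_strength(segments[i], segments[j])
        distance = (j - i) / len(segments)  # normalized Dependency Distance
        specificity = 1.0 - token_overlap(segments[i], segments[j])  # discount repetition
        total += strength * distance * specificity
    return total / len(sampled)
```

Documents would then be ranked by this score, with the top-scoring fraction kept for long-context training, as the abstract's rank-and-filter description suggests.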
DOI: 10.48550/arxiv.2405.17915