Data Similarity is Not Enough to Explain Language Model Performance
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 15.11.2023 |
Subjects | |
Summary: | Published in EMNLP 2023. Large language models achieve high performance on many but not all downstream tasks. The interaction between pretraining data and task data is commonly assumed to determine this variance: a task with data that is more similar to a model's pretraining data is assumed to be easier for that model. We test whether distributional and example-specific similarity measures (embedding-, token- and model-based) correlate with language model performance through a large-scale comparison of the Pile and C4 pretraining datasets with downstream benchmarks. Similarity correlates with performance for multilingual datasets, but in other benchmarks, we surprisingly find that similarity metrics are not correlated with accuracy or even each other. This suggests that the relationship between pretraining data and downstream tasks is more complex than often assumed. |
DOI: | 10.48550/arxiv.2311.09006 |
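The abstract describes correlating data-similarity measures with downstream accuracy. As a rough illustration of that style of analysis (not the paper's actual metrics, corpora, or results), the sketch below computes a token-level TF-IDF cosine similarity between a sample of pretraining documents and each benchmark's examples, then checks the Spearman rank correlation of that similarity with per-benchmark accuracy. All texts, benchmark names, and accuracy values here are hypothetical placeholders.

```python
# Illustrative sketch only: one token-based similarity measure (TF-IDF cosine)
# correlated against per-benchmark accuracy. The paper compares several
# embedding-, token-, and model-based measures at much larger scale.
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder pretraining sample and toy "benchmarks" (not real data).
pretraining_sample = [
    "The quick brown fox jumps over the lazy dog.",
    "Stock markets rallied after the central bank announcement.",
    "Photosynthesis converts light energy into chemical energy.",
]
benchmarks = {
    "toy_news": ["Markets fell sharply as bank shares dropped."],
    "toy_science": ["Chlorophyll absorbs light during photosynthesis."],
    "toy_stories": ["A small dog chased a fox across the yard."],
}
accuracy = {"toy_news": 0.71, "toy_science": 0.64, "toy_stories": 0.80}  # hypothetical

# Fit one shared TF-IDF vocabulary so all vectors are comparable.
vectorizer = TfidfVectorizer()
vectorizer.fit(pretraining_sample + [t for ex in benchmarks.values() for t in ex])
pretrain_vecs = vectorizer.transform(pretraining_sample)

similarities, accuracies = [], []
for name, examples in benchmarks.items():
    task_vecs = vectorizer.transform(examples)
    # Mean cosine similarity of this benchmark's examples to the pretraining sample.
    similarities.append(cosine_similarity(task_vecs, pretrain_vecs).mean())
    accuracies.append(accuracy[name])

# Rank correlation between similarity and accuracy across benchmarks.
rho, p_value = spearmanr(similarities, accuracies)
print(f"Spearman rho between similarity and accuracy: {rho:.2f} (p={p_value:.2f})")
```

In this framing, the paper's finding is that such correlations hold for multilingual datasets but largely disappear on other benchmarks, and that different similarity measures often disagree with one another.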