Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | 19.12.2023 |
---|---|
Summary: | Machine Learning (ML) in low-data settings remains an underappreciated yet crucial problem. Data augmentation methods that increase the sample size of small datasets are therefore key to unlocking the transformative potential of ML in data-deprived regions and domains. Unfortunately, the limited training set constrains traditional tabular synthetic data generators in their ability to generate the large and diverse augmented dataset needed for ML tasks. To address this challenge, we introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime. However, as with any generative model, not all of the data generated by LLMs improves downstream utility. Consequently, we introduce a principled curation mechanism, leveraging learning dynamics coupled with confidence and uncertainty metrics, to obtain a high-quality dataset. Empirically, on multiple real-world datasets, we demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators. Additionally, we provide insights into the LLM generation and curation mechanisms, shedding light on the features that enable them to output high-quality augmented datasets. |
---|---|
DOI: | 10.48550/arxiv.2312.12112 |
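The curation step described in the abstract — scoring each LLM-generated sample by confidence and uncertainty derived from a downstream model's learning dynamics, and keeping only high-quality samples — might be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the statistics used (mean and standard deviation of the per-epoch probability assigned to each sample's own label) and the threshold values are assumptions.

```python
import numpy as np

def curate_by_learning_dynamics(label_probs_over_epochs,
                                conf_thresh=0.7, unc_thresh=0.2):
    """Hypothetical curation sketch.

    label_probs_over_epochs: array of shape (epochs, n_samples), where
    entry [e, i] is the probability a downstream classifier assigned to
    sample i's own label at training epoch e.
    Returns a boolean mask of samples to keep.
    """
    # Confidence: how correct the model is on this sample, on average
    # across training epochs.
    confidence = label_probs_over_epochs.mean(axis=0)
    # Uncertainty: how much that correctness fluctuates across epochs.
    uncertainty = label_probs_over_epochs.std(axis=0)
    # Keep samples that are learned confidently and consistently.
    return (confidence >= conf_thresh) & (uncertainty <= unc_thresh)
```

For example, a sample whose label probability hovers near 0.9 across epochs would be kept, one stuck near 0.25 would be discarded as low-confidence, and one oscillating between 0.1 and 0.9 would be discarded as high-uncertainty.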