Unsupervised Transfer Learning via Adversarial Contrastive Training
Main Authors | |
Format | Journal Article |
Language | English |
Published | 16.08.2024 |
Summary | Learning a data representation for downstream supervised learning tasks in an unlabeled scenario is both critical and challenging. In this paper, we propose a novel unsupervised transfer learning approach using adversarial contrastive training (ACT). Our experimental results demonstrate outstanding classification accuracy under both the fine-tuned linear probe and k-NN evaluation protocols across various datasets, showing competitiveness with existing state-of-the-art self-supervised learning methods. Moreover, we provide an end-to-end theoretical guarantee for downstream classification tasks in a misspecified, over-parameterized setting, highlighting how a large amount of unlabeled data contributes to prediction accuracy. Our theoretical findings suggest that the testing error of downstream tasks depends solely on the efficiency of the data augmentation used in ACT when the unlabeled sample size is sufficiently large. This offers a theoretical understanding of learning downstream tasks with a small labeled sample size. |
DOI | 10.48550/arxiv.2408.08533 |
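The summary names two concrete procedures: adversarial contrastive pre-training of an encoder, and a k-NN protocol for evaluating the learned representation. The record does not reproduce the paper's actual ACT objective, so the PyTorch sketch below is only a hypothetical reading of a generic adversarial contrastive step: an InfoNCE-style loss in which one augmented view is perturbed by a single FGSM-style step within an L-infinity ball to maximize the loss, after which the encoder is updated to minimize it. The function names (`info_nce`, `act_step`) and hyperparameters (`eps`, `temperature`) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of one adversarial contrastive training step; this is
# NOT the paper's exact ACT objective, only a generic min-max contrastive step.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE loss over two embedding batches; row i of z1 and z2 are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def act_step(encoder, optimizer, x1, x2, eps=0.03, temperature=0.5):
    """Inner maximization: perturb the second view with one FGSM-style step
    inside an L-inf ball of radius eps. Outer minimization: update the encoder
    on the perturbed pair."""
    # Inner step: find a perturbation of x2 that increases the contrastive loss.
    delta = torch.zeros_like(x2, requires_grad=True)
    loss_adv = info_nce(encoder(x1), encoder(x2 + delta), temperature)
    grad, = torch.autograd.grad(loss_adv, delta)
    x2_adv = (x2 + eps * grad.sign()).detach()

    # Outer step: minimize the contrastive loss on the adversarial pair.
    optimizer.zero_grad()
    loss = info_nce(encoder(x1), encoder(x2_adv), temperature)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The k-NN evaluation protocol mentioned in the summary is more standard; a minimal sketch, assuming scikit-learn's `KNeighborsClassifier`, is to freeze the pre-trained encoder, embed the labeled train and test splits, and label each test point by its nearest training embeddings (the choice of `k=20` and cosine distance are assumptions, not the paper's stated settings).

```python
# Hypothetical k-NN evaluation of frozen embeddings (NumPy arrays of shape
# (n_samples, dim)), returning top-1 test accuracy.
from sklearn.neighbors import KNeighborsClassifier

def knn_eval(train_emb, train_labels, test_emb, test_labels, k=20):
    clf = KNeighborsClassifier(n_neighbors=k, metric="cosine")
    clf.fit(train_emb, train_labels)
    return clf.score(test_emb, test_labels)
```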