Pareto Self-Supervised Training for Few-Shot Learning
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 15.04.2021 |
Subjects | |
Summary: | While few-shot learning (FSL) aims for rapid generalization to new concepts
with little supervision, self-supervised learning (SSL) constructs supervisory
signals computed directly from unlabeled data. Exploiting the complementarity
of these two paradigms, few-shot auxiliary learning has recently drawn much
attention as a way to cope with scarce labeled data. Previous works benefit from
sharing inductive bias between the main task (FSL) and auxiliary tasks (SSL), where
the shared parameters of the tasks are optimized by minimizing a linear combination
of task losses. However, it is challenging to select proper weights to balance the
tasks and reduce task conflict. To handle the problem as a whole, we propose a
novel approach named Pareto self-supervised training (PSST) for FSL. PSST
explicitly decomposes the few-shot auxiliary problem into multiple constrained
multi-objective subproblems with different trade-off preferences, and identifies
a preference region in which the main task achieves the best performance. An
effective preferred Pareto exploration is then proposed to find a set of optimal
solutions within this preference region. Extensive experiments on several public
benchmark datasets validate the effectiveness of our approach, which achieves
state-of-the-art performance. |
---|---|
DOI: | 10.48550/arxiv.2104.07841 |
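
The abstract contrasts two ways of optimizing the shared parameters: the conventional fixed linear combination of the FSL and SSL task losses, and PSST's preference-constrained multi-objective formulation. The snippet below is a minimal, hypothetical sketch of the baseline weighted-sum setup only; the encoder, heads, dimensions, and weights are illustrative assumptions, not the authors' released code. PSST would instead restrict the weights to a preference region favoring the main task and explore Pareto-optimal trade-offs inside it rather than fixing them in advance.

```python
# Hypothetical sketch of few-shot auxiliary learning with a fixed preference
# vector over task losses (the baseline the abstract criticizes). All names,
# shapes, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Linear(32, 16)   # stand-in for a shared backbone (e.g., ResNet-12)
fsl_head = nn.Linear(16, 5)   # main task head: 5-way few-shot classification
ssl_head = nn.Linear(16, 4)   # auxiliary head: e.g., 4-way rotation prediction

opt = torch.optim.SGD(
    list(encoder.parameters())
    + list(fsl_head.parameters())
    + list(ssl_head.parameters()),
    lr=0.1,
)

x = torch.randn(20, 32)              # toy batch of 20 examples
y_main = torch.randint(0, 5, (20,))  # few-shot class labels
y_aux = torch.randint(0, 4, (20,))   # self-supervised pretext labels

# Preference vector (w_main, w_aux); PSST would constrain it to a region where
# the main task performs best and explore Pareto-optimal solutions inside it,
# instead of committing to one fixed value as done here.
pref = torch.tensor([0.8, 0.2])

feats = encoder(x)
loss_main = nn.functional.cross_entropy(fsl_head(feats), y_main)
loss_aux = nn.functional.cross_entropy(ssl_head(feats), y_aux)

# Fixed linear combination of task losses over the shared parameters.
total = pref[0] * loss_main + pref[1] * loss_aux
opt.zero_grad()
total.backward()
opt.step()
```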