Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning
Format: Journal Article
Language: English
Published: 13.08.2023
Summary: Semi-supervised learning is attracting growing attention due to its success in leveraging unlabeled data. To mitigate potentially incorrect pseudo labels, recent frameworks mostly set a fixed confidence threshold to discard uncertain samples. This practice ensures high-quality pseudo labels, but incurs a relatively low utilization of the whole unlabeled set. In this work, our key insight is that these uncertain samples can be turned into certain ones, as long as the confusion classes for the top-1 class are detected and removed. Motivated by this, we propose a novel method dubbed ShrinkMatch to learn from uncertain samples. For each uncertain sample, it adaptively seeks a shrunk class space, which contains only the original top-1 class and the remaining less likely classes. Since the confusion classes are removed from this space, the re-calculated top-1 confidence can satisfy the pre-defined threshold. We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations. Furthermore, considering the varied reliability among uncertain samples and the gradually improving model during training, we correspondingly design two reweighting principles for our uncertain loss. Our method exhibits impressive performance on widely adopted benchmarks. Code is available at https://github.com/LiheYoung/ShrinkMatch.
DOI: 10.48550/arxiv.2308.06777
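The abstract's core idea — removing confusion classes until the renormalized top-1 confidence clears the threshold — can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' implementation: the function name `shrink_class_space`, the per-sample greedy removal order, and the renormalization step are all assumptions based only on the abstract's description; the paper's actual procedure (and its reweighting principles) may differ.

```python
import numpy as np

def shrink_class_space(probs, tau=0.95):
    """Hypothetical sketch of ShrinkMatch-style class-space shrinking.

    probs: 1-D array of softmax probabilities over all classes for one sample.
    tau:   the pre-defined confidence threshold.

    Returns None if the sample is already certain, otherwise the indices of
    the shrunk class space (the original top-1 class plus the remaining
    less-likely classes, with the confusion classes removed).
    """
    order = np.argsort(probs)[::-1]   # classes sorted by descending confidence
    top1 = order[0]
    if probs[top1] >= tau:
        return None                    # already confident: no shrinking needed

    # Greedily remove the most-confusing runner-up classes (order[1], order[2],
    # ...) until the top-1 confidence, renormalized over the kept classes,
    # reaches tau. At worst only the top-1 class remains, giving confidence 1.
    for k in range(1, len(order)):
        kept = np.concatenate(([top1], order[k + 1:]))   # drop order[1..k]
        conf = probs[top1] / probs[kept].sum()           # renormalized top-1
        if conf >= tau:
            return kept
    return np.array([top1])
```

In this shrunk space the sample counts as certain again, so a consistency loss between the weak and strong views can be applied over the kept classes, weighted by the reliability terms the abstract mentions.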