Generalized Pseudo-Labeling in Consistency Regularization for Semi-Supervised Learning
Published in | 2023 IEEE International Conference on Image Processing (ICIP), pp. 525-529 |
Main Authors | , , , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 08.10.2023 |
Summary: | Semi-Supervised Learning (SSL) reduces annotation cost by exploiting large amounts of unlabeled data. A popular idea in SSL image classification is Pseudo-Labeling (PL), where the predictions of a network are used to assign a label to an unlabeled image. However, this practice exposes learning to confirmation bias. In this paper we propose Generalized Pseudo-Labeling (GPL), a simple and generic way to exploit negative pseudo-labels in consistency regularization, entailing minimal additional computational overhead and hyperparameter fine-tuning. GPL makes learning more robust by using the information that an image does not belong to a certain class, which is more abundant and reliable. We showcase GPL in the context of FixMatch. In the benchmark using only 40 labels of the CIFAR-10 dataset, adding GPL on top of FixMatch improves the error rate from 7.93% to 6.58%, and on CIFAR-100 with 2500 labels, from 28.02% to 26.85%. |
DOI: | 10.1109/ICIP49359.2023.10221965 |
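The summary's core idea (using negative pseudo-labels, i.e. classes an unlabeled image confidently does *not* belong to, as an extra consistency-regularization signal) can be sketched as follows. This is a minimal NumPy illustration under assumed conventions, not the paper's exact formulation: the function name, the negative-confidence threshold, and the `-log(1 - p)` penalty are all illustrative choices.

```python
import numpy as np

def gpl_negative_loss(weak_probs, strong_probs, neg_threshold=0.05, eps=1e-8):
    """Illustrative negative pseudo-label loss (hypothetical formulation).

    weak_probs:   (N, C) class probabilities from weakly augmented views.
    strong_probs: (N, C) class probabilities from strongly augmented views.

    Classes whose weak-view probability falls below `neg_threshold` are
    taken as negative pseudo-labels ("this image is NOT class c"), and the
    strong-view prediction is penalized for putting mass on them.
    """
    # Negative pseudo-labels: classes the weak view confidently rules out.
    neg_mask = weak_probs < neg_threshold
    # Penalize probability assigned to ruled-out classes: -log(1 - p).
    per_class = -np.log(1.0 - strong_probs + eps)
    per_image = (per_class * neg_mask).sum(axis=1)
    return per_image.mean()
```

For example, if the weak view rules out two of three classes and the strong view still assigns them 10% probability each, the loss is positive; it vanishes when the strong view puts no mass on the ruled-out classes. In FixMatch, such a term would be added to the usual positive pseudo-label (argmax above a high threshold) consistency loss.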