Generalized Pseudo-Labeling in Consistency Regularization for Semi-Supervised Learning

Bibliographic Details
Published in: 2023 IEEE International Conference on Image Processing (ICIP), pp. 525 - 529
Main Authors: Karaliolios, Nikolaos; Chabot, Florian; Dupont, Camille; Le Borgne, Herve; Pham, Quoc-Cuong; Audigier, Romaric
Format: Conference Proceeding
Language: English
Published: IEEE, 08.10.2023

Summary: Semi-Supervised Learning (SSL) reduces annotation cost by exploiting large amounts of unlabeled data. A popular idea in SSL image classification is Pseudo-Labeling (PL), where the predictions of a network are used to assign a label to an unlabeled image. However, this practice exposes learning to confirmation bias. In this paper we propose Generalized Pseudo-Labeling (GPL), a simple and generic way to exploit negative pseudo-labels in consistency regularization, entailing minimal additional computational overhead and hyperparameter fine-tuning. GPL makes learning more robust by using the information that an image does not belong to a certain class, which is more abundant and reliable than positive label information. We showcase GPL in the context of FixMatch. On the CIFAR-10 benchmark with only 40 labels, adding GPL on top of FixMatch improves the error rate from 7.93% to 6.58%; on CIFAR-100 with 2500 labels, from 28.02% to 26.85%.
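
To make the idea in the summary concrete, the sketch below shows one plausible way to add a negative pseudo-label term to a FixMatch-style consistency loss: classes that receive very low probability on the weakly augmented view are treated as negative pseudo-labels, and the strongly augmented view is penalized for assigning them probability mass. This is a minimal illustration written against the summary only; the thresholds, the loss weight, and the function name are illustrative assumptions, not the authors' exact formulation.

    # Illustrative sketch only: a FixMatch-style consistency loss extended
    # with negative pseudo-labels, loosely following the GPL idea above.
    # Thresholds, weights, and names are assumptions, not the paper's method.
    import torch
    import torch.nn.functional as F

    def gpl_consistency_loss(logits_weak, logits_strong,
                             pos_threshold=0.95, neg_threshold=0.05,
                             neg_weight=1.0):
        # Predictions on the weakly augmented view act as the teacher.
        probs_weak = torch.softmax(logits_weak.detach(), dim=-1)

        # Positive pseudo-labels (standard FixMatch): only confident weak-view
        # predictions supervise the strongly augmented view.
        max_probs, pseudo_labels = probs_weak.max(dim=-1)
        pos_mask = (max_probs >= pos_threshold).float()
        pos_loss = (F.cross_entropy(logits_strong, pseudo_labels,
                                    reduction="none") * pos_mask).mean()

        # Negative pseudo-labels: classes the weak view deems very unlikely.
        # Such negatives are more abundant and more reliable than positives.
        neg_mask = (probs_weak <= neg_threshold).float()
        probs_strong = torch.softmax(logits_strong, dim=-1)
        # Penalize probability mass the strong view puts on negative classes:
        # -log(1 - p) per flagged class, averaged over all flagged entries.
        neg_loss = -(torch.log1p(-probs_strong.clamp(max=1 - 1e-6))
                     * neg_mask).sum() / neg_mask.sum().clamp(min=1.0)

        return pos_loss + neg_weight * neg_loss

In a training loop, this loss would be computed on unlabeled batches and added to the usual supervised cross-entropy on labeled data, in the same place FixMatch applies its consistency term.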
DOI: 10.1109/ICIP49359.2023.10221965