LaplaceNet: A Hybrid Graph-Energy Neural Network for Deep Semisupervised Classification

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 4, pp. 5306-5318
Main Authors: Sellars, Philip; Aviles-Rivero, Angelica I.; Schönlieb, Carola-Bibiane
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2024
Summary: Semisupervised learning (SSL) has received a lot of recent attention because it alleviates the need for large amounts of labeled data, which can be expensive to obtain, require expert knowledge, and be time-consuming to collect. Recent developments in deep semisupervised classification have reached unprecedented performance, and the gap between supervised learning and SSL is ever-decreasing. These performance gains, however, have relied on numerous technical tricks, strong augmentation techniques, and costly optimization schemes with multiterm loss functions. We propose a new framework, LaplaceNet, for deep semisupervised classification that has a greatly reduced model complexity. We use a hybrid approach in which pseudolabels are produced by minimizing the Laplacian energy on a graph; these pseudolabels are then used to iteratively train a neural-network backbone. Our model outperforms state-of-the-art methods for deep semisupervised classification over several benchmark datasets. Furthermore, we consider the application of strong augmentations to neural networks theoretically and justify the use of a multisampling approach for SSL. We demonstrate, through rigorous experimentation, that a multisampling augmentation approach improves generalization and reduces the sensitivity of the network to augmentation.
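
The summary describes the core mechanism only at a high level. Below is a minimal Python sketch of graph-Laplacian pseudolabeling in the spirit of the abstract, not the authors' implementation: the function name laplacian_pseudolabels and the hyperparameters k and alpha are illustrative assumptions, and the graph construction, solver, and any class-balancing steps used in LaplaceNet may differ.

```python
# Illustrative sketch (not the authors' code): produce pseudolabels by
# minimizing a regularized Laplacian energy on a k-NN graph built over
# backbone embeddings. `k` and `alpha` are assumed hyperparameters.
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import cg
from sklearn.neighbors import kneighbors_graph

def laplacian_pseudolabels(features, labels, labeled_mask, n_classes,
                           k=50, alpha=0.99):
    """features: (n, d) embeddings; labels: (n,) ints, valid only where
    labeled_mask is True. Returns (n,) integer pseudolabels."""
    n = features.shape[0]
    # Symmetrized k-NN affinity graph over the embeddings.
    W = kneighbors_graph(features, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)
    # Symmetrically normalized adjacency S = D^{-1/2} W D^{-1/2}.
    deg = np.asarray(W.sum(axis=1)).ravel()
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    D = csr_matrix((d, (np.arange(n), np.arange(n))), shape=(n, n))
    S = D @ W @ D
    # One-hot label matrix; rows of unlabeled points stay zero.
    Y = np.zeros((n, n_classes))
    Y[labeled_mask, labels[labeled_mask]] = 1.0
    # Minimizing the regularized Laplacian energy amounts to solving
    # (I - alpha * S) Z = Y; solve per class with conjugate gradients.
    A = identity(n, format="csr") - alpha * S
    Z = np.zeros_like(Y)
    for c in range(n_classes):
        Z[:, c], _ = cg(A, Y[:, c], maxiter=50)
    return Z.argmax(axis=1)
```

In the iterative scheme the abstract describes, one would alternate between extracting embeddings with the current backbone, recomputing pseudolabels with a call such as the one above, and training the backbone on the labeled and pseudolabeled data (drawing multiple augmented samples per image under the multisampling scheme).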
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2022.3203315