Generative Adversarial Training for Supervised and Semi-supervised Learning

Bibliographic Details
Published in: Frontiers in Neurorobotics, Vol. 16, p. 859610
Main Authors: Wang, Xianmin; Li, Jing; Liu, Qi; Zhao, Wenpeng; Li, Zuoyong; Wang, Wenhao
Format: Journal Article
Language: English
Published: Switzerland, Frontiers Research Foundation / Frontiers Media S.A., 24.03.2022

Summary: Neural networks have played critical roles in many research fields. The recently proposed adversarial training (AT) can improve the generalization ability of neural networks by adding intentional perturbations during training, but it sometimes still fails to generate worst-case perturbations, resulting in limited improvement. Instead of designing a specific smoothness function and seeking an approximate solution, as existing AT methods do, we propose a new training methodology, named Generative AT (GAT), in this article for supervised and semi-supervised learning. The key idea of GAT is to formulate the learning task as a minimax game, in which the perturbation generator aims to yield the worst-case perturbations that maximize the deviation of the output distribution, while the target classifier aims to minimize both the impact of this perturbation and the prediction error. To solve this minimax optimization problem, a new adversarial loss function is constructed based on the cross-entropy measure. As a result, both the smoothness and the confidence of the model are greatly improved. Moreover, we develop a trajectory-preserving alternating update strategy to enable stable training of GAT. Extensive experiments on benchmark datasets demonstrate that the proposed GAT significantly outperforms state-of-the-art AT methods on supervised and semi-supervised learning tasks, especially when the number of labeled examples is small in semi-supervised learning.
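The abstract describes the idea rather than an implementation, but the minimax game it outlines can be sketched in code. The following is a minimal, illustrative PyTorch-style sketch under several assumptions: the network architectures, the KL-divergence measure of output-distribution deviation, and the naive one-step alternating updates are placeholders, not the authors' exact adversarial loss or their trajectory-preserving update strategy.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    # Hypothetical generator: maps an input to a small, bounded perturbation.
    def __init__(self, dim, eps=0.1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, dim))
        self.eps = eps

    def forward(self, x):
        return self.eps * torch.tanh(self.net(x))


class TargetClassifier(nn.Module):
    # Hypothetical classifier; any architecture could be substituted.
    def __init__(self, dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, num_classes))

    def forward(self, x):
        return self.net(x)


def output_deviation(clean_logits, adv_logits):
    # Deviation of the output distribution under perturbation, measured here
    # with a KL divergence (an assumption; the paper constructs its
    # adversarial loss from a cross-entropy measure).
    return F.kl_div(F.log_softmax(adv_logits, dim=1),
                    F.softmax(clean_logits, dim=1),
                    reduction="batchmean")


def train_step(gen, clf, opt_g, opt_c, x_lab, y_lab, x_unlab):
    x_all = torch.cat([x_lab, x_unlab], dim=0)

    # Generator step: maximize the output-distribution deviation
    # (i.e., minimize its negative) with the classifier held fixed.
    clean_logits = clf(x_all).detach()
    adv_logits = clf(x_all + gen(x_all))
    g_loss = -output_deviation(clean_logits, adv_logits)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # Classifier step: minimize the prediction error on labeled data plus the
    # deviation induced by the (now fixed) generator's perturbation.
    delta = gen(x_all).detach()
    c_loss = (F.cross_entropy(clf(x_lab), y_lab)
              + output_deviation(clf(x_all).detach(), clf(x_all + delta)))
    opt_c.zero_grad()
    c_loss.backward()
    opt_c.step()
    return g_loss.item(), c_loss.item()

In this sketch, opt_g and opt_c stand for ordinary optimizers (e.g., torch.optim.Adam over gen.parameters() and clf.parameters()), and train_step would be called once per mini-batch of labeled and unlabeled examples; the paper's trajectory-preserving alternating update strategy would replace this naive one-step alternation.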
Edited by: Song Deng, Nanjing University of Posts and Telecommunications, China
This article was submitted to Original Research Article, a section of the journal Frontiers in Neurorobotics
Reviewed by: Yi He, Old Dominion University, United States; Lina Yao, University of New South Wales, Australia
ISSN: 1662-5218
DOI: 10.3389/fnbot.2022.859610