Prototype Augmentation and Self-Supervision for Incremental Learning

Bibliographic Details
Published in: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5867-5876
Main Authors: Zhu, Fei; Zhang, Xu-Yao; Wang, Chuang; Yin, Fei; Liu, Cheng-Lin
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2021
Summary: Despite impressive performance on many individual tasks, deep neural networks suffer from catastrophic forgetting when learning new tasks incrementally. Recently, various incremental learning methods have been proposed, and some achieve acceptable performance by relying on stored data or complex generative models. However, storing data from previous tasks is limited by memory and privacy concerns, and generative models are usually unstable and inefficient to train. In this paper, we propose a simple non-exemplar-based method named PASS to address the catastrophic forgetting problem in incremental learning. On the one hand, we memorize one class-representative prototype for each old class and adopt prototype augmentation (protoAug) in the deep feature space to maintain the decision boundaries of previous tasks. On the other hand, we employ self-supervised learning (SSL) to learn more generalizable and transferable features for other tasks, demonstrating the effectiveness of SSL in incremental learning. Experimental results on benchmark datasets show that our approach significantly outperforms non-exemplar-based methods and achieves performance comparable to exemplar-based approaches.
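Since the summary describes protoAug only at a high level, the following PyTorch sketch illustrates the general idea under stated assumptions: prototypes are per-class mean feature vectors, the augmentation is Gaussian noise added in the deep feature space, and the helper names (compute_prototypes, proto_aug_loss) and the scalar radius are hypothetical, not the authors' reference implementation.

    import torch
    import torch.nn.functional as F

    def compute_prototypes(feature_extractor, loader, num_classes, feat_dim, device):
        # One prototype per class: the mean deep-feature vector of that class.
        sums = torch.zeros(num_classes, feat_dim, device=device)
        counts = torch.zeros(num_classes, device=device)
        feature_extractor.eval()
        with torch.no_grad():
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                f = feature_extractor(x)  # (batch, feat_dim)
                sums.index_add_(0, y, f)
                counts.index_add_(0, y, torch.ones_like(y, dtype=torch.float))
        return sums / counts.clamp(min=1).unsqueeze(1)

    def proto_aug_loss(classifier, prototypes, radius, batch_size):
        # Synthesize old-class pseudo-features by perturbing stored prototypes
        # in the deep feature space, then penalize misclassifying them.
        device = prototypes.device
        labels = torch.randint(0, prototypes.size(0), (batch_size,), device=device)
        noise = torch.randn(batch_size, prototypes.size(1), device=device)
        pseudo = prototypes[labels] + radius * noise
        return F.cross_entropy(classifier(pseudo), labels)

    # During training on a new task (sketch), the protoAug term is added to the
    # usual classification loss so old decision boundaries stay supported:
    #   loss = F.cross_entropy(model.fc(model.features(x)), y) \
    #        + lam * proto_aug_loss(model.fc, old_prototypes, radius, x.size(0))

The SSL component is not sketched here; the summary states only that self-supervised learning is employed to learn more generalizable and transferable features, without detailing the pretext task.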
ISSN: 2575-7075
DOI: 10.1109/CVPR46437.2021.00581