A comparative study of calibration methods for imbalanced class incremental learning

Bibliographic Details
Published in: Multimedia Tools and Applications, Vol. 81, No. 14, pp. 19237-19256
Main Authors: Aggarwal, Umang; Popescu, Adrian; Belouadah, Eden; Hudelot, Celine
Format: Journal Article
Language: English
Published: New York: Springer US, 01.06.2022
Summary: Deep learning approaches are successful in a wide range of AI problems, in particular visual recognition tasks. However, open problems remain, among them the capacity to handle streams of visual information and the management of class imbalance in datasets. Existing research approaches these two problems separately, although they co-occur in real-world applications. Here, we study the problem of learning incrementally from imbalanced datasets. We focus on algorithms that keep the deep model's complexity constant and use a bounded memory to store exemplars of old classes across incremental states. Since memory is bounded, old classes are learned with fewer images than new classes, and an imbalance due to incremental learning is added to the initial dataset imbalance. A score prediction bias in favor of new classes appears, and we evaluate a comprehensive set of score calibration methods to reduce it. Evaluation is carried out with three datasets, using two dataset imbalance configurations and three bounded memory sizes. Results show that most calibration methods have a beneficial effect and that they are most useful for lower bounded memory sizes, which are the most interesting in practice. As a secondary contribution, we remove the usual distillation component from the loss function of incremental learning algorithms. We show that simpler vanilla fine-tuning is a stronger backbone for imbalanced incremental learning algorithms.
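To make the idea of post-hoc score calibration concrete, the sketch below rescales the scores of past classes so that their mean matches that of new classes, which offsets the bias in favor of classes trained with more images. This is a minimal illustrative sketch only: the rescaling rule, the function name calibrate_scores, and the NumPy setting are assumptions for illustration, not the specific methods compared in the paper.

    import numpy as np

    def calibrate_scores(logits, old_idx, new_idx):
        # logits: (n_samples, n_classes) raw class scores from the network.
        # old_idx / new_idx: column indices of past and newly learned classes.
        # Illustrative rule (an assumption, not the paper's): boost past-class
        # scores by the ratio of mean activations so the two groups compete
        # on an even footing at prediction time. Assumes positive scores;
        # a near-zero old-class mean would make the ratio unstable.
        old_mean = logits[:, old_idx].mean()
        new_mean = logits[:, new_idx].mean()
        calibrated = logits.copy()
        calibrated[:, old_idx] *= new_mean / old_mean
        return calibrated

    # Toy usage: scores for 4 samples over 2 old + 2 new classes.
    rng = np.random.default_rng(0)
    scores = rng.uniform(0.0, 1.0, size=(4, 4))
    print(calibrate_scores(scores, old_idx=[0, 1], new_idx=[2, 3]))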
ISSN: 1380-7501; 1573-7721
DOI: 10.1007/s11042-020-10485-5