In Defense of the Learning Without Forgetting for Task Incremental Learning
Main Authors | , |
---|---|
Format | Journal Article |
Language | English |
Published | 26.07.2021 |
Summary: Catastrophic forgetting is one of the major challenges on the road for continual learning systems, which are presented with an on-line stream of tasks. The field has attracted considerable interest, and a diverse set of methods has been presented for overcoming this challenge. Learning without Forgetting (LwF) is one of the earliest and most frequently cited methods. It has the advantages of not requiring the storage of samples from previous tasks, of implementation simplicity, and of being well-grounded in knowledge distillation. However, the prevailing view is that while it shows a relatively small amount of forgetting when only two tasks are introduced, it fails to scale to long sequences of tasks. This paper challenges this view by showing that, with the right architecture and a standard set of augmentations, the results obtained by LwF surpass the latest algorithms for the task-incremental scenario. This improved performance is demonstrated by an extensive set of experiments on CIFAR-100 and Tiny-ImageNet, where it is also shown that other methods cannot benefit as much from similar improvements.
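
For context on the mechanism the summary refers to, below is a minimal sketch of an LwF-style objective: a standard cross-entropy loss on the new task plus a knowledge-distillation term that keeps the current network's outputs on the previous tasks' heads close to those of a frozen copy of the model saved before training on the new task, which is why no samples from earlier tasks need to be stored. The KL-divergence form of the distillation term, the temperature `T`, and the weight `lam` are common illustrative assumptions, not the paper's exact formulation.

```python
# Minimal LwF-style loss sketch in PyTorch (illustrative, not the
# paper's exact formulation; T and lam are assumed hyperparameters).
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, labels, old_head_logits, frozen_old_logits,
             T: float = 2.0, lam: float = 1.0) -> torch.Tensor:
    """Cross-entropy on the new task plus distillation toward the soft
    targets of a frozen pre-task model on the old tasks' heads."""
    ce = F.cross_entropy(new_logits, labels)
    # Soften both output distributions with temperature T and penalize
    # their divergence; T*T rescales gradients as in standard distillation.
    kd = F.kl_div(F.log_softmax(old_head_logits / T, dim=1),
                  F.softmax(frozen_old_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return ce + lam * kd

# Example with random tensors standing in for a batch of 8 samples,
# a 10-class new task, and 100 classes across previous tasks:
new_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
old_head_logits = torch.randn(8, 100)    # current model, old-task heads
frozen_old_logits = torch.randn(8, 100)  # frozen pre-task model
print(lwf_loss(new_logits, labels, old_head_logits, frozen_old_logits))
```

During training on a new task, the frozen logits are computed once per batch with the saved copy of the model under `torch.no_grad()`, so only the current network receives gradients.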
DOI: 10.48550/arxiv.2107.12304