A Simple Lifelong Learning Approach

Bibliographic Details
Published in: arXiv.org
Main Authors: Vogelstein, Joshua T; Dey, Jayanta; Helm, Hayden S; LeVine, Will; Mehta, Ronak D; Tomita, Tyler M; Xu, Haoyin; Geisa, Ali; Wang, Qingyang; van de Ven, Gido M; Gao, Chenyu; Yang, Weiwei; Tower, Bryan; Larson, Jonathan; White, Christopher M; Priebe, Carey E
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 11.06.2024
Summary: In lifelong learning, data are used to improve performance not only on the present task, but also on past and future (unencountered) tasks. While typical transfer learning algorithms can improve performance on future tasks, their performance on prior tasks degrades upon learning new tasks (called forgetting). Many recent approaches for continual or lifelong learning have attempted to maintain performance on old tasks given new tasks. But striving to avoid forgetting sets the goal unnecessarily low. The goal of lifelong learning should be to use data to improve performance on both future tasks (forward transfer) and past tasks (backward transfer). In this paper, we show that a simple approach -- representation ensembling -- demonstrates both forward and backward transfer in a variety of simulated and benchmark data scenarios, including tabular, vision (CIFAR-100, 5-dataset, Split Mini-Imagenet, and Food1k), and speech (spoken digit), in contrast to various reference algorithms, which typically failed to transfer forward, backward, or both. Moreover, our proposed approach can flexibly operate with or without a computational budget.
ISSN: 2331-8422
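
The summary describes representation ensembling only at a high level. The sketch below is one possible reading of that description, assuming all tasks share an input feature space: each task gets its own frozen representation (here a scikit-learn random forest, an illustrative choice rather than the paper's reference implementation), and predictions for any task average class-posterior "voters" built on top of every representation learned so far. The class name RepresentationEnsemble and the leaf-posterior voters are assumptions made for illustration.

```python
# Minimal sketch of representation ensembling for lifelong learning,
# based only on the high-level summary above. Random forests as frozen
# per-task representations and leaf-posterior voters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class RepresentationEnsemble:
    """One frozen representation per task; for every task, class-posterior
    voters are built on top of ALL representations seen so far and averaged."""

    def __init__(self, n_estimators=50):
        self.n_estimators = n_estimators
        self.encoders = []       # frozen per-task representations
        self.task_train = {}     # task_id -> (X, y), used to build voters

    def add_task(self, task_id, X, y):
        # Learn a new representation from the new task's data, then freeze it.
        enc = RandomForestClassifier(n_estimators=self.n_estimators).fit(X, y)
        self.encoders.append(enc)
        self.task_train[task_id] = (np.asarray(X), np.asarray(y))

    def _leaf_posteriors(self, enc, X_train, y_train, classes):
        # Voter for one (encoder, task) pair: per tree, estimate
        # P(class | leaf) from the target task's training points.
        leaves = enc.apply(X_train)                 # shape (n_samples, n_trees)
        voters = []
        for t in range(leaves.shape[1]):
            counts = {}
            for leaf, label in zip(leaves[:, t], y_train):
                c = counts.setdefault(leaf, np.zeros(len(classes)))
                c[np.searchsorted(classes, label)] += 1
            voters.append({leaf: c / c.sum() for leaf, c in counts.items()})
        return voters

    def predict(self, task_id, X):
        # Average posteriors over every representation's voter: old encoders
        # can help new tasks (forward transfer) and new encoders can sharpen
        # decisions on old tasks (backward transfer).
        X_train, y_train = self.task_train[task_id]
        classes = np.unique(y_train)
        uniform = np.full(len(classes), 1.0 / len(classes))
        total = np.zeros((len(X), len(classes)))
        for enc in self.encoders:
            voters = self._leaf_posteriors(enc, X_train, y_train, classes)
            leaves = enc.apply(X)                   # shape (n_test, n_trees)
            for t, voter in enumerate(voters):
                total += np.vstack([voter.get(leaf, uniform)
                                    for leaf in leaves[:, t]])
        return classes[np.argmax(total, axis=1)]
```

In this sketch, forward transfer comes from a new task building voters on top of previously frozen representations, and backward transfer comes from re-querying an old task after newer representations have been added; whether either yields a gain would depend on how related the tasks are.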