A comprehensive study of class incremental learning algorithms for visual tasks

Bibliographic Details
Published in: Neural networks, Vol. 135, pp. 38-54
Main Authors: Belouadah, Eden; Popescu, Adrian; Kanellos, Ioannis
Format: Journal Article (Review)
Language: English
Published: United States: Elsevier Ltd, 01.03.2021
ISSN: 0893-6080
EISSN: 1879-2782
DOI: 10.1016/j.neunet.2020.12.003

Cover

Loading…

Abstract

The ability of artificial agents to increment their capabilities when confronted with new data is an open challenge in artificial intelligence. The main difficulty in such cases is catastrophic forgetting, i.e., the tendency of neural networks to underfit past data when new data are ingested. A first group of approaches tackles forgetting by increasing deep model capacity to accommodate new knowledge. A second group fixes the deep model size and introduces a mechanism whose objective is to ensure a good compromise between stability and plasticity of the model. While the first group of algorithms has been compared thoroughly, this is not the case for methods which exploit a fixed-size model. Here, we focus on the latter, place them in a common conceptual and experimental framework, and propose the following contributions: (1) define six desirable properties of incremental learning algorithms and analyze existing methods according to these properties, (2) introduce a unified formalization of the class-incremental learning problem, (3) propose a common evaluation framework which is more thorough than existing ones in terms of number of datasets, size of datasets, size of bounded memory, and number of incremental states, (4) investigate the usefulness of herding for past-exemplar selection, (5) provide experimental evidence that it is possible to obtain competitive performance without the use of knowledge distillation to tackle catastrophic forgetting, and (6) facilitate reproducibility by integrating all tested methods in a common open-source repository. The main experimental finding is that none of the existing algorithms achieves the best results in all evaluated settings. Important differences arise notably depending on whether a bounded memory of past classes is allowed or not.

Highlights
• Incremental learning algorithms are improved by casting the problem as an imbalanced learning case.
• Competitive performance can be achieved without the widely used knowledge distillation component.
• Herding-based exemplar selection for past classes clearly outperforms random selection.
• Fine-tuning based methods are better when a memory of the past is allowed.
• Fixed-representation based methods are better without a memory of the past.
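
Contribution (4) and the third highlight concern herding for past-exemplar selection. As a reference point, the sketch below is an illustrative NumPy reimplementation of the greedy herding procedure popularized by iCaRL (Rebuffi et al., 2017), one of the selection strategies the study compares against random sampling; the function name and implementation details are assumptions here, not the authors' code (their implementations are gathered in the open-source repository mentioned in the abstract).

```python
import numpy as np

def herding_selection(features, m):
    """Greedily pick m exemplar indices whose running mean best
    approximates the class mean in feature space (iCaRL-style herding)."""
    # L2-normalize so that mean matching happens on the unit sphere
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    class_mean = feats.mean(axis=0)
    selected = []
    running_sum = np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # candidate running mean if each sample were chosen next
        candidate_means = (running_sum + feats) / k
        dists = np.linalg.norm(class_mean - candidate_means, axis=1)
        if selected:
            dists[selected] = np.inf  # never pick the same sample twice
        idx = int(np.argmin(dists))
        selected.append(idx)
        running_sum += feats[idx]
    return selected  # ordered: any prefix is itself a good exemplar set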
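
Contribution (5) and the second highlight state that competitive performance is achievable without the widely used knowledge distillation component. For context, the PyTorch sketch below shows the generic shape of that component in class-incremental training (LwF/iCaRL-style): a cross-entropy term over all classes plus a temperature-scaled term that keeps the new model's outputs on old classes close to those of the previous model. The function name, the temperature T, and the weight lam are illustrative assumptions, not the exact loss of any single method in the study.

```python
import torch.nn.functional as F

def incremental_loss(logits, labels, old_logits, n_old, T=2.0, lam=1.0):
    """Cross-entropy on all classes plus distillation on the old classes.

    logits     : current model outputs, shape (batch, n_old + n_new)
    old_logits : previous (frozen) model outputs on the same batch
    n_old      : number of classes seen before the current state
    """
    ce = F.cross_entropy(logits, labels)
    # soften both models' predictions, restricted to the old classes
    log_p_new = F.log_softmax(logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits[:, :n_old] / T, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
    return ce + lam * kd
```

Methods that drop the kd term must counter the imbalance between past and new classes in another way, e.g., by rectifying classifier weights or prediction scores, which is the imbalanced-learning view taken in the first highlight.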

Authors
– Eden Belouadah (ORCID: 0000-0002-3418-1546), eden.belouadah@cea.fr, Université Paris-Saclay, CEA, List, F-91120 Palaiseau, France
– Adrian Popescu (ORCID: 0000-0002-8099-824X), adrian.popescu@cea.fr, Université Paris-Saclay, CEA, List, F-91120 Palaiseau, France
– Ioannis Kanellos (ORCID: 0000-0001-5323-1601), ioannis.kanellos@imt-atlantique.fr, IMT Atlantique, Computer Science Department, CS 83818, F-29238 Brest Cedex 3, France

Keywords: Catastrophic forgetting; Incremental learning; Imbalanced learning; Convolutional neural networks; Image classification
License: Copyright © 2020 Elsevier Ltd. All rights reserved. Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc)
Open access: https://hal.science/hal-03493944
PMID: 33341513
Subject Terms: Algorithms; Artificial Intelligence - trends; Catastrophic forgetting; Computer Science; Convolutional neural networks; Humans; Image classification; Imbalanced learning; Incremental learning; Memory - physiology; Neural Networks, Computer; Psychomotor Performance - physiology; Reproducibility of Results; Visual Perception - physiology