Meta-Learning-Based Incremental Few-Shot Object Detection

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 4, pp. 2158-2169
Main Authors Cheng, Meng; Wang, Hanli; Long, Yu
Format Journal Article
Language English
Published New York: IEEE, 01.04.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Abstract Recent years have witnessed meaningful progress in the task of few-shot object detection. However, most existing models are not capable of incremental learning with a few samples, i.e., the detector cannot detect novel-class objects using only a few samples of novel classes (without revisiting the original training samples) while maintaining its performance on base classes. This is largely due to catastrophic forgetting, a general phenomenon in few-shot learning whereby incorporating unseen information (e.g., novel-class objects) leads to a serious loss of previously learnt knowledge (e.g., base-class objects). In this paper, a new model is proposed for incremental few-shot object detection, which takes CenterNet as the fundamental framework and redesigns it with a novel meta-learning method so that the model adapts to unseen knowledge while overcoming forgetting to a great extent. Specifically, a meta-learner is trained with the base-class samples to provide the object locator of the proposed model with a good weight initialization, so that the proposed model can be fine-tuned easily with few novel-class samples. In addition, the filters correlated to base classes are preserved when fine-tuning the proposed model with the few samples of novel classes, which is a simple but effective solution to mitigate forgetting. Experiments on the benchmark MS COCO and PASCAL VOC datasets demonstrate that the proposed model outperforms state-of-the-art methods by a large margin in detection performance on base classes and all classes, while achieving the best performance when detecting novel-class objects in most cases. The project page can be found at https://mic.tongji.edu.cn/e6/d5/c9778a190165/page.htm.
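To make the abstract's two key mechanisms concrete, the sketch below illustrates (a) meta-training a locator head on base-class tasks to obtain a good weight initialization, and (b) fine-tuning on a few novel-class samples while preserving base-class filters by masking their gradients. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the LocatorHead module, the Reptile-style outer update, the MSE loss, and the channel-masking scheme are all hypothetical simplifications of the CenterNet-based method described in the paper.

```python
# Minimal, hypothetical sketch of the two ideas in the abstract.
# NOT the authors' code: LocatorHead, the Reptile-style meta-update,
# the MSE loss, and the gradient-masking scheme are illustrative
# stand-ins for the paper's CenterNet-based method.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class LocatorHead(nn.Module):
    """Stand-in for a CenterNet-style object locator (per-class heatmaps)."""

    def __init__(self, in_ch: int = 64, num_classes: int = 80):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, num_classes, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        return self.conv(feat)


def meta_train_step(head, tasks, inner_lr=1e-2, meta_lr=1e-3, inner_steps=5):
    """One Reptile-style outer step over sampled base-class tasks.

    Each task adapts a copy of the head for a few gradient steps; the
    initialization is then moved toward the adapted weights, yielding
    weights from which few-shot fine-tuning converges quickly.
    `tasks` yields (features, target_heatmaps) pairs.
    """
    init = copy.deepcopy(head.state_dict())
    for feat, target in tasks:
        head.load_state_dict(init)
        opt = torch.optim.SGD(head.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            loss = F.mse_loss(head(feat), target)  # the paper's loss differs
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Reptile update: init <- init + meta_lr * (adapted - init)
        adapted = head.state_dict()
        init = {k: v + meta_lr * (adapted[k] - v) for k, v in init.items()}
    head.load_state_dict(init)


def finetune_novel(head, feat, target, base_ids, lr=1e-3, steps=20):
    """Fine-tune on a few novel-class samples without forgetting.

    Gradients of the output filters belonging to base classes are zeroed,
    so base-class knowledge is preserved while novel-class filters adapt.
    """
    base = torch.zeros(head.conv.weight.shape[0], dtype=torch.bool)
    base[base_ids] = True  # mark base-class output channels
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(head(feat), target)
        opt.zero_grad()
        loss.backward()
        head.conv.weight.grad[base] = 0.0  # freeze base-class filters
        head.conv.bias.grad[base] = 0.0
        opt.step()
```

Zeroing the gradients of base-class output channels is one simple way to realize the filter-preservation idea; a full CenterNet head also predicts object sizes and center offsets, and the actual training objective, task sampling, and meta-update follow the paper rather than this simplification.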
Author Wang, Hanli
Cheng, Meng
Long, Yu
Author_xml – sequence: 1
  givenname: Meng
  orcidid: 0000-0003-1734-5550
  surname: Cheng
  fullname: Cheng, Meng
  email: chengmeng@tongji.edu.cn
  organization: Department of Computer Science and Technology, Tongji University, Shanghai, China
– sequence: 2
  givenname: Hanli
  orcidid: 0000-0002-9999-4871
  surname: Wang
  fullname: Wang, Hanli
  email: hanliwang@tongji.edu.cn
  organization: Department of Computer Science and Technology, Tongji University, Shanghai, China
– sequence: 3
  givenname: Yu
  surname: Long
  fullname: Long, Yu
  email: longyu@tongji.edu.cn
  organization: Department of Computer Science and Technology, Tongji University, Shanghai, China
CODEN ITCTEM
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TCSVT.2021.3088545
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional

Discipline Engineering
EISSN 1558-2205
EndPage 2169
ExternalDocumentID 10_1109_TCSVT_2021_3088545
9452164
Genre orig-research
GrantInformation_xml – fundername: Shanghai Municipal Science and Technology Major Project
  grantid: 2021SHZDZX0100
– fundername: Shanghai Innovation Action Project of Science and Technology
  grantid: 20511100700
– fundername: National Natural Science Foundation of China
  grantid: 61976159
  funderid: 10.13039/501100001809
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 4
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0003-1734-5550
0000-0002-9999-4871
PQID 2647425946
PQPubID 85433
PageCount 12
PublicationCentury 2000
PublicationDate 2022-04-01
PublicationDateYYYYMMDD 2022-04-01
PublicationDecade 2020
PublicationPlace New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 2158
SubjectTerms Adaptation models
Data models
Detectors
Feature extraction
Few-shot learning
incremental learning
Learning
meta-learning
Object detection
Object recognition
Task analysis
Training
Title Meta-Learning-Based Incremental Few-Shot Object Detection
URI https://ieeexplore.ieee.org/document/9452164
https://www.proquest.com/docview/2647425946
Volume 32