Few-Shot Learning Meets Transformer: Unified Query-Support Transformers for Few-Shot Classification
Few-shot classification (FSL), which aims to identify unseen classes from very limited samples, has attracted increasing attention. Usually, it is formulated as a metric learning problem. The core issue of few-shot classification is how to learn (1) consistent representations for images in both...
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 12, pp. 7789-7802 |
Main Authors | Wang, Xixi; Wang, Xiao; Jiang, Bo; Luo, Bin |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2023 |
Subjects | Classification; Computational modeling; Deep Learning; Feature extraction; Few-Shot Learning; Image representation; Learning; Measurement; Metric Learning; Queries; Representation learning; Representations; Task analysis; Transformer; Transformers |
Online Access | https://doi.org/10.1109/TCSVT.2023.3282777 |
Abstract | Few-shot classification (FSL), which aims to identify unseen classes from very limited samples, has attracted increasing attention. Usually, it is formulated as a metric learning problem. The core issue of few-shot classification is how to learn (1) consistent representations for images in both the support and query sets and (2) an effective metric between support and query images. In this paper, we show that the two challenges can be modeled simultaneously via a unified Query-Support TransFormer (QSFormer) model. Specifically, the proposed QSFormer involves a global query-support sample Transformer (sampleFormer) branch and a local patch Transformer (patchFormer) branch. sampleFormer aims to capture the dependence of samples across the support and query sets for image representation. It adopts an Encoder, a QS-Decoder and Cross-Attention to model, respectively, the support representation, the query representation and the metric for the few-shot classification task. As a complement to this global branch, the local patchFormer extracts a structural representation for each image by capturing the long-range dependence of local image patches. In addition, we introduce a novel Cross-scale Interactive Feature Extractor (CIFE) that extracts and fuses CNN features at different scales, serving as an effective backbone for the proposed method. We integrate these components into a unified framework and train it end-to-end. Extensive experiments on four popular datasets validate the superiority and effectiveness of the proposed QSFormer. |
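The abstract only names the components, so the following is a minimal, hypothetical PyTorch sketch of the central sampleFormer idea: a Transformer encoder embeds the support samples, a decoder lets query samples cross-attend to them, and similarity between the refined query embeddings and class prototypes serves as the few-shot metric. Layer counts, dimensions, the prototype/cosine metric head, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a query-support Transformer metric branch
# (sampleFormer-style); sizes and the cosine/prototype head are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuerySupportFormer(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        dec = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # Encoder models the support set; decoder cross-attends queries to it.
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)
        self.decoder = nn.TransformerDecoder(dec, num_layers=layers)

    def forward(self, support: torch.Tensor, query: torch.Tensor,
                n_way: int, k_shot: int) -> torch.Tensor:
        # support: (n_way * k_shot, dim), query: (n_query, dim) backbone features
        s = self.encoder(support.unsqueeze(0))               # (1, N*K, dim)
        q = self.decoder(query.unsqueeze(0), s).squeeze(0)   # queries attend to support
        protos = s.squeeze(0).view(n_way, k_shot, -1).mean(dim=1)  # (N, dim)
        # Cosine similarity between refined queries and prototypes as the metric.
        return F.cosine_similarity(q.unsqueeze(1), protos.unsqueeze(0), dim=-1)


# Toy 5-way 1-shot episode with random features standing in for CNN output.
model = QuerySupportFormer()
support = torch.randn(5, 64)     # 5 classes x 1 shot each
query = torch.randn(15, 64)      # 15 query images
logits = model(support, query, n_way=5, k_shot=1)
print(logits.shape)              # torch.Size([15, 5])
```

In a 5-way 1-shot episode like the one above, the returned logits score each of the 15 queries against the 5 classes; in the paper this global branch is further combined with the local patchFormer and the CIFE backbone.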
Author | Wang, Xixi; Wang, Xiao; Jiang, Bo; Luo, Bin |
Author_xml | 1. Wang, Xixi (ORCID 0000-0001-8510-0964), School of Computer Science and Technology, Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University, Hefei, China; 2. Wang, Xiao, School of Computer Science and Technology, Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University, Hefei, China; 3. Jiang, Bo (ORCID 0000-0002-6238-1596), School of Computer Science and Technology, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China; 4. Luo, Bin (ORCID 0000-0002-1414-3307), School of Computer Science and Technology, Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, Anhui University, Hefei, China |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TCSVT.2023.3282777 |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 7802
Genre | orig-research |
GrantInformation | Anhui Provincial Key Research and Development Program (2022i01020014); National Natural Science Foundation of China (62076004, 62102205; funder ID 10.13039/501100001809); Natural Science Foundation of Anhui Province (2108085Y23) |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 12 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0002-1414-3307 0000-0002-6238-1596 0000-0001-8510-0964 0000-0001-6117-6745 |
PageCount | 14
PublicationDate | 2023-12-01 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2023 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 7789
SubjectTerms | Classification; Computational modeling; Deep Learning; Feature extraction; Few-Shot Learning; Image representation; Learning; Measurement; Metric Learning; Queries; Representation learning; Representations; Task analysis; Transformer; Transformers |
Title | Few-Shot Learning Meets Transformer: Unified Query-Support Transformers for Few-Shot Classification |
URI | https://ieeexplore.ieee.org/document/10144072 https://www.proquest.com/docview/2899465791 |
Volume | 33 |