Rich Embedding Features for One-Shot Semantic Segmentation

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 11, pp. 6484-6493
Main Authors: Zhang, Xiaolin; Wei, Yunchao; Li, Zhao; Yan, Chenggang; Yang, Yi
Format: Journal Article
Language: English
Published: Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.11.2022
Abstract: One-shot semantic segmentation poses the challenging task of segmenting object regions from unseen categories with only one annotated example as guidance. Thus, how to effectively construct robust feature representations from the guidance image is crucial to the success of one-shot semantic segmentation. To this end, we propose in this article a simple, yet effective approach named rich embedding features (REFs). Given a reference image accompanied by its annotated mask, our REF constructs rich embedding features of the support object from three perspectives: 1) global embedding to capture the general characteristics; 2) peak embedding to capture the most discriminative information; and 3) adaptive embedding to capture the internal long-range dependencies. By combining these informative features, we can easily harvest sufficient and rich guidance even from a single reference image. In addition to REF, we further propose a simple depth-priority context module to obtain useful contextual cues from the query image. This successfully raises the performance of one-shot semantic segmentation to a new level. We conduct experiments on the pattern analysis, statistical modelling and computational learning (Pascal) visual object classes (VOC) 2012 and common objects in context (COCO) datasets to demonstrate the effectiveness of our approach.
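The abstract names the three support embeddings only at a high level. As a minimal sketch, assuming masked average pooling for the global view, masked max pooling for the peak view, and self-attention over foreground features for the adaptive view, the PyTorch code below illustrates one plausible realization; every tensor shape, function name, and design choice here is an assumption made for illustration, not the authors' published implementation.

```python
# Illustrative sketch only; shapes and helper names are assumptions.
import torch
import torch.nn.functional as F

def global_embedding(feat, mask):
    """Masked global average pooling: the object's general characteristics.
    feat: (B, C, H, W) support features; mask: (B, 1, H, W) binary object mask."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    area = mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return (feat * mask).sum(dim=(2, 3)) / area          # (B, C)

def peak_embedding(feat, mask):
    """Masked max pooling: the most discriminative responses inside the mask."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    masked = feat.masked_fill(mask == 0, float("-inf"))
    return masked.flatten(2).max(dim=2).values           # (B, C)

def adaptive_embedding(feat, mask):
    """Self-attention over positions to capture internal long-range
    dependencies, then average the attended foreground features."""
    B, C, H, W = feat.shape
    mask = F.interpolate(mask, size=(H, W), mode="nearest")
    tokens = (feat * mask).flatten(2).transpose(1, 2)    # (B, HW, C)
    attn = torch.softmax(tokens @ tokens.transpose(1, 2) / C ** 0.5, dim=-1)
    attended = attn @ tokens                             # (B, HW, C)
    fg = mask.flatten(2).transpose(1, 2)                 # (B, HW, 1)
    area = fg.sum(dim=1).clamp(min=1e-6)                 # (B, 1)
    return (attended * fg).sum(dim=1) / area             # (B, C)

# Combine the three views into one rich support embedding.
feat = torch.randn(2, 256, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.5).float()
rich = torch.cat([global_embedding(feat, mask),
                  peak_embedding(feat, mask),
                  adaptive_embedding(feat, mask)], dim=1)  # (B, 3C)
```

Concatenating the three views yields a richer support descriptor than any single pooled vector, which matches the abstract's claim that combining them harvests sufficient guidance from a single reference image.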
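To show how a support embedding of this kind can guide segmentation of the query image, the sketch below computes a cosine-similarity prior between the embedding and every query location. This is a common pattern in one-shot segmentation generally, not a reconstruction of the paper's depth-priority context module, whose exact design is not spelled out in this record; guidance_prior and all shapes are hypothetical.

```python
# Hedged illustration, not the authors' code.
import torch
import torch.nn.functional as F

def guidance_prior(query_feat, support_emb):
    """query_feat: (B, C, H, W); support_emb: (B, C). Returns a (B, 1, H, W)
    map scoring how much each query location resembles the support object."""
    q = F.normalize(query_feat, dim=1)
    s = F.normalize(support_emb, dim=1)[:, :, None, None]
    return (q * s).sum(dim=1, keepdim=True)   # cosine similarity in [-1, 1]

query_feat = torch.randn(2, 256, 32, 32)
support_emb = torch.randn(2, 256)
prior = guidance_prior(query_feat, support_emb)
print(prior.shape)  # torch.Size([2, 1, 32, 32])
```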
Author Details:
1. Zhang, Xiaolin (xiaolin.zhang-3@student.uts.edu.au), Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia
2. Wei, Yunchao (ORCID 0000-0002-2812-8781; wychao1987@gmail.com), Institute of Information Science, Beijing Jiaotong University, Beijing, China
3. Li, Zhao (liz@sdas.org), Shandong Computer Science Center (National Supercomputer Center in Jinan), Shandong Artificial Intelligence Institute, Jinan, China
4. Yan, Chenggang (ORCID 0000-0003-1204-0512; cgyan@hdu.edu.cn), Institute of Information and Control, Hangzhou Dianzi University, Hangzhou, China
5. Yang, Yi (ORCID 0000-0002-0512-880X; yee.i.yang@gmail.com), CCAI, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
BookMark eNp9kLtOAzEQRS0E4v0D0KxEQ7PB7wcdQgSQokQiINFZjncMRlkv2JuCv2chiCIF08wU54xm7gHaTl0ChE4IHhGCzcXjdDqZjyimZMSwJtKwLbRPiaQ1ZVpv_83qeQ8dl_KGh5JYSG520R7jRBLK-T66fIj-tbppF9A0Mb1UY3D9KkOpQperWYJ6_tr11Rxal_roh-GlhdS7PnbpCO0Etyxw_NsP0dP45vH6rp7Mbu-vrya1Z4b3tXSSOuUDpgF4UAqC1osgmDMNaOGYWUgusAOmjNcL6oURZoBEIJwa1TTsEJ2v977n7mMFpbdtLB6WS5egWxVLBedaY0XFgJ5toG_dKqfhOksVNZIpgdlA6TXlc1dKhmB9XL_UZxeXlmD7HbH9idh-R2x_Ix5UuqG-59i6_Pm_dLqWIgD8CYZLxoxmX3Yfhhw
CODEN: ITNNAL
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI: 10.1109/TNNLS.2021.3081693
Discipline: Computer Science
EISSN: 2162-2388
Genre: Original research
ISSN: 2162-237X
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
PMID: 34161244
Page Count: 10
Publication Date: 2022-11-01
Publication Title Abbreviation: TNNLS
Subject Terms: Computer applications; Context; Deep learning; Embedding; Feature extraction; few shot segmentation; Image segmentation; object segmentation; Pattern analysis; Prototypes; Pulse modulation; Semantic segmentation; Semantics; Siamese network; Support vector machines; Task analysis; Visual discrimination learning
URI: https://ieeexplore.ieee.org/document/9463398
https://www.proquest.com/docview/2729637503
https://www.proquest.com/docview/2544880725