Learning to Discover Multi-Class Attentional Regions for Multi-Label Image Recognition
Published in | IEEE Transactions on Image Processing, Vol. 30, pp. 5920–5932
---|---
Main Authors | Gao, Bin-Bin; Zhou, Hong-Yu
Format | Journal Article
Language | English
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2021
Subjects | Multi-label; multi-class; attentional region; two-stream; global to local; image classification; image recognition; object recognition
Online Access | https://doi.org/10.1109/TIP.2021.3088605
Abstract | Multi-label image recognition is a practical and challenging task compared to single-label image classification. However, previous works may be suboptimal because they rely on a large number of object proposals or on complex attentional region generation modules. In this paper, we propose a simple but efficient two-stream framework that recognizes multi-category objects from the global image down to local regions, similar to how human beings perceive objects. To bridge the gap between the global and local streams, we propose a multi-class attentional region module that aims to keep the number of attentional regions as small as possible while keeping their diversity as high as possible. Our method can efficiently and effectively recognize multi-class objects with an affordable computation cost and a parameter-free region localization module. On three multi-label image classification benchmarks, our method achieves new state-of-the-art results with a single model, using only image semantics without label dependency. In addition, the effectiveness of the proposed method is extensively demonstrated under different factors such as global pooling strategy, input size, and network architecture. Code has been made available at https://github.com/gaobb/MCAR. |
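The two-stream, global-to-local pipeline the abstract describes can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch sketch, not the authors' released implementation (that is at https://github.com/gaobb/MCAR): it assumes a shared ResNet-50 backbone for both streams, uses class activation maps as the parameter-free region localizer, and fuses global and local scores by element-wise max. The class name `TwoStreamSketch`, the top-k region count, and the 0.5 CAM threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed, not the official MCAR code) of the two-stream,
# global-to-local idea from the abstract: a global stream scores the whole
# image, class activation maps localize a few attentional regions without
# extra parameters, a local stream re-scores the crops, and the two sets
# of scores are fused.
import torch
import torch.nn.functional as F
import torchvision


class TwoStreamSketch(torch.nn.Module):
    def __init__(self, num_classes: int, topk: int = 4):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        # Drop the average-pool and FC head; keep the convolutional trunk.
        self.features = torch.nn.Sequential(*list(backbone.children())[:-2])
        # A 1x1 conv produces one activation map per class (the CAMs).
        self.classifier = torch.nn.Conv2d(2048, num_classes, kernel_size=1)
        self.topk = topk  # keep the number of attentional regions small

    def stream(self, x):
        fmap = self.features(x)                  # (B, 2048, H', W')
        cam = self.classifier(fmap)              # (B, num_classes, H', W')
        scores = F.adaptive_avg_pool2d(cam, 1).flatten(1)
        return torch.sigmoid(scores), cam

    def forward(self, image):
        g_scores, cam = self.stream(image)
        # Top-k scoring classes supply the attentional regions; thresholding
        # their CAMs is the parameter-free localization step.
        _, cls_idx = g_scores.topk(self.topk, dim=1)
        crops = []
        for b in range(image.size(0)):
            for c in cls_idx[b]:
                m = cam[b, c]
                m = (m - m.min()) / (m.max() - m.min() + 1e-6)
                ys, xs = torch.where(m >= 0.5)
                if ys.numel() == 0:              # degenerate map: fall back
                    crops.append(image[b])
                    continue
                # Map the feature-map bounding box back to pixel coordinates.
                sy = image.size(2) / m.size(0)
                sx = image.size(3) / m.size(1)
                y0, y1 = int(ys.min() * sy), int((ys.max() + 1) * sy)
                x0, x1 = int(xs.min() * sx), int((xs.max() + 1) * sx)
                crop = image[b:b + 1, :, y0:y1, x0:x1]
                crops.append(F.interpolate(crop, size=image.shape[2:])[0])
        # Local stream re-scores the upsampled regions with shared weights.
        l_scores, _ = self.stream(torch.stack(crops))
        l_scores = l_scores.view(image.size(0), self.topk, -1).max(dim=1).values
        # Fuse global and local predictions by element-wise max (assumed).
        return torch.maximum(g_scores, l_scores)
```

A forward pass on a batch of shape (B, 3, H, W) returns per-class probabilities of shape (B, num_classes). Sharing one backbone between the two streams and localizing regions directly from the classifier's activation maps keeps both the parameter count and the computation affordable, which is the property the abstract emphasizes.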
Author | Gao, Bin-Bin (Tencent YouTu Lab, Shenzhen, China; gaobb@lamda.nju.edu.cn; ORCID 0000-0003-2572-8156); Zhou, Hong-Yu (Department of Computer Science, The University of Hong Kong, Hong Kong; whuzhouhongyu@gmail.com)
CODEN | IIPRE4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
DOI | 10.1109/TIP.2021.3088605 |
Discipline | Applied Sciences; Engineering
EISSN | 1941-0042 |
EndPage | 5932 |
Genre | Original research
GrantInformation | Tencent (funder ID: 10.13039/100015803)
ISSN | 1057-7149
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-2572-8156 |
PageCount | 13 |
PublicationDate | 2021
PublicationPlace | New York |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationYear | 2021 |
Publisher | IEEE (The Institute of Electrical and Electronics Engineers, Inc.)
StartPage | 5920 |
SubjectTerms | attentional region; Computer architecture; global to local; Image classification; Image recognition; Modules; multi-class; Multi-label; Object recognition; Proposals; Reinforcement learning; Semantics; Streaming media; Task analysis; two-stream; Visualization
Title | Learning to Discover Multi-Class Attentional Regions for Multi-Label Image Recognition |
URI | https://ieeexplore.ieee.org/document/9466402 https://www.proquest.com/docview/2546692931 https://www.proquest.com/docview/2546602718 |
Volume | 30 |