LayerCAM: Exploring Hierarchical Class Activation Maps for Localization
Published in | IEEE Transactions on Image Processing, Vol. 30, pp. 5875–5888 |
Main Authors | Jiang, Peng-Tao; Zhang, Chang-Bin; Hou, Qibin; Cheng, Ming-Ming; Wei, Yunchao |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 2021 |
Subjects | Class activation maps; Convolution; Feature maps; Image segmentation; Localization; Location awareness; Pixels; Reliability; Semantic segmentation; Semantics; Spatial resolution; Task analysis; Weakly-supervised object localization |
Abstract | Class activation maps are generated from the final convolutional layer of a CNN. They can highlight discriminative object regions for the class of interest, and these discovered object regions have been widely used for weakly-supervised tasks. However, due to the small spatial resolution of the final convolutional layer, such class activation maps often locate only coarse regions of the target objects, limiting the performance of weakly-supervised tasks that need pixel-accurate object locations. We therefore aim to generate finer-grained object localization information from class activation maps so that target objects can be located more accurately. In this paper, by rethinking the relationships between feature maps and their corresponding gradients, we propose a simple yet effective method called LayerCAM. It can produce reliable class activation maps for different layers of a CNN, which enables us to collect object localization information from coarse (rough spatial localization) to fine (precise fine-grained details) levels. We further integrate these maps into a high-quality class activation map in which object-related pixels are better highlighted. To evaluate the quality of the class activation maps produced by LayerCAM, we apply them to weakly-supervised object localization and semantic segmentation. Experiments demonstrate that the class activation maps generated by our method are more effective and reliable than those produced by existing attention methods. The code will be made publicly available. |
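The abstract describes the core idea, reusing the gradients of the class score with respect to intermediate feature maps as location-wise weights and fusing the resulting maps from several layers, without giving the exact formulation. The PyTorch sketch below illustrates one way such layer-wise, gradient-weighted activation maps could be computed and fused; the chosen layer indices, the positive-gradient (ReLU) weighting, the per-layer normalization, and the max-based fusion are illustrative assumptions, not necessarily the paper's exact method.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Sketch of layer-wise, gradient-weighted class activation maps.
# Assumptions (not taken from the abstract): positive gradients serve as
# element-wise weights, each layer's map is min-max normalized, and maps
# from several layers are fused by an element-wise maximum after upsampling.

model = models.vgg16().eval()          # stand-in backbone (load trained weights in practice)
image = torch.randn(1, 3, 224, 224)    # stand-in input image

# Hook a few layers of model.features, from shallow (fine) to deep (coarse).
layer_ids = [8, 15, 22, 29]            # indices chosen purely for illustration
acts, grads = {}, {}
for i in layer_ids:
    model.features[i].register_forward_hook(
        lambda m, inp, out, i=i: acts.__setitem__(i, out))
    model.features[i].register_full_backward_hook(
        lambda m, gin, gout, i=i: grads.__setitem__(i, gout[0]))

logits = model(image)
target_class = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, target_class].backward()

cams = []
for i in layer_ids:
    A, G = acts[i], grads[i]                                # feature maps and their gradients
    cam = F.relu((F.relu(G) * A).sum(dim=1, keepdim=True))  # weight locations, collapse channels
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
    cams.append(cam)

fused = torch.stack(cams).max(dim=0).values  # coarse-to-fine fusion of the per-layer maps
```

In this kind of scheme, shallow layers tend to contribute fine object details while deep layers contribute rough but semantically stronger localization, which is why combining them can yield a sharper overall map.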
Author | Jiang, Peng-Tao; Zhang, Chang-Bin; Hou, Qibin; Cheng, Ming-Ming; Wei, Yunchao |
Author details |
– Jiang, Peng-Tao (ORCID 0000-0002-1786-4943), TKLNDST, CS, Nankai University, Tianjin, China (email: cmm@nankai.edu.cn)
– Zhang, Chang-Bin (ORCID 0000-0003-0043-8240), TKLNDST, CS, Nankai University, Tianjin, China
– Hou, Qibin (ORCID 0000-0002-8388-8708), Department of Electrical and Computer Engineering, NUS, Singapore
– Cheng, Ming-Ming (ORCID 0000-0001-5550-8758), TKLNDST, CS, Nankai University, Tianjin, China
– Wei, Yunchao (ORCID 0000-0002-2812-8781), Institute of Information Science, Beijing Jiaotong University, Beijing, China |
CODEN | IIPRE4 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021 |
DOI | 10.1109/TIP.2021.3089943 |
Discipline | Applied Sciences; Engineering |
EISSN | 1941-0042 |
EndPage | 5888 |
Genre | orig-research |
GrantInformation |
– NSFC, Grant 61922046 (funder ID 10.13039/501100001809)
– Science and Technology Innovation Project from Chinese Ministry of Education
– Fundamental Research Funds for the Central Universities (Nankai University), Grant 63213090 (funder ID 10.13039/501100012226)
– National Key Research and Development Program of China, Grant 2018AAA0100400 (funder ID 10.13039/501100012166) |
ISSN | 1057-7149 (print); 1941-0042 (electronic) |
IsPeerReviewed | true |
IsScholarly | true |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-5550-8758 0000-0002-8388-8708 0000-0002-2812-8781 0000-0003-0043-8240 0000-0002-1786-4943 |
PMID | 34156941 |
PQID | 2546693684 |
PQPubID | 85429 |
PageCount | 14 |
PublicationPlace | New York |
PublicationTitle | IEEE Transactions on Image Processing |
PublicationTitleAbbrev | TIP |
PublicationYear | 2021 |
Publisher | IEEE (The Institute of Electrical and Electronics Engineers, Inc.) |
StartPage | 5875 |
SubjectTerms | class activation maps; Convolution; Feature maps; Image segmentation; Localization; Location awareness; Pixels; Reliability; Semantic segmentation; Semantics; Spatial resolution; Task analysis; Weakly-supervised object localization |
Title | LayerCAM: Exploring Hierarchical Class Activation Maps for Localization |
URI | https://ieeexplore.ieee.org/document/9462463 https://www.proquest.com/docview/2546693684 https://www.proquest.com/docview/2544461842 |
Volume | 30 |