FIMF Score-CAM: Fast Score-CAM based on local multi-feature integration for visual interpretation of CNNs

Bibliographic Details
Published in: IET Image Processing, Vol. 17, No. 3, pp. 761–772
Main Authors: Li, Jing; Zhang, Dongbo; Meng, Bumin; Li, Yongxing; Luo, Lufeng
Format: Journal Article
Language: English
Published: Wiley, 01.02.2023
Subjects: class activation mapping; computer vision; deep network; model interpretation
Online Access: Get full text

Abstract Model interpretability is an active research topic in computer vision. Score-CAM is an interpretable class activation mapping (CAM) method with good class discrimination and gradient-free computation, and is a representative work in this field. However, it suffers from long computation time and incomplete heatmap coverage. This paper therefore proposes an improved Score-CAM method named FIMF Score-CAM, which rapidly integrates multiple features. Its contribution is twofold. First, the weights of the feature maps are computed directly with a feature template; unlike Score-CAM, which requires one forward pass per feature map, this needs only a single convolutional calculation and thus greatly reduces computation time. Second, the feature map used to generate the heatmap integrates multiple semantic features of the local space, so the heatmap covers the object of interest more completely and is easier to interpret. FIMF Score-CAM outperforms existing mainstream models on the visual quality and fairness indicators of decision interpretation, providing more complete explanations of object classes while being faster to compute.
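For context on the computational cost the abstract refers to: the original Score-CAM weights each activation map by the class score the model assigns to the input masked with that map, which requires one forward pass per channel. The following is a minimal NumPy sketch of that baseline (not the paper's FIMF method); `model`, the precomputed `activations`, and the nearest-neighbour upsampling are illustrative assumptions only.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def score_cam(model, image, activations, target_class):
    """Baseline Score-CAM sketch: weight each activation map by the
    class score of the input masked by that (upsampled, normalised)
    map, then combine with ReLU. One forward pass per map -- the
    cost FIMF Score-CAM is designed to avoid."""
    h, w = image.shape[:2]
    weights = []
    for act in activations:                    # act: (ah, aw)
        # nearest-neighbour upsampling via kron (illustrative only)
        up = np.kron(act, np.ones((h // act.shape[0], w // act.shape[1])))
        rng = up.max() - up.min()
        mask = (up - up.min()) / rng if rng > 0 else np.zeros_like(up)
        probs = softmax(model(image * mask[..., None]))
        weights.append(probs[target_class])
    cam = np.tensordot(np.array(weights), activations, axes=1)
    return np.maximum(cam, 0.0)                # ReLU keeps positive evidence
```

With K activation maps this makes K masked forward passes; the paper's claimed speedup comes from replacing this loop with a single convolution against a feature template.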
Authors
– Li, Jing (Xiangtan University; ORCID: 0000-0003-4002-2311)
– Zhang, Dongbo (Xiangtan University)
– Meng, Bumin (Xiangtan University; ORCID: 0000-0002-1266-6913; email: mengbm@163.com)
– Li, Yongxing (Shuozhou Branch of Shanxi Provincial Highway Bureau)
– Luo, Lufeng (Foshan University)
Cited by (Crossref DOIs): 10.3389/fnbot.2024.1490198; 10.2298/CSIS230310047L; 10.1007/s42979-024-03542-5; 10.1016/j.heliyon.2024.e30625; 10.1016/j.dsp.2025.105068; 10.1038/s41598-024-63659-8
ContentType Journal Article
Copyright 2022 The Authors. Published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.
DOI 10.1049/ipr2.12670
Discipline Applied Sciences
EISSN 1751-9667
EndPage 772
Genre article
GrantInformation
– Joint Fund for Regional Innovation and Development of NSFC (U19A2083)
– Natural Science Foundation of China (62003288)
– Key Project of Guangdong Provincial Basic and Applied Basic Research Fund Joint Fund (2020B1515120050)
– Natural Science Foundation of Hunan (2020JJ4090; 2020JJ5553)
GroupedDBID .DC
0R~
1OC
24P
29I
5GY
6IK
8VB
AAHHS
AAHJG
AAJGR
ABQXS
ACCFJ
ACCMX
ACESK
ACGFS
ACIWK
ACXQS
ADZOD
AEEZP
AENEX
AEQDE
AIWBW
AJBDE
ALMA_UNASSIGNED_HOLDINGS
ALUQN
AVUZU
CS3
DU5
EBS
ESX
GROUPED_DOAJ
HZ~
IAO
IFIPE
IPLJI
ITC
JAVBF
K1G
LAI
MCNEO
MS~
O9-
OCL
OK1
P2P
QWB
RIE
RNS
ROL
RUI
ZL0
4.4
8FE
8FG
AAYXX
ABJCF
AFKRA
ARAPS
BENPR
BGLVJ
CCPQU
CITATION
EJD
HCIFZ
IDLOA
L6V
M43
M7S
P62
PHGZM
PHGZT
PTHSS
S0W
WIN
ISSN 1751-9659
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 3
Language English
License Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND)
ORCID 0000-0002-1266-6913
0000-0003-4002-2311
OpenAccessLink https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fipr2.12670
PageCount 12
PublicationDate 2023-02-01
PublicationTitle IET image processing
PublicationYear 2023
Publisher Wiley
StartPage 761
SubjectTerms class activation mapping
computer vision
deep network
model interpretation
URI https://onlinelibrary.wiley.com/doi/abs/10.1049%2Fipr2.12670
https://doaj.org/article/456be70349f34a0696b5428b45015904
Volume 17