Pruning by explaining: A novel criterion for deep neural network pruning

Bibliographic Details
Published in: Pattern Recognition, Vol. 115, p. 107899
Main Authors: Yeom, Seul-Ki; Seegerer, Philipp; Lapuschkin, Sebastian; Binder, Alexander; Wiedemann, Simon; Müller, Klaus-Robert; Samek, Wojciech
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.07.2021
Subjects: Pruning; Interpretation of models; Convolutional neural network (CNN); Layer-wise relevance propagation (LRP); Explainable AI (XAI)
Online Access: Get full text
ISSN: 0031-3203
EISSN: 1873-5142
DOI: 10.1016/j.patcog.2021.107899

Abstract

Highlights:
• A novel criterion to efficiently prune convolutional neural networks, inspired by explaining nonlinear classification decisions in terms of input variables, is introduced.
• The method is inspired by neural network interpretability: Layer-wise Relevance Propagation (LRP).
• This is the first report to link the two disconnected lines of interpretability and model compression research.
• The method is tested on two popular convolutional neural network families and a broad range of benchmark datasets under two different scenarios.

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts to reduce these overheads involve pruning and compressing the weights of various layers while at the same time aiming not to sacrifice performance. In this paper, we propose a novel criterion for CNN pruning inspired by neural network interpretability: the most relevant units, i.e. weights or filters, are automatically found using their relevance scores obtained from concepts of explainable AI (XAI). By exploring this idea, we connect the lines of interpretability and model compression research. We show that our proposed method can efficiently prune CNN models in transfer-learning setups in which networks pre-trained on large corpora are adapted to specialized tasks. The method is evaluated on a broad range of computer vision datasets. Notably, our novel criterion is not only competitive with or better than state-of-the-art pruning criteria when successive retraining is performed, but clearly outperforms these previous criteria in the resource-constrained application scenario in which the data of the target task is very scarce and one chooses to refrain from fine-tuning. Our method is able to compress the model iteratively while maintaining or even improving accuracy. At the same time, it has a computational cost on the order of gradient computation and is comparatively simple to apply, without the need to tune hyperparameters for pruning.
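
For context on the criterion described above: Layer-wise Relevance Propagation decomposes a network's output score by redistributing it backwards, layer by layer, onto the units that contributed to it. The following is a sketch of the widely used LRP-ε redistribution rule and the per-unit pruning score it induces; the notation follows the original LRP paper (Bach et al., 2015, cited in this record) and is an assumption of this summary, since the record itself gives no formulas.

\[
R_i^{(l)} \;=\; \sum_j \frac{z_{ij}}{\sum_{i'} z_{i'j} \;+\; \epsilon \,\operatorname{sign}\!\left(\sum_{i'} z_{i'j}\right)} \, R_j^{(l+1)},
\qquad z_{ij} = a_i^{(l)} w_{ij}^{(l)}
\]

The relevance attributed to a unit (or filter) is then accumulated over a reference set \(\mathcal{X}\),

\[
\operatorname{score}(i) \;=\; \sum_{x \in \mathcal{X}} R_i^{(l)}(x),
\]

and the lowest-scoring units are pruned first, optionally followed by fine-tuning.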
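To make the abstract's claim concrete that the criterion costs on the order of a gradient computation, here is a minimal, self-contained PyTorch sketch. It is an illustration under stated assumptions, not the authors' implementation: the toy fully connected ReLU network, the 25% prune fraction, and the random batch standing in for the scarce target-task data are all invented for the example; the paper itself prunes convolutional filters.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy ReLU network standing in for a pre-trained CNN: 20 -> 64 -> 32 -> 3.
layers = nn.ModuleList([nn.Linear(20, 64), nn.Linear(64, 32), nn.Linear(32, 3)])

def forward_collect(x):
    # Forward pass that records the input activation of every layer.
    acts = [x]
    for i, layer in enumerate(layers):
        x = layer(x)
        if i < len(layers) - 1:
            x = torch.relu(x)
        acts.append(x)
    return acts

def lrp_epsilon(acts, eps=1e-6):
    # Backward LRP-epsilon pass; returns one relevance tensor per entry of acts.
    out = acts[-1]
    R = torch.zeros_like(out)
    rows = torch.arange(out.shape[0])
    top = out.argmax(dim=1)
    R[rows, top] = out[rows, top]        # start from the winning class score
    relevances = [R]
    for i in reversed(range(len(layers))):
        a, W, b = acts[i], layers[i].weight, layers[i].bias
        z = a @ W.t() + b                # pre-activations of layer i
        stab = torch.sign(z)
        stab[stab == 0] = 1.0
        z = z + eps * stab               # epsilon stabilizer avoids division by zero
        s = R / z
        R = a * (s @ W)                  # relevance of the layer's inputs
        relevances.append(R)
    return relevances[::-1]              # relevances[i] aligns with acts[i]

# Accumulate per-unit relevance over a reference batch (random here; the
# paper uses a small sample of the target-task data).
x = torch.randn(128, 20)
with torch.no_grad():
    relevances = lrp_epsilon(forward_collect(x))

    # Rank the first hidden layer's units by total attributed relevance and
    # prune the least relevant 25% by zeroing their weights.
    unit_relevance = relevances[1].sum(dim=0)
    k = int(0.25 * unit_relevance.numel())
    prune_idx = unit_relevance.argsort()[:k]
    layers[0].weight[prune_idx] = 0.0    # incoming weights of pruned units
    layers[0].bias[prune_idx] = 0.0
    layers[1].weight[:, prune_idx] = 0.0 # outgoing weights of pruned units
print(f"pruned {k} of {unit_relevance.numel()} units in the first hidden layer")

In a real transfer-learning setup one would iterate this rank-and-prune step, re-estimating relevances after each round; this matches the iterative compression scheme the abstract describes.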
Article Number: 107899
Author Details:
1. Seul-Ki Yeom (yeom@tu-berlin.de), Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany
2. Philipp Seegerer (philipp.seegerer@tu-berlin.de; ORCID 0000-0002-4707-7991), Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany
3. Sebastian Lapuschkin (sebastian.lapuschkin@hhi.fraunhofer.de; ORCID 0000-0002-0762-7258), Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany
4. Alexander Binder (alexabin@uio.no), ISTD Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore
5. Simon Wiedemann (simon.wiedemann@hhi.fraunhofer.de), Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany
6. Klaus-Robert Müller (klaus-robert.mueller@tu-berlin.de; ORCID 0000-0002-3861-7685), Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany
7. Wojciech Samek (wojciech.samek@hhi.fraunhofer.de), BIFOLD – Berlin Institute for the Foundations of Learning and Data, Berlin, Germany
Copyright: 2021 The Authors
License: This is an open access article under the CC BY license.
Notes: NFR/309439
Open Access Link: https://www.sciencedirect.com/science/article/pii/S0031320321000868
Repository copy: http://hdl.handle.net/10852/91165