ThiNet: Pruning CNN Filters for a Thinner Net
Published in | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41, No. 10, pp. 2525-2538
Main Authors | Jian-Hao Luo, Hao Zhang, Hong-Yu Zhou, Chen-Wei Xie, Jianxin Wu, Weiyao Lin
Format | Journal Article
Language | English
Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2019
Online Access | https://ieeexplore.ieee.org/document/8416559
Abstract | This paper aims at accelerating and compressing deep neural networks to deploy CNN models into small devices like mobile phones or embedded gadgets. We focus on filter level pruning, i.e., the whole filter will be discarded if it is less important. An effective and unified framework, ThiNet (stands for “Thin Net”), is proposed in this paper. We formally establish filter pruning as an optimization problem, and reveal that we need to prune filters based on statistics computed from its next layer, not the current layer, which differentiates ThiNet from existing methods. We also propose “gcos” (Group COnvolution with Shuffling), a more accurate group convolution scheme, to further reduce the pruned model size. Experimental results demonstrate the effectiveness of our method, which has advanced the state-of-the-art. Moreover, we show that the original VGG-16 model can be compressed into a very small model (ThiNet-Tiny) with only 2.66 MB model size, but still preserve AlexNet level accuracy. This small model is evaluated on several benchmarks with different vision tasks (e.g., classification, detection, segmentation), and shows excellent generalization ability.
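The selection rule described in the abstract (prune the channels of layer i whose removal least disturbs the input to layer i+1) can be made concrete with a small greedy routine. The sketch below is a minimal illustration of that idea, not the authors' released code; the function name, the sampled-activation matrix X, and the compress_rate parameter are assumptions introduced here for clarity.

```python
import numpy as np

def thinet_select_channels(X, compress_rate):
    """Greedy channel selection in the spirit of ThiNet.

    X : (m, C) array. Row j holds, for one sampled spatial location in
        layer i+1, the partial pre-activation contributed by each of
        the C input channels, so the full value is X[j].sum().
    compress_rate : fraction of channels to remove, e.g. 0.5.
    Returns the indices of the channels to keep.
    """
    m, C = X.shape
    num_prune = int(C * compress_rate)
    removed = set()               # channels chosen for removal
    residual = np.zeros(m)        # summed contribution of removed channels
    for _ in range(num_prune):
        best_c, best_err = None, np.inf
        for c in range(C):
            if c in removed:
                continue
            # squared error at the sampled locations if c is also removed
            err = np.sum((residual + X[:, c]) ** 2)
            if err < best_err:
                best_c, best_err = c, err
        removed.add(best_c)
        residual += X[:, best_c]
    return [c for c in range(C) if c not in removed]
```

In the paper's pipeline, each such layer-wise pruning step is followed by fine-tuning to recover accuracy before the next layer is pruned.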
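The “gcos” scheme mentioned in the abstract combines group convolution with a channel shuffle so that information still mixes across groups. Below is a hedged PyTorch sketch of that pattern; the module name, group count, and layer shapes are illustrative choices, and the shuffle shown is the standard ShuffleNet-style permutation rather than a claim about the paper's exact configuration.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    """Interleave channels across groups so the next grouped
    convolution sees features from every group."""
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)  # split channels into groups
    x = x.transpose(1, 2).contiguous()        # interleave group members
    return x.view(n, c, h, w)

class GroupConvShuffle(nn.Module):
    """Illustrative gcos-style block: grouped 3x3 conv + shuffle.
    in_ch and out_ch must both be divisible by `groups`."""
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.groups = groups
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=1, groups=groups, bias=False)

    def forward(self, x):
        return channel_shuffle(self.conv(x), self.groups)

# e.g. GroupConvShuffle(64, 64)(torch.randn(1, 64, 32, 32)) keeps shape (1, 64, 32, 32)
```

Grouping divides the convolution's parameter and FLOP cost by roughly the number of groups, which is why it helps shrink the pruned model further.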
Authors |
1. Jian-Hao Luo (luojh@lamda.nju.edu.cn; ORCID 0000-0001-8105-805X), National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
2. Hao Zhang (zhangh@lamda.nju.edu.cn; ORCID 0000-0001-5447-180X), National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
3. Hong-Yu Zhou (zhouhy@lamda.nju.edu.cn), National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
4. Chen-Wei Xie (xiecw@lamda.nju.edu.cn), National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
5. Jianxin Wu (wujx@lamda.nju.edu.cn; ORCID 0000-0002-2085-7568), National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
6. Weiyao Lin (wylin@sjtu.edu.cn; ORCID 0000-0001-8307-7107), Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China
CODEN | ITPIDJ |
Copyright | © The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
DOI | 10.1109/TPAMI.2018.2858232 |
Discipline | Engineering; Computer Science
EISSN | 2160-9292; 1939-3539
EndPage | 2538 |
Genre | Original research; Journal article
Grant Information |
– Young Scholar Exchange, grant 17510740100
– National Natural Science Foundation of China, grants 61772256 and 61422203 (funder ID 10.13039/501100001809)
ISSN | 0162-8828; 1939-3539
IsPeerReviewed | true |
IsScholarly | true |
Issue | 10 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
PMID | 30040622 |
PageCount | 14 |
PublicationDate | 2019-10-01
PublicationPlace | United States (New York)
PublicationTitle | IEEE Transactions on Pattern Analysis and Machine Intelligence
PublicationTitleAbbrev | TPAMI |
PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
PublicationYear | 2019 |
Publisher | IEEE (The Institute of Electrical and Electronics Engineers, Inc.)
StartPage | 2525 |
SubjectTerms | Acceleration; Artificial neural networks; Computational modeling; Convolution; Convolutional neural networks; deep learning; Electronic devices; filter pruning; Image coding; Model accuracy; model compression; Neural networks; Optimization; Pruning; Segmentation; Task analysis; Training
Title | ThiNet: Pruning CNN Filters for a Thinner Net |
URI | https://ieeexplore.ieee.org/document/8416559 https://www.ncbi.nlm.nih.gov/pubmed/30040622 https://www.proquest.com/docview/2285331916 https://www.proquest.com/docview/2076232980 |
Volume | 41 |