"Learning-Compression" Algorithms for Neural Net Pruning
Published in | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8532 - 8541 |
---|---|
Main Authors | Carreira-Perpinan, Miguel A., Idelbayev, Yerlan |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.06.2018 |
Subjects | Mobile handsets; Neural networks; Neurons; Optimization; Performance evaluation; Quantization (signal); Training |
Online Access | Get full text |
Abstract | Pruning a neural net consists of removing weights without degrading its performance. This is an old problem of renewed interest because of the need to compress ever larger nets so they can run in mobile devices. Pruning has been traditionally done by ranking or penalizing weights according to some criterion (such as magnitude), removing low-ranked weights and retraining the remaining ones. We formulate pruning as an optimization problem of finding the weights that minimize the loss while satisfying a pruning cost condition. We give a generic algorithm to solve this which alternates "learning" steps that optimize a regularized, data-dependent loss and "compression" steps that mark weights for pruning in a data-independent way. Magnitude thresholding arises naturally in the compression step, but unlike existing magnitude pruning approaches, our algorithm explores subsets of weights rather than committing irrevocably to a specific subset from the beginning. It is also able to learn automatically the best number of weights to prune in each layer of the net without incurring an exponentially costly model selection. Using a single pruning-level user parameter, we achieve state-of-the-art pruning in LeNet and ResNets of various sizes. |
CODEN | IEEPAD |
DOI | 10.1109/CVPR.2018.00890 |
Discipline | Applied Sciences |
EISBN | 9781538664209, 1538664208 |
EISSN | 1063-6919 |
EndPage | 8541 |
ExternalDocumentID | 8578988 |
Genre | orig-research |
IsPeerReviewed | false |
IsScholarly | true |
PageCount | 10 |
PublicationTitleAbbrev | CVPR |
StartPage | 8532 |
URI | https://ieeexplore.ieee.org/document/8578988 |