Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance

Bibliographic Details
Main Authors: Kalra, Suryam Arnav; Biswas, Arindam; Mitra, Pabitra; Basu, Biswajit
Format: Journal Article
Language: English
Published: 17.03.2024
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing
DOI: 10.48550/arxiv.2403.11100
Online Access: https://arxiv.org/abs/2403.11100 (full text)
Copyright: http://creativecommons.org/licenses/by/4.0

Abstract: The expansion property of a graph refers to its combination of strong connectivity and sparseness. It has been reported that deep neural networks can be pruned to a high degree of sparsity while maintaining their performance. Such pruning is essential for performing real-time sequence learning tasks with recurrent neural networks on resource-constrained platforms. We prune recurrent networks such as RNNs and LSTMs while maintaining a large spectral gap of the underlying graphs, thereby ensuring their layerwise expansion properties. We also study the time-unfolded recurrent network graphs in terms of the properties of their bipartite layers. Experimental results on the benchmark sequential MNIST, CIFAR-10, and Google Speech Commands datasets show that expander graph properties are key to preserving the classification accuracy of RNNs and LSTMs.
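To make the pruning scheme summarized in the abstract concrete, below is a minimal sketch (not the authors' implementation) of one standard way such expander-based pruning can be realized: sparsify a recurrent weight matrix with a random d-regular bipartite mask, which is an expander graph with high probability, and gauge the layer's expansion via the gap between the two largest singular values of the mask. The function names (random_d_regular_mask, singular_value_gap) and the choice of d are illustrative assumptions, not names from the paper.

```python
import numpy as np

def random_d_regular_mask(n_out, n_in, d, seed=0):
    """Binary mask where each output unit keeps exactly d randomly
    chosen input connections. Random d-regular bipartite graphs are
    expanders with high probability, so the masked layer stays both
    sparse and well connected."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    for row in mask:
        row[rng.choice(n_in, size=d, replace=False)] = 1.0
    return mask

def singular_value_gap(mask):
    """Gap between the two largest singular values of the bipartite
    layer's biadjacency matrix; a large gap indicates good expansion."""
    s = np.linalg.svd(mask, compute_uv=False)  # sorted descending
    return float(s[0] - s[1])

# Prune the hidden-to-hidden weights of a 128-unit RNN down to d = 8
# surviving connections per unit (~6% density) and check the gap.
hidden, d = 128, 8
W_hh = np.random.randn(hidden, hidden).astype(np.float32)
mask = random_d_regular_mask(hidden, hidden, d)
W_pruned = W_hh * mask  # use W_pruned in the recurrent update
print(f"density:      {mask.mean():.3f}")
print(f"spectral gap: {singular_value_gap(mask):.2f}")
```

Under this scheme the mask is fixed and only the surviving recurrent connections participate in the hidden-state update; at a given sparsity, a larger singular-value gap signals stronger layerwise connectivity of the pruned graph.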