Making Memristive Neural Network Accelerators Reliable

Bibliographic Details
Published in Proceedings - International Symposium on High-Performance Computer Architecture, pp. 52-65
Main Authors Feinberg, Ben, Wang, Shibo, Ipek, Engin
Format Conference Proceeding
Language English
Published IEEE 01.02.2018
Abstract Deep neural networks (DNNs) have attracted substantial interest in recent years due to their superior performance on many classification and regression tasks as compared to other supervised learning models. DNNs often require a large amount of data movement, resulting in performance and energy overheads. One promising way to address this problem is to design an accelerator based on in-situ analog computing that leverages the fundamental electrical properties of memristive circuits to perform matrix-vector multiplication. Recent work on analog neural network accelerators has shown great potential in improving both the system performance and the energy efficiency. However, detecting and correcting the errors that occur during in-memory analog computation remains largely unexplored. The same electrical properties that provide the performance and energy improvements make these systems especially susceptible to errors, which can severely hurt the accuracy of the neural network accelerators. This paper examines a new error correction scheme for analog neural network accelerators based on arithmetic codes. The proposed scheme encodes the data through multiplication by an integer, which preserves addition operations through the distributive property. Error detection and correction are performed through a modulus operation and a correction table lookup. This basic scheme is further improved by data-aware encoding to exploit the state dependence of the errors, and by knowledge of how critical each portion of the computation is to overall system accuracy. By leveraging the observation that a physical row that contains fewer 1s is less susceptible to an error, the proposed scheme increases the effective error correction capability with less than 4.5% area and less than 4.7% energy overheads. When applied to a memristive DNN accelerator performing inference on the MNIST and ILSVRC-2012 datasets, the proposed technique reduces the respective misclassification rates by 1.5x and 1.1x.
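The arithmetic-code scheme summarized above can be illustrated with a small sketch. The Python example below is not the authors' implementation; the check multiplier A = 11, the assumed error model (small additive errors), and all names are hypothetical illustration choices. It shows why multiplying operands by a fixed integer preserves addition: a dot product over encoded values is itself a multiple of A, so a nonzero residue modulo A flags an error, and a small correction table maps the residue of a likely error back to a correction term.

```python
# Minimal sketch of an AN-style arithmetic code as described in the abstract.
# The multiplier A and the assumed error model are hypothetical choices for
# illustration, not the parameters used in the paper.

A = 11  # fixed check multiplier (hypothetical value)


def encode(x: int) -> int:
    """Encode an operand by multiplying it by A."""
    return A * x


def build_correction_table(max_error: int = 4) -> dict:
    """Map the residue (mod A) of a small additive error back to that error."""
    table = {}
    for e in range(-max_error, max_error + 1):
        if e != 0 and e % A != 0:
            table[e % A] = e
    return table


def check_and_correct(value: int, table: dict) -> int:
    """Detect an error via the modulus operation; correct it via table lookup."""
    residue = value % A
    if residue == 0:
        return value                    # valid codeword, no error detected
    if residue in table:
        return value - table[residue]   # subtract the inferred error
    raise ValueError("uncorrectable error detected")


if __name__ == "__main__":
    # A dot product over encoded weights: because
    # A*w1*x1 + A*w2*x2 = A*(w1*x1 + w2*x2), the correct result is a multiple of A.
    weights = [3, -2, 5]
    inputs = [1, 4, 2]
    encoded = sum(encode(w) * x for w, x in zip(weights, inputs))
    faulty = encoded + 2                # inject a small additive error
    table = build_correction_table()
    recovered = check_and_correct(faulty, table)
    assert recovered // A == sum(w * x for w, x in zip(weights, inputs))
```

In the accelerator setting described by the paper, the check would be applied to the result of an in-situ analog matrix-vector multiplication; this sketch only demonstrates the algebra of detection via a modulus operation and correction via a table lookup.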
Author Ipek, Engin
Wang, Shibo
Feinberg, Ben
CODEN IEEPAD
ContentType Conference Proceeding
DOI 10.1109/HPCA.2018.00015
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Proceedings Order Plan All Online (POP All Online) 1998-present by volume
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISBN 153863659X
9781538636596
EISSN 2378-203X
EndPage 65
ExternalDocumentID 8326998
Genre orig-research
IsPeerReviewed false
IsScholarly true
Language English
LinkModel DirectLink
PageCount 14
ParticipantIDs ieee_primary_8326998
PublicationCentury 2000
PublicationDate 2018-Feb
PublicationDateYYYYMMDD 2018-02-01
PublicationDecade 2010
PublicationTitle Proceedings - International Symposium on High-Performance Computer Architecture
PublicationTitleAbbrev HPCA
PublicationYear 2018
Publisher IEEE
SSID ssj0002951
SourceID ieee
SourceType Publisher
StartPage 52
SubjectTerms Accelerator architectures
Arrays
Computational modeling
Computer architecture
Electric potential
Neural networks
Programming
Resistance
Thermal noise
Title Making Memristive Neural Network Accelerators Reliable
URI https://ieeexplore.ieee.org/document/8326998
linkProvider IEEE