Compact Mixed-Signal Convolutional Neural Network Using a Single Modular Neuron
Published in | IEEE Transactions on Circuits and Systems I: Regular Papers, Vol. 67, No. 12, pp. 5189–5199 |
---|---|
Main Authors | Chang, Dong-Jin; Nam, Byeong-Gyu; Ryu, Seung-Tak |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2020 |
Abstract | This paper demonstrates a compact mixed-signal (MS) convolutional neural network (CNN) design procedure by proposing an MS modular neuron unit that alleviates analog-circuit design issues such as noise. In the first step of the proposed procedure, a CNN is designed in software with a minimized number of channels in each layer while still meeting the target network performance, which yields low representational and computational cost. The network is then reconstructed and retrained with a single modular neuron that is recursively reused across the entire network for maximum hardware efficiency, with a fixed number of parameters chosen to account for signal attenuation. In the last step, the network parameters are quantized to a level implementable by MS neurons. We designed networks for MNIST and CIFAR-10 and achieved compact CNNs with a single MS neuron, reaching 97% accuracy on MNIST and 85% on CIFAR-10, with representational and computational costs at least two times smaller than those of prior works. The estimated energy per classification of the CIFAR-10 hardware network with a single MS neuron, designed with optimum noise and matching requirements, is 0.5 μJ, five times smaller than that of its digital counterpart. |
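For illustration only (this is not code from the paper): the second step of the procedure, in which one fixed-fan-in modular neuron is recursively reused for every multiply-accumulate in the network, can be sketched in plain Python as follows. The fan-in of 9 (one 3x3 patch), the attenuation factor, and all function names are assumptions made for this sketch.

```python
import numpy as np

FAN_IN = 9    # fixed fan-in of the shared neuron (one 3x3 patch); assumed value
ATTEN = 0.9   # assumed per-pass signal attenuation of the analog path

def modular_neuron(x, w):
    """One pass through the single shared MS neuron: a FAN_IN-input
    multiply-accumulate scaled by the attenuation, followed by ReLU."""
    assert x.shape == (FAN_IN,) and w.shape == (FAN_IN,)
    return max(0.0, ATTEN * float(np.dot(x, w)))

def conv3x3_layer(img, kernels):
    """Compute a 3x3 convolution layer by reusing the one modular neuron
    for every output value, mirroring the hardware's time-multiplexed reuse."""
    h, w = img.shape
    c_out = kernels.shape[0]                  # kernels: (c_out, 3, 3)
    out = np.zeros((c_out, h - 2, w - 2))
    for c in range(c_out):
        wk = kernels[c].reshape(FAN_IN)
        for i in range(h - 2):
            for j in range(w - 2):
                patch = img[i:i + 3, j:j + 3].reshape(FAN_IN)
                out[c, i, j] = modular_neuron(patch, wk)
    return out

# Example: one 28x28 input (MNIST-sized) through a 4-channel layer
print(conv3x3_layer(np.random.rand(28, 28), np.random.randn(4, 3, 3)).shape)  # (4, 26, 26)
```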
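In the same spirit, a minimal sketch of the final step, quantizing trained parameters down to the few levels an MS neuron can realize; the symmetric 4-bit uniform quantizer used here is an assumed stand-in, not the precision or scheme reported in the paper.

```python
import numpy as np

def quantize_weights(w, n_bits=4):
    """Symmetric uniform quantization of a weight tensor to n_bits,
    returning the dequantized values the MS neuron would actually realize."""
    q_max = 2 ** (n_bits - 1) - 1            # e.g. levels -7..+7 for 4 bits
    scale = np.max(np.abs(w)) / q_max
    if scale == 0:
        scale = 1.0                          # all-zero tensor: nothing to scale
    q = np.clip(np.round(w / scale), -q_max, q_max)
    return q * scale

# Example: quantize one trained 3x3 kernel and check the induced error
w = np.random.randn(3, 3)
print(np.max(np.abs(w - quantize_weights(w))))  # worst-case quantization error
```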
Author | Chang, Dong-Jin; Nam, Byeong-Gyu; Ryu, Seung-Tak |
Author_xml | – sequence: 1; fullname: Chang, Dong-Jin; email: yourange@kaist.ac.kr; organization: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea – sequence: 2; fullname: Nam, Byeong-Gyu; ORCID: 0000-0003-0069-1959; email: bgnam@cnu.ac.kr; organization: Department of Science and Engineering, Chungnam National University, Daejeon, South Korea – sequence: 3; fullname: Ryu, Seung-Tak; ORCID: 0000-0002-6947-7785; email: stryu@kaist.ac.kr; organization: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea |
CODEN | ITCSCH |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020 |
DOI | 10.1109/TCSI.2020.3007447 |
Discipline | Engineering |
EISSN | 1558-0806 |
EndPage | 5199 |
Genre | orig-research |
GrantInformation_xml | – fundername: Samsung Research Funding Center of Samsung Electronics; grantid: SRFC-IT1502-04; funderid: 10.13039/100004358 |
ISSN | 1549-8328 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 12 |
Language | English |
ORCID | 0000-0002-6947-7785 0000-0003-0069-1959 |
PageCount | 11 |
PublicationCentury | 2000 |
PublicationDate | 2020-12-01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems. I, Regular papers |
PublicationTitleAbbrev | TCSI |
PublicationYear | 2020 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 5189 |
SubjectTerms | Accuracy; Analog circuits; Artificial neural networks; Attenuation; Biological neural networks; Circuit design; compact neural network; computational cost; Computational efficiency; Computing costs; Convolution; convolutional neural network; Deep neural network; Design; Hardware; mixed-signal neuron; modular neuron; Modular units; network retraining; Neural networks; Neurons; Parameters; representational cost; Software |
Title | Compact Mixed-Signal Convolutional Neural Network Using a Single Modular Neuron |
URI | https://ieeexplore.ieee.org/document/9152135 https://www.proquest.com/docview/2467296281 |
Volume | 67 |