Hardware Architecture Exploration for Deep Neural Networks


Bibliographic Details
Published in: Arabian Journal for Science and Engineering (2011), Vol. 46, no. 10, pp. 9703–9712
Main authors: Zheng, Wenqi; Zhao, Yangyi; Chen, Yunfan; Park, Jinhong; Shin, Hyunchul
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.10.2021

Abstract: Owing to their good performance, deep Convolutional Neural Networks (CNNs) are rapidly gaining popularity across a broad range of applications. Since high-accuracy CNNs are both computation-intensive and memory-intensive, accelerator design has attracted significant research interest. Furthermore, the AI chip market is growing, and competition over the performance, cost, and power consumption of artificial-intelligence SoC designs is intensifying. It is therefore important to develop design techniques and platforms for efficiently producing optimized AI architectures that satisfy given specifications within a short design time. In this research, we developed design space exploration techniques and environments for the optimal design of the overall system, including computing modules and memories. Our current design platform uses the NVIDIA Deep Learning Accelerator as the computing model, SRAM as the on-chip buffer, and GDDR6 DRAM as the off-chip memory. We also developed a program that estimates the processing time of a given neural network. By varying both the on-chip SRAM size and the computing module size, a designer can explore the design space efficiently and then choose the minimal-cost architecture that satisfies the performance specification. The operation of the design platform is illustrated with two well-known deep CNNs, YOLOv3 and Faster R-CNN. This technology can be used to explore and optimize CNN hardware architectures so that cost is minimized.
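The exploration loop the abstract describes — vary the on-chip SRAM size and the computing-module size, estimate the processing time of the network on each configuration, and pick the cheapest design point that meets the performance specification — can be sketched as follows. All model constants and formulas here (MAC counts, traffic, bandwidth, cost weights, the 1/sqrt traffic scaling) are illustrative assumptions, not the paper's actual estimator.

```python
# Hypothetical sketch of cost/performance design-space exploration.
# The latency and cost models below are placeholders, NOT the paper's estimator.
from itertools import product

# Candidate design points: on-chip SRAM buffer size (KB) and number of MAC units.
SRAM_SIZES_KB = [128, 256, 512, 1024]
MAC_COUNTS = [256, 512, 1024, 2048]

TOTAL_MACS = 2.0e9        # assumed total MAC operations for one inference
TOTAL_TRAFFIC_MB = 400.0  # assumed DRAM traffic at the smallest buffer size
CLOCK_HZ = 1.0e9          # assumed accelerator clock
DRAM_BW_MB_S = 48_000.0   # rough GDDR6-class bandwidth, MB/s

def estimate_latency_s(sram_kb: float, macs: int) -> float:
    """Toy estimate: max of compute time and DRAM-transfer time.
    A larger buffer reduces off-chip traffic (placeholder 1/sqrt scaling)."""
    compute_s = TOTAL_MACS / (macs * CLOCK_HZ)
    traffic_mb = TOTAL_TRAFFIC_MB * (128.0 / sram_kb) ** 0.5
    dram_s = traffic_mb / DRAM_BW_MB_S
    return max(compute_s, dram_s)  # assume compute and transfer overlap

def cost(sram_kb: float, macs: int) -> float:
    """Toy area-like cost: SRAM capacity plus a per-MAC unit cost."""
    return sram_kb * 1.0 + macs * 0.3

LATENCY_SPEC_S = 0.033  # e.g. a 30 fps real-time target

# Enumerate the design space, keep configurations meeting the spec,
# and choose the one with minimal cost.
feasible = [(cost(s, m), s, m)
            for s, m in product(SRAM_SIZES_KB, MAC_COUNTS)
            if estimate_latency_s(s, m) <= LATENCY_SPEC_S]
best_cost, best_sram, best_macs = min(feasible)
print(f"cheapest feasible design: {best_sram} KB SRAM, {best_macs} MACs")
```

With these placeholder constants even the smallest configuration meets the 33 ms target, so the minimum-cost point is the smallest one; with a tighter spec the search would trade SRAM against MACs exactly as the platform intends.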
Authors
– Zheng, Wenqi (Department of Electrical Engineering, Hanyang University)
– Zhao, Yangyi (Department of Electrical Engineering, Hanyang University)
– Chen, Yunfan (Department of Electrical Engineering, Hanyang University)
– Park, Jinhong (Samsung Electronics Inc.)
– Shin, Hyunchul (Department of Electrical Engineering, Hanyang University; email: shin@hanyang.ac.kr; ORCID: 0000-0003-3020-5130)
Copyright: King Fahd University of Petroleum & Minerals 2021
DOI: 10.1007/s13369-021-05455-4
EISSN: 2191-4281
Funding: Samsung (funder ID: http://dx.doi.org/10.13039/100004358)
ISSN: 2193-567X; 1319-8025
Peer reviewed: true
Scholarly: true
Issue: 10
Keywords: CNN; AI architecture; Neural network architecture; Design space exploration
Language: English
Publication date: 2021-10-01
Publication place: Berlin/Heidelberg
Publication title: Arabian Journal for Science and Engineering (2011) (abbrev. Arab J Sci Eng)
Publication year: 2021
Publisher: Springer Berlin Heidelberg; Springer Nature B.V.
References
– Norman, P. J.; Cliff, Y.; Nishant, P.; et al.: In-datacenter performance analysis of a tensor processing unit. In: Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA '17), pp. 1–12 (2017)
– Manoj, A.; Han, C.; Michael, F.; Peter, M.: Fused CNN accelerator. In: The 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), article 22, pp. 1–12 (2016)
– Yongming, S.; Michael, F.; Peter, M.: Escher: a CNN accelerator with flexible buffering to minimize off-chip transfer. In: 2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (2017)
– Enrico, R.; Marco, R.; Anna, M. N.; et al.: Pareto optimal design space exploration for accelerated CNN on FPGA. In: 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (2019)
– Shaoqing, R.; Kaiming, H.; Ross, G.; Jian, S.: Faster R-CNN: towards real-time object detection with region proposal networks. arXiv:1506.01497 [cs.CV] (2015)
– Arthur, S.; Francesco, C.: Optimally scheduling CNN convolutions for efficient memory access. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2019)
– Irtiza, H.; Shengcai, L.; Jinpeng, L.; et al.: Pedestrian detection: the elephant in the room. arXiv preprint arXiv:2003.08799 (2020)
– NVIDIA: NVIDIA open source ML accelerator. http://nvdla.org (2018)
– Shaoli, L.; Zidong, D.; Jinhua, T.; et al.: Cambricon: an instruction set architecture for neural networks. In: The 43rd International Symposium on Computer Architecture (ISCA), pp. 393–405 (2016)
– Wei, L.; Shengcai, L.; Weiqiang, R.; et al.: High-level semantic feature detection: a new perspective for pedestrian detection. In: CVPR (2019)
– Tianshi, C.; Zidong, D.; Ninghui, S.; et al.: DianNao: a small-footprint high-throughput accelerator for ubiquitous machine learning. In: The 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pp. 269–284 (2014)
– Shijin, Z.; Zidong, D.; Lei, Z.; et al.: Cambricon-X: an accelerator for sparse neural networks. In: The 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), article 20, pp. 1–12 (2016)
– Joseph, R.; Ali, F.: YOLOv3: an incremental improvement. arXiv:1804.02767 [cs.CV] (2018)
– Thang, V.; Cao, V. N.; Trung, X. P.; et al.: Fast and efficient image quality enhancement via desubpixel convolutional neural networks. In: ECCV Workshops (2018)
– Joseph, R.; Santosh, D.; Ross, G.; Ali, F.: You Only Look Once: unified, real-time object detection. arXiv:1506.02640 [cs.CV] (2016)
– Song, H.; Xingyu, L.; Huizi, M.; et al.: EIE: efficient inference engine on compressed deep neural network. In: 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) (2016)
– Yunji, C.; Tao, L.; Shijin, L.; et al.: DaDianNao: a machine-learning supercomputer. In: The 47th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 609–622 (2014)
– Daofu, L.; Tianshi, C.; Shaoli, L.; et al.: PuDianNao: a polyvalent machine learning accelerator. In: The 20th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pp. 369–381 (2015)
– High Bandwidth Memory (HBM) DRAM, JESD235C. https://www.jedec.org/standards-documents/docs/jesd235a (2020)
– Sachin, M.; Mohammad, R.; Linda, S.; Hannaneh, H.: ESPNetv2: a light-weight, power efficient, and general purpose convolutional neural network. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9190–9200 (2019)
– Zidong, D.; Robert, F.; Tianshi, C.; et al.: ShiDianNao: shifting vision processing closer to the sensor. In: The 42nd Annual International Symposium on Computer Architecture (ISCA), pp. 92–104 (2015)
– Graphics Double Data Rate (GDDR6) SGRAM Standard, JESD250B. https://www.jedec.org/standards-documents/docs/jesd250b (2018)
– Cheng, L.; Man-Kit, S.; Hongxiang, F.; et al.: Towards efficient deep neural network training by FPGA-based batch-level parallelism. In: 2019 IEEE 27th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM) (2019)
– Qijie, Z.; Tao, S.; Yongtao, W.; et al.: M2Det: a single-shot object detector based on multi-level feature pyramid network. In: AAAI (2019)
– Chen, W.; Wenjing, W.; Wenhan, Y.; Jiaying, L.: Deep Retinex decomposition for low-light enhancement. In: British Machine Vision Conference (BMVC) (2018)
Subject terms: Artificial intelligence; Artificial neural networks; Chips (memory devices); Computation; Computer architecture; Design; Design optimization; Dynamic random access memory; Engineering; Hardware; Humanities and Social Sciences; Machine learning; Modules; Multidisciplinary; Neural networks; Power consumption; Research Article-Electrical Engineering; Science; Specifications; Static random access memory
URI: https://link.springer.com/article/10.1007/s13369-021-05455-4
https://www.proquest.com/docview/2572251023
Volume: 46