Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors
Published in | Sensors (Basel, Switzerland) Vol. 22; no. 3; p. 1230 |
Main Authors | Junaid, Muhammad; Arslan, Saad; Lee, TaeGeon; Kim, HyungWon |
Format | Journal Article |
Language | English |
Published | Basel, Switzerland: MDPI AG, 06.02.2022 |
Online Access | Get full text |
Abstract | The convergence of artificial intelligence (AI) is one of the critical technologies in the recent fourth industrial revolution. The AIoT (Artificial Intelligence Internet of Things) is expected to be a solution that aids rapid and secure data processing. While the success of AIoT demands low-power neural network processors, most recent research has focused on accelerator designs only for inference. The growing interest in self-supervised and semi-supervised learning now calls for processors that offload the training process in addition to the inference process. Incorporating training with high accuracy goals requires the use of floating-point operators. Higher-precision floating-point arithmetic architectures in neural networks tend to consume a large area and energy; consequently, an energy-efficient, compact accelerator is required. The proposed architecture incorporates training in 32-bit, 24-bit, 16-bit, and mixed precisions to find the optimal floating-point format for low-power, small-sized edge devices. The proposed accelerator engines have been verified on FPGA for both inference and training on the MNIST image dataset. The combination of a 24-bit custom FP format with 16-bit Brain FP achieves an accuracy of more than 93%. ASIC implementation of this optimized mixed-precision accelerator in TSMC 65 nm reveals an active area of 1.036 × 1.036 mm² and energy consumption of 4.445 µJ per training of one image. Compared with the 32-bit architecture, the size and energy are reduced by 4.7 and 3.91 times, respectively. Therefore, the CNN structure using floating-point numbers with an optimized data path will significantly contribute to developing the AIoT field, which requires a small area, low energy, and high accuracy. |
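The abstract contrasts 32-bit IEEE 754 with a 24-bit custom FP format and 16-bit Brain FP (bfloat16). bfloat16 keeps float32's 1 sign bit and 8 exponent bits and shortens the mantissa to 7 bits, so a float32 value can be quantized simply by masking off low-order mantissa bits. A minimal sketch in Python; note that the 8-exponent/15-mantissa split used below for the 24-bit format is an assumption for illustration, since this record does not give the paper's exact bit allocation:

```python
import struct

def truncate_float32(x: float, mantissa_bits: int) -> float:
    """Truncate an IEEE-754 float32 to a shorter format that keeps the
    1 sign bit and 8 exponent bits but only `mantissa_bits` of the
    23-bit mantissa, then re-expand to float32 for inspection."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    drop = 23 - mantissa_bits
    bits &= ~((1 << drop) - 1)  # zero out the dropped low-order mantissa bits
    return struct.unpack('>f', struct.pack('>I', bits))[0]

# bfloat16 ("Brain FP"): 1 sign + 8 exponent + 7 mantissa bits
bf16 = truncate_float32(3.14159265, 7)    # → 3.140625
# an assumed 24-bit layout with the same exponent width: 15 mantissa bits
fp24 = truncate_float32(3.14159265, 15)
```

Hardware implementations typically round to nearest rather than truncate; truncation is shown here only because it makes the bit-level relationship between the formats obvious.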
Audience | Academic |
Author | Junaid, Muhammad; Lee, TaeGeon; Arslan, Saad; Kim, HyungWon |
AuthorAffiliation | 1 Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea; junaid@chungbuk.ac.kr (M.J.); tglee2@chungbuk.ac.kr (T.L.) 2 Department of Electrical and Computer Engineering, COMSATS University Islamabad, Park Road, Tarlai Kalan, Islamabad 45550, Pakistan; saad.arslan@comsats.edu.pk |
Author details | 1. Junaid, Muhammad (ORCID 0000-0003-0500-904X), Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea; 2. Arslan, Saad (ORCID 0000-0003-4038-462X), Department of Electrical and Computer Engineering, COMSATS University Islamabad, Park Road, Tarlai Kalan, Islamabad 45550, Pakistan; 3. Lee, TaeGeon, Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea; 4. Kim, HyungWon (ORCID 0000-0003-2602-2075), Department of Electronics, College of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Korea |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/35161975 (View this record in MEDLINE/PubMed) |
CitedBy | 10.1016/j.iswa.2024.200356; 10.1109/ACCESS.2022.3204704; 10.3390/s24072145; 10.1007/s11554-023-01352-1; 10.1109/TNANO.2024.3367916; 10.1038/s41598-024-52356-1 |
ContentType | Journal Article |
Copyright | COPYRIGHT 2022 MDPI AG 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. 2022 by the authors. 2022 |
DOI | 10.3390/s22031230 |
Discipline | Engineering |
EISSN | 1424-8220 |
Genre | Journal Article |
GrantInformation | Ministry of Science, ICT and Future Planning, grant IITP-2020-0-01462 |
ISSN | 1424-8220 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Keywords | floating-points; convolutional neural network (CNN); MNIST dataset; IEEE 754 |
License | Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
ORCID | 0000-0003-0500-904X 0000-0003-2602-2075 0000-0003-4038-462X |
OpenAccessLink | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8840430/ |
PMID | 35161975 |
PublicationDate | 2022-02-06 |
PublicationPlace | Basel, Switzerland |
PublicationTitle | Sensors (Basel, Switzerland) |
PublicationTitleAlternate | Sensors (Basel) |
PublicationYear | 2022 |
Publisher | MDPI AG MDPI |
StartPage | 1230 |
SubjectTerms | Accuracy; Algorithms; Approximation; Artificial Intelligence; Back propagation; Brain; Computer architecture; convolutional neural network (CNN); Data processing; Energy consumption; Floating point arithmetic; floating-points; Format; IEEE 754; Inference; Internet of Things; MNIST dataset; Neural networks; Neural Networks, Computer; Power management; Probability; Probability distribution; Processors; Semiconductors; Supervised Machine Learning |
Title | Optimal Architecture of Floating-Point Arithmetic for Neural Network Training Processors |
URI | https://www.ncbi.nlm.nih.gov/pubmed/35161975 https://www.proquest.com/docview/2627836328/abstract/ https://search.proquest.com/docview/2629063548 https://pubmed.ncbi.nlm.nih.gov/PMC8840430 https://doaj.org/article/db4ba3021af946869fcfdf9aecad0789 |
Volume | 22 |