Machine Learning Techniques for the Energy and Performance Improvement in Network-on-Chip (NoC)

Bibliographic Details
Published in 2021 4th International Conference on Computing and Communications Technologies (ICCCT), pp. 590 - 595
Main Authors RamaDevi, J, Pathur Nisha, S, Karunakaran, S, Hemavathi, S, Majji, Sankararao, Shunmugam, Anandaraj
Format Conference Proceeding
Language English
Published IEEE 16.12.2021
Subjects
DOI 10.1109/ICCCT53315.2021.9711872


Abstract On resource-constrained embedded devices (e.g., Internet of Things nodes), deep neural network inference requires specialized architectural solutions to deliver the best possible trade-offs between performance, energy, and cost. In this regard, a Network-on-Chip (NoC) architecture with many parallel and specialized cores is one of the most promising options. One architectural parameter that impacts deep neural networks' performance is the number and size of memory interfaces. Using these and other architectural criteria, we investigate the resulting design space. We demonstrate that on-chip communication dominates delay while memory consumes the majority of energy. According to the findings, a new research area devoted to improving the performance and energy efficiency of on-chip communication fabrics and memory subsystems should be established.
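The design-space exploration sketched in the abstract (sweeping the number and size of memory interfaces and ranking the resulting design points) can be illustrated with a minimal Python sketch. Note that the analytical model, all constants, and the energy-delay-product objective below are illustrative assumptions for exposition, not the paper's actual evaluation methodology.

```python
# Hypothetical sketch of a NoC accelerator design-space exploration:
# sweep the number and width of memory interfaces and rank design points
# with a toy analytical latency/energy model (all constants are assumed).
from itertools import product

def evaluate(num_ifaces: int, iface_bits: int,
             traffic_bits: float = 1e9) -> tuple[float, float]:
    """Return (latency_s, energy_j) for one design point."""
    bandwidth = num_ifaces * iface_bits * 1e9          # bits/s at an assumed 1 GHz
    latency = traffic_bits / bandwidth                 # communication-bound delay
    energy = traffic_bits * 5e-12 + num_ifaces * 1e-3  # per-bit memory cost + per-interface overhead
    return latency, energy

# Enumerate the design space and pick the energy-delay-product optimum.
space = product([1, 2, 4, 8], [64, 128, 256])
best = min(space, key=lambda p: (lambda l, e: l * e)(*evaluate(*p)))
print(best)  # → (8, 256)
```

Under this toy model, wider and more numerous interfaces win because the modeled delay is communication-bound, echoing the abstract's observation that on-chip communication dominates delay while memory accesses dominate energy.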
Author Majji, Sankararao
Hemavathi, S
Shunmugam, Anandaraj
Karunakaran, S
RamaDevi, J
Pathur Nisha, S
Author_xml – sequence: 1
  givenname: J
  surname: RamaDevi
  fullname: RamaDevi, J
  email: k.ramakarthik@gmail.com
  organization: PVP SIDDHARTHA Institute of Technology,CSE Department,Vijayawada,India
– sequence: 2
  givenname: S
  surname: Pathur Nisha
  fullname: Pathur Nisha, S
  email: thanish05@gmail.com
  organization: Nehru Institute of Technology,Department of CSE,Coimbatore,India
– sequence: 3
  givenname: S
  surname: Karunakaran
  fullname: Karunakaran, S
  email: s.karunakaran@vardhaman.org
  organization: Vardhaman College of Engineering,Department of ECE,Hyderabad,India
– sequence: 4
  givenname: S
  surname: Hemavathi
  fullname: Hemavathi, S
  email: hemavathi@cecri.res.in
  organization: Central Electrochemical Research Institute,Scientist Battery Division,Chennai,India
– sequence: 5
  givenname: Sankararao
  surname: Majji
  fullname: Majji, Sankararao
  email: sankar3267@gmail.com
  organization: GRIET,Department of ECE,Hyderabad,India
– sequence: 6
  givenname: Anandaraj
  surname: Shunmugam
  fullname: Shunmugam, Anandaraj
  email: anandboyzz@gmail.com
  organization: School of CS & IT, DMI-St. John the Baptist University The Republic of Malawi,Central Africa
ContentType Conference Proceeding
DBID 6IE
6IL
CBEJK
RIE
RIL
DOI 10.1109/ICCCT53315.2021.9711872
DatabaseName IEEE Electronic Library (IEL) Conference Proceedings
IEEE Xplore POP ALL
IEEE Xplore All Conference Proceedings
IEEE Electronic Library (IEL)
IEEE Proceedings Order Plans (POP All) 1998-Present
DatabaseTitleList
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
EISBN 9781665414470
1665414472
EndPage 595
ExternalDocumentID 9711872
Genre orig-research
GroupedDBID 6IE
6IL
CBEJK
RIE
RIL
ID FETCH-LOGICAL-i118t-53092b79cacbf9b56bc6e01feb62032f714de6660806125fed05eab51fde76e03
IEDL.DBID RIE
IngestDate Thu Jun 29 18:37:29 EDT 2023
IsPeerReviewed false
IsScholarly false
Language English
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-i118t-53092b79cacbf9b56bc6e01feb62032f714de6660806125fed05eab51fde76e03
PageCount 6
ParticipantIDs ieee_primary_9711872
PublicationCentury 2000
PublicationDate 2021-Dec.-16
PublicationDateYYYYMMDD 2021-12-16
PublicationDate_xml – month: 12
  year: 2021
  text: 2021-Dec.-16
  day: 16
PublicationDecade 2020
PublicationTitle 2021 4th International Conference on Computing and Communications Technologies (ICCCT)
PublicationTitleAbbrev ICCCT
PublicationYear 2021
Publisher IEEE
Publisher_xml – name: IEEE
SourceID ieee
SourceType Publisher
StartPage 590
SubjectTerms Correlation
Deep learning
Energy consumption
Machine Learning
Machine learning algorithms
Memory management
Network-on-Chip
neural network
Neural networks
Performance evaluation
Title Machine Learning Techniques for the Energy and Performance Improvement in Network-on-Chip (NoC)
URI https://ieeexplore.ieee.org/document/9711872
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE