HADAS: Hardware-Aware Dynamic Neural Architecture Search for Edge Performance Scaling

Bibliographic Details
Published in: 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 1-6
Main Authors: Bouzidi, Halima; Odema, Mohanad; Ouarnoughi, Hamza; Al Faruque, Mohammad Abdullah; Niar, Smail
Format: Conference Proceeding
Language: English
Published: EDAA, 01.04.2023
Summary: Dynamic neural networks (DyNNs) have become viable techniques for enabling intelligence on resource-constrained edge devices while maintaining computational efficiency. In many cases, the implementation of DyNNs can be sub-optimal because their underlying backbone architectures are developed at the design stage independently of both (i) potential support for dynamic computing, e.g., early exiting, and (ii) resource-efficiency features of the underlying hardware, e.g., dynamic voltage and frequency scaling (DVFS). Addressing this, we present HADAS, a novel Hardware-Aware Dynamic Neural Architecture Search framework that realizes DyNN architectures whose backbone, early-exiting features, and DVFS settings are jointly optimized to maximize performance and resource efficiency. Our experiments using the CIFAR-100 dataset and a diverse set of edge computing platforms show that HADAS can improve dynamic models' energy efficiency by up to 57% at the same accuracy. Our code is available at https://github.com/HalimaBouzidi/HADAS
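To illustrate the joint-optimization idea described in the summary, the sketch below exhaustively scores a toy search space over (backbone, early-exit threshold, DVFS level) and returns the most energy-efficient configuration that meets an accuracy floor. All names, numbers, and cost models here are hypothetical placeholders, not taken from the HADAS paper or its code; real NAS frameworks would use learned predictors and evolutionary or gradient-based search rather than brute force.

```python
import itertools

# Hypothetical toy search space; all numbers are illustrative only.
BACKBONES = {"small": (70.0, 8.0), "large": (74.0, 15.0)}  # (accuracy %, energy mJ)
EXIT_THRESHOLDS = [0.5, 0.7, 0.9]   # early-exit confidence thresholds
DVFS_LEVELS = {"low": 0.6, "mid": 0.8, "high": 1.0}  # relative clock frequency

def evaluate(backbone, threshold, dvfs):
    """Crude cost model (assumed, for illustration): lower exit thresholds
    let more samples leave early, saving energy but costing accuracy;
    lower DVFS levels save energy at the expense of latency."""
    acc, energy = BACKBONES[backbone]
    early_exit_rate = 1.0 - threshold          # assumption: low thresholds exit more often
    acc -= 3.0 * early_exit_rate               # accuracy penalty for early exits
    energy *= 1.0 - 0.5 * early_exit_rate      # energy saved by skipping late layers
    energy *= DVFS_LEVELS[dvfs] ** 2           # assumed quadratic energy-frequency scaling
    return acc, energy

def search(min_accuracy):
    """Score every (backbone, exit threshold, DVFS) combination jointly and
    keep the lowest-energy one that still meets the accuracy floor."""
    best = None
    for b, t, d in itertools.product(BACKBONES, EXIT_THRESHOLDS, DVFS_LEVELS):
        acc, energy = evaluate(b, t, d)
        if acc >= min_accuracy and (best is None or energy < best[1]):
            best = ((b, t, d), energy, acc)
    return best

config, energy, acc = search(min_accuracy=72.0)
print(config, round(energy, 2), round(acc, 2))
```

The point of the toy model is that the three knobs interact: an aggressive exit threshold can make a larger backbone affordable under a low DVFS level, which is why searching them jointly, as HADAS proposes, can beat tuning each one in isolation.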
ISSN:1558-1101
DOI:10.23919/DATE56975.2023.10137095