Energy-Efficient, Low-Latency Realization of Neural Networks Through Boolean Logic Minimization
Published in: 2019 24th Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 1-6
Main Authors: , ,
Format: Conference Proceeding
Language: English
Published: ACM, 21.01.2019
Summary: Deep neural networks have been successfully deployed in a wide variety of applications, including computer vision and speech recognition. To cope with the computational and storage complexity of these models, this paper presents a training method that enables a radically different approach to realizing deep neural networks through Boolean logic minimization. This realization completely removes the energy-hungry step of accessing memory to fetch model parameters, consumes roughly two orders of magnitude fewer computing resources than floating-point realizations, and has substantially lower latency.
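To make the summary's core idea concrete, here is a minimal illustrative sketch (not the paper's actual method): a single binarized neuron whose fixed weights are assumed to be `(+1, -1, +1)`. Once the weights are frozen, the neuron's truth table can be minimized into pure Boolean logic, so no weight memory needs to be accessed at inference time.

```python
# Hypothetical 3-input binarized neuron, realized two ways.

def neuron_arithmetic(x0, x1, x2):
    """Baseline: signed dot product, as if weights were fetched from memory."""
    weights = (1, -1, 1)                      # assumed fixed weights
    bipolar = [1 if b else -1 for b in (x0, x1, x2)]
    s = sum(w * v for w, v in zip(weights, bipolar))
    return 1 if s > 0 else 0

def neuron_logic(x0, x1, x2):
    """Same neuron after Boolean minimization: majority of (x0, NOT x1, x2).
    The parameters are baked into the gates; no weight storage remains."""
    n1 = x1 ^ 1                               # NOT x1 for a 0/1 input
    return (x0 & n1) | (x0 & x2) | (n1 & x2)

# The minimized logic agrees with the arithmetic version on every input.
for x0 in (0, 1):
    for x1 in (0, 1):
        for x2 in (0, 1):
            assert neuron_logic(x0, x1, x2) == neuron_arithmetic(x0, x1, x2)
```

The logic version uses only three AND gates and two OR gates, which hints at why such a realization can cut both energy and latency: the multiply-accumulate and the parameter fetch disappear entirely.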
ISSN: 2153-697X
DOI: 10.1145/3287624.3287722