Energy-Efficient, Low-Latency Realization of Neural Networks Through Boolean Logic Minimization


Bibliographic Details
Published in: 2019 24th Asia and South Pacific Design Automation Conference (ASP-DAC), pp. 1 - 6
Main Authors: Nazemi, Mahdi; Pasandi, Ghasem; Pedram, Massoud
Format: Conference Proceeding
Language: English
Published: ACM, 21.01.2019

Summary: Deep neural networks have been successfully deployed in a wide variety of applications, including computer vision and speech recognition. To cope with the computational and storage complexity of these models, this paper presents a training method that enables a radically different approach to the realization of deep neural networks through Boolean logic minimization. This realization completely removes the energy-hungry step of accessing memory to obtain model parameters, consumes about two orders of magnitude fewer computing resources than realizations based on floating-point operations, and has substantially lower latency.
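To illustrate the core idea in the summary, the sketch below shows how a neuron whose (binarized) weights are fixed at synthesis time can be folded into pure Boolean logic, so no parameters need to be fetched from memory at inference time. This is a toy illustration only, not the authors' actual flow: the weights, threshold, and the `neuron`/`minimized` functions are assumptions made for the example.

```python
# Illustrative sketch (NOT the paper's exact method): fold a fixed-weight
# binarized neuron into Boolean logic, eliminating parameter memory access.
from itertools import product

# Hypothetical fixed binary weights (+1/-1) and threshold for one neuron.
WEIGHTS = (1, -1, 1)
THRESHOLD = 0

def neuron(bits):
    """Binarized neuron: thresholded weighted sum of +/-1-encoded inputs."""
    acc = sum(w * (1 if b else -1) for w, b in zip(WEIGHTS, bits))
    return acc > THRESHOLD

# Enumerate the neuron's truth table once at "synthesis" time; a logic
# minimizer would then reduce it to a small sum-of-products circuit with
# the weights absorbed into the logic itself.
truth_table = {bits: neuron(bits) for bits in product((0, 1), repeat=3)}

def minimized(b0, b1, b2):
    # Minimized sum-of-products for this particular weight choice:
    # the neuron computes the majority of (b0, NOT b1, b2).
    return bool((b0 and not b1) or (b0 and b2) or (not b1 and b2))

# The minimized circuit matches the neuron on every input.
assert all(minimized(*bits) == truth_table[bits] for bits in truth_table)
```

Once minimized, the neuron is a handful of gates with no stored weights, which is the source of the energy and latency savings the abstract describes.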
ISSN:2153-697X
DOI:10.1145/3287624.3287722