Efficient Time-Multiplexed Realization of Feedforward Artificial Neural Networks

Bibliographic Details
Published in: 2020 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-5
Main Authors: Aksoy, Levent; Parvin, Sajjad; Nojehdeh, Mohammadreza Esmali; Altun, Mustafa
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2020
Summary: This paper presents techniques and design structures to reduce the time-multiplexed hardware complexity of a feed-forward artificial neural network (ANN). After the ANN weights are determined in a training phase, a post-training stage first finds the minimum quantization value used to convert the floating-point weights to integers. Then, the integer weights of each neuron are tuned to reduce the hardware complexity of the time-multiplexed design while avoiding a loss of ANN accuracy in hardware. Also, at each layer of the ANN, the multiplications of integer weights by an input variable at each time step are realized under the shift-adds architecture using a minimum number of adders and subtractors. It is observed that applying the post-training stage yields a significant reduction in area, latency, and energy consumption for time-multiplexed designs that include multipliers. Moreover, a multiplierless ANN design whose weights are found in the post-training stage leads to a further reduction in area and energy consumption, while increasing latency slightly.
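The shift-adds architecture mentioned in the abstract replaces each constant-by-variable multiplication with shifts, additions, and subtractions. A minimal illustrative sketch of the idea is a canonical signed-digit (CSD) recoding of an integer weight, which reduces the number of nonzero terms and hence the adder/subtractor count; the paper itself applies a dedicated optimization over all weights of a layer, so the helper names `to_csd_digits` and `shift_add_mul` below are hypothetical and purely for illustration:

```python
def to_csd_digits(w):
    """Recode a nonnegative integer into canonical signed-digit form:
    digits in {-1, 0, +1}, least significant first, with no two adjacent
    nonzero digits, so w == sum(d * 2**i for i, d in enumerate(digits))."""
    digits = []
    while w:
        if w & 1:
            d = 2 - (w & 3)  # +1 if w % 4 == 1, -1 if w % 4 == 3
            w -= d           # makes w even before the shift
        else:
            d = 0
        digits.append(d)
        w >>= 1
    return digits

def shift_add_mul(w, x):
    """Multiply x by the constant weight w using only shifts,
    additions, and subtractions (no multiplier)."""
    sign = -1 if w < 0 else 1
    acc = 0
    for i, d in enumerate(to_csd_digits(abs(w))):
        if d == 1:
            acc += x << i
        elif d == -1:
            acc -= x << i
    return sign * acc

# Example: 23 = 32 - 8 - 1, so 23 * x costs two subtractors and one
# shifted add instead of the four adders of its binary form 10111.
print(shift_add_mul(23, 5))   # → 115
```

CSD gives at most one nonzero digit per two bit positions, which is why such recodings (and the stronger multi-weight sharing the paper optimizes for) shrink the adder/subtractor count of the multiplierless design.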
ISBN: 9781728133201; 1728133203
ISSN: 2158-1525
DOI: 10.1109/ISCAS45731.2020.9181002