Mixed-8T: Energy-Efficient Configurable Mixed-VT SRAM Design Techniques for Neural Networks

Bibliographic Details
Published in: 2022 35th International Conference on VLSI Design and 2022 21st International Conference on Embedded Systems (VLSID), pp. 174 - 179
Main Authors: Surana, Neelam; Bharti, Pramod Kumar; Tej, Bachu Varun; Mekie, Joycee
Format: Conference Proceeding
Language: English
Published: IEEE, 01.02.2022
Summary: Artificial Neural Network-based applications such as pattern recognition and image classification consume a significant amount of energy while accessing memory. Various techniques to reduce these energy demands in SRAM, including heterogeneous and hybrid SRAM designs, have been proposed in earlier works. However, these designs still consume significant energy at higher voltages and suffer from area overhead. Considering these issues, we propose 7 different homogeneous Mixed-VT 8T SRAM architectures for neural networks that overcome them. We analyze the effect of truncation on different neural networks for different datasets and further apply the truncation technique to the SRAM architecture used for ANNs. We design the Mixed-VT 8T SRAM architecture and validate its suitability for 5 different neural networks. For 6-bit neural-network weights, our proposed Mixed-VT 8T SRAM architecture requires at most 0.34× (0.46×) and 0.56× (0.69×) the dynamic energy (leakage power) of the Het-6T and Hyb-8T/6T SRAM architectures, respectively, at 0.5 V, and at most 0.7× (0.84×) and 0.92× (0.90×) the dynamic energy (leakage power), respectively, at 0.7 V.
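The abstract applies a truncation technique to 6-bit neural-network weights. The paper's exact scheme is not given here; as a minimal illustrative sketch (the function name, value range, and fixed-point format are assumptions, not the authors' method), truncating a real-valued weight to a signed 6-bit fixed-point value could look like:

```python
def truncate_weight(w: float, bits: int = 6) -> float:
    """Truncate w (assumed in [-1, 1)) to a `bits`-bit signed
    fixed-point value by dropping the low-order fractional bits."""
    scale = 1 << (bits - 1)             # e.g. 32 for 6-bit two's complement
    q = int(w * scale)                  # int() truncates toward zero
    q = max(-scale, min(scale - 1, q))  # clamp to representable range
    return q / scale

weights = [0.7312, -0.4891, 0.0153]
print([truncate_weight(w) for w in weights])  # [0.71875, -0.46875, 0.0]
```

Storing only the truncated high-order bits is what lets a mixed-VT SRAM array keep the frequently accessed significant bits in faster/leakier cells while relaxing the rest, which is the general motivation behind such bit-level energy optimizations.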
ISSN: 2380-6923
DOI: 10.1109/VLSID2022.2022.00043