Low Power 10T SRAM Based Computing in Memory Macro Architecture for Binary MAC Operation of Edge AI Processors

Bibliographic Details
Published in: 2024 1st International Conference on Trends in Engineering Systems and Technologies (ICTEST), pp. 01-05
Main Authors: A V, Arun; V, Hareesh; S A, Ajay Nath; M, Sajeesh
Format: Conference Proceeding
Language: English
Published: IEEE, 11.04.2024

Summary: Implementation of deep neural networks (DNNs) using a computing-in-memory architecture is now a preferred approach, since it reduces both power consumption and delay, making DNNs suitable for IoT applications with edge AI processors. The paper reports a computing-in-memory (CIM) architecture built around a low-power 10T SRAM cell. The 10T SRAM cell uses a cross-coupled arrangement of an inverter and a Schmitt trigger, which eliminates the possibility of read disturbance. Furthermore, it incorporates a write-assist technique and performs pseudo-differential writing through the bitline. The 10T SRAM cell exhibits better read delay, write delay, RSNM and WSNM compared with other SRAM cells. The CIM macro provides clearly distinguishable output '0' and output '1' levels, validating the MAC operation.
DOI: 10.1109/ICTEST60614.2024.10576118
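
The abstract describes a binary MAC operation performed inside the CIM macro, where bits stored in the 10T SRAM cells act as weights and the bitline accumulates the bitwise products. Below is a minimal software analogue of that behaviour; the function name binary_mac and the sample bit vectors are hypothetical and illustrate only the bitwise multiply-and-accumulate idea, not the authors' circuit implementation.

# Behavioural sketch of a binary MAC, assuming a 1-bit-input, 1-bit-weight
# scheme: each stored bit is a weight, each input bit gates its wordline,
# and the accumulated sum mimics bitline current summation in the CIM macro.

def binary_mac(inputs, weights):
    """Multiply 1-bit inputs with 1-bit stored weights (bitwise AND)
    and accumulate the products."""
    assert len(inputs) == len(weights)
    return sum(x & w for x, w in zip(inputs, weights))

if __name__ == "__main__":
    activations = [1, 0, 1, 1]   # input bits driving the wordlines (hypothetical)
    stored_bits = [1, 1, 0, 1]   # weight bits held in the SRAM cells (hypothetical)
    print(binary_mac(activations, stored_bits))  # prints 2

In hardware, the distinguishable output '0' and output '1' levels reported in the paper correspond to sensing whether this accumulated value crosses a threshold; the sketch above only models the arithmetic, not the analog sensing.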