Two-dimensional mesh for compute-in-memory accelerator architecture

Bibliographic Details
Main Authors: TSAI, HSINYU; BURR, GEOFFREY; NARAYANAN, PRITISH; STANISAVLJEVIC, MILOS; JAIN, SHUBHAM
Format: Patent
Language: Chinese, English
Published: 16.10.2023
Summary: Embodiments disclosed herein include a compute-in-memory (CIM) accelerator architecture for deep neural networks (DNNs). The CIM accelerator architecture may include a first analog fabric engine having a plurality of compute-in-memory (CIM) analog tiles. Each CIM analog tile may be configured to store a matrix of weight operands and to perform in-memory computations that produce a vector of outputs from a vector of inputs. The first analog fabric engine may also include a plurality of compute cores. Each CIM analog tile and each compute core may include a microcontroller configured to execute a set of instructions. The first analog fabric engine may also include on-chip interconnects communicatively connecting all CIM analog tiles in the plurality of CIM analog tiles to the compute cores.
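
The patent record contains no code; the following is a minimal behavioral sketch, assuming a simple Python/NumPy model, of the abstraction the summary describes: each CIM analog tile holds a stationary weight matrix and produces an output vector from an input vector in place, while an analog fabric engine routes tile outputs to compute cores over an interconnect. All names here (CIMAnalogTile, ComputeCore, AnalogFabricEngine, run_layer) are hypothetical and for illustration only, not taken from the patent.

```python
# Illustrative sketch (not from the patent) of the described architecture:
# each tile stores a matrix of weight operands and computes y = W @ x
# "in memory"; an analog fabric engine connects tiles to compute cores
# that post-process tile outputs, mimicking the on-chip interconnect.
import numpy as np


class CIMAnalogTile:
    """Holds a matrix of weight operands; produces an output vector from an input vector."""

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # stationary weights stored in the tile

    def multiply(self, x: np.ndarray) -> np.ndarray:
        # Stand-in for the analog in-memory multiply-accumulate operation.
        return self.weights @ x


class ComputeCore:
    """Digital core that post-processes tile outputs (hypothetical ReLU here)."""

    def process(self, y: np.ndarray) -> np.ndarray:
        return np.maximum(y, 0.0)


class AnalogFabricEngine:
    """Routes every tile's output to a compute core over a shared interconnect."""

    def __init__(self, tiles, cores):
        self.tiles = tiles
        self.cores = cores

    def run_layer(self, x: np.ndarray) -> np.ndarray:
        # Each tile handles a slice of the layer; outputs are concatenated.
        outputs = [core.process(tile.multiply(x))
                   for tile, core in zip(self.tiles, self.cores)]
        return np.concatenate(outputs)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tiles = [CIMAnalogTile(rng.standard_normal((4, 8))) for _ in range(2)]
    cores = [ComputeCore() for _ in range(2)]
    engine = AnalogFabricEngine(tiles, cores)
    print(engine.run_layer(rng.standard_normal(8)))  # 8-dim input -> 8-dim output
```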
Bibliography: Application Number: TW202312101704