A GNN Computing-in-Memory Macro and Accelerator with Analog-Digital Hybrid Transformation and CAM-enabled Search-reduce

Bibliographic Details
Published in: 2023 IEEE Custom Integrated Circuits Conference (CICC), pp. 1-2
Main Authors: Wang, Yipeng; Xie, Shanshan; Rohan, Jacob; Wang, Meizhi; Yang, Mengtian; Oruganti, Sirish; Kulkarni, Jaydeep P.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2023

Summary: Graph Neural Networks (GNNs) have recently found many exciting applications. Despite previous approaches [1], [2], accelerating spatial GNNs remains challenging due to their unbalanced computing flow, poor locality, high sparsity, and high memory-bandwidth requirements, especially for edge applications such as real-time motion detectors and point-cloud processing. This work presents the first GNN computing-in-memory (CIM) macro and accelerator chip, addressing these issues and achieving up to a 78.6× improvement in system energy efficiency compared with previous implementations.
ISSN: 2152-3630
DOI: 10.1109/CICC57935.2023.10121238