The Road to Widely Deploying Processing-in-Memory: Challenges and Opportunities

Bibliographic Details
Published in: Proceedings / IEEE Computer Society Annual Symposium on VLSI, pp. 259-260
Main Author: Ghose, Saugata
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2022

Summary: Processing-in-memory (PIM) refers to a computing paradigm in which some or all of the computation for an application is moved closer to where the data resides (e.g., in main memory). While PIM has been the subject of ongoing research since the 1970s [8], [11], [17], [19], [26], [28], [29], [33], it has experienced a resurgence in the last decade due to (1) the pressing need to reduce the energy and latency overheads associated with data movement between the CPU and memory in conventional systems [6], [18], and (2) recent innovations in memory technologies that can enable PIM integration (e.g., [13]-[16], [20], [21], [24], [31]). Recently released products and prototypes, ranging from programmable near-memory processing units [7], [36] to custom near-bank accelerators for machine learning [22], [23], [30] and analog compute support within memory arrays [9], [27], have demonstrated the viability of manufacturing PIM architectures.
ISSN: 2159-3477
DOI: 10.1109/ISVLSI54635.2022.00057