Memristive Accelerators for Dense and Sparse Linear Algebra: From Machine Learning to High-Performance Scientific Computing


Bibliographic Details
Published in: IEEE Micro, Vol. 39, No. 1, pp. 58-61
Main Author: Ipek, Engin
Format: Journal Article
Language: English
Published: Los Alamitos: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2019

Summary: Initial research shows that in situ accelerators can be architected to meet the requirements of applications beyond machine learning while still preserving the performance and energy benefits of in situ acceleration. However, challenges remain in improving the efficiency of these techniques. Although memristive accelerators significantly reduce data movement, coordinating computation and marshaling data within a large accelerator, or between systems of multiple accelerators, remains largely unexplored. Additionally, many proposed in situ accelerators rely on the matrix being written infrequently, so that the cost of writes can be amortized over a large number of matrix-vector multiplication (MVM) operations; this limits their applicability to workloads with infrequent matrix updates. If these challenges are addressed, in situ memristive computation can provide benefits to an enormous number of applications with linear algebra at their core.
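The write-amortization point in the summary can be illustrated with a minimal numerical sketch of in situ MVM: the matrix is programmed once into cell conductances (the expensive, infrequent write), after which each MVM is a cheap analog read in which input voltages drive the rows and column currents sum the products. The function names, the 4-bit cell resolution, and the quantization scheme below are illustrative assumptions, not details from the article.

```python
import numpy as np

def program_crossbar(matrix, levels=16):
    """Expensive one-time 'write': quantize matrix entries to a finite
    set of conductance levels (e.g., 4-bit cells). Illustrative only."""
    lo, hi = matrix.min(), matrix.max()
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    # Snap each entry to the nearest representable conductance level.
    return np.round((matrix - lo) / scale) * scale + lo

def crossbar_mvm(conductances, voltages):
    """Cheap, repeatable 'read': with input voltages V applied to the
    rows, the current on column j is sum_i G[i, j] * V[i]
    (Ohm's law per cell, Kirchhoff's current law per column)."""
    return conductances.T @ voltages

# Program the matrix once, then reuse it for many MVMs.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
G = program_crossbar(A)           # amortized over all later reads
x = np.array([1.0, 0.5])
y = crossbar_mvm(G, x)            # approximates A.T @ x
```

This toy model also shows why frequent matrix updates erode the benefit: every update re-pays the programming cost in `program_crossbar`, while the savings accrue only across repeated calls to `crossbar_mvm`.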
ISSN: 0272-1732, 1937-4143
DOI: 10.1109/MM.2018.2885498