Memristive Accelerators for Dense and Sparse Linear Algebra: From Machine Learning to High-Performance Scientific Computing
| Published in | IEEE Micro, Vol. 39, no. 1, pp. 58-61 |
|---|---|
| Main Author | |
| Format | Journal Article |
| Language | English |
| Published | Los Alamitos: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2019 |
| Subjects | |
| Summary | Initial research shows that in situ accelerators can be architected to meet the requirements of applications beyond machine learning while still preserving the performance and energy benefits of in situ acceleration. However, challenges remain in improving the efficiency of these techniques. Although memristive accelerators significantly reduce data movement, coordinating computation and marshaling data within a large accelerator, or between systems of multiple accelerators, remain largely unexplored. Additionally, many proposed in situ accelerators rely on the matrix being written infrequently, so that writes can be amortized over a large number of MVM operations; this limits their applicability to workloads with infrequent matrix updates. If these challenges are addressed, in situ memristive computation can benefit an enormous number of applications with linear algebra at their core. |
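The summary's central idea, in situ matrix-vector multiplication (MVM) with writes amortized over many reads, can be illustrated with a small sketch. This is a hypothetical toy model, not the paper's design: a crossbar stores the matrix as cell conductances, applies the input vector as word-line voltages, and reads each bit-line current as a dot product; all names and numbers below are illustrative assumptions.

```python
import numpy as np

def crossbar_mvm(conductances, voltages):
    """Idealized in situ MVM: each bit line sums currents (Kirchhoff's
    current law), so the read-out is I = G^T @ V in a single analog step."""
    return conductances.T @ voltages

# Toy 3x2 matrix programmed into the crossbar (conductances, siemens)
# and an input vector applied as word-line voltages (volts).
G = np.array([[1.0, 2.0],
              [0.5, 1.5],
              [2.0, 0.5]])
V = np.array([1.0, 2.0, 3.0])

I = crossbar_mvm(G, V)
print(I)  # -> [8.  6.5]

# The write-amortization concern from the summary: programming the array
# costs on the order of one write per cell, so the accelerator pays off
# only when the same matrix is reused across many MVM operations.
writes = G.size            # 6 cell writes to load the matrix
reuse_count = 100          # hypothetical number of MVMs before an update
print(writes / reuse_count)  # per-MVM write overhead shrinks with reuse
```

Frequent matrix updates shrink `reuse_count`, which is exactly the limitation the summary flags for workloads that rewrite the matrix often.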
| ISSN | 0272-1732, 1937-4143 |
| DOI | 10.1109/MM.2018.2885498 |