Design and implementation of near-memory computing array architecture based on shared buffer

Bibliographic Details
Published in: High Technology Letters (高技术通讯, English edition), Vol. 28, No. 4, pp. 345-353
Main Authors: SHAN Rui, GAO Xu, FENG Yani, HUI Chao, CUI Xinyue, CHAI Miaomiao
Format: Journal Article
Language: English
Published: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, P.R. China; School of Computer, Xi'an University of Posts and Telecommunications, Xi'an 710121, P.R. China, 01.12.2022
Summary: Deep learning algorithms have been widely used in computer vision, natural language processing, and other fields. However, as deep learning models keep growing in scale, their storage and computing requirements rise accordingly, and processors based on the von Neumann architecture have exposed significant shortcomings such as high power consumption and long latency. To alleviate this problem, large-scale processing systems are shifting from a traditional computing-centric model to a data-centric model. This paper proposes a near-memory computing array architecture based on a shared buffer to improve system performance. The architecture supports instructions that integrate storage and computation, reducing data movement between the processor and main memory, and further accelerates the algorithm through data reuse. The proposed architecture is verified and tested through a parallel implementation of a convolutional neural network (CNN) algorithm. The experimental results show that at a frequency of 110 MHz, the speed of a single convolution operation is increased by 66.64% on average compared with a CNN architecture that performs parallel calculations on a field-programmable gate array (FPGA). The processing speed of the whole convolution layer is improved by 8.81% compared with a reconfigurable array processor that does not support near-memory computing.
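The data-reuse idea mentioned in the abstract can be illustrated with a small software sketch (this is not the paper's hardware design, only an analogy): in a direct 2-D convolution every output pixel re-reads its whole input window, whereas a row buffer that slides over the input lets overlapping windows reuse rows that were already staged, much as a shared buffer reduces traffic to main memory. The function names and the list-based "buffer" here are hypothetical illustrations.

```python
import numpy as np

def conv2d_direct(x, k):
    """Direct 2-D cross-correlation (the CNN convention, valid padding):
    every output pixel re-reads its full kh*kw input window from the
    array, the analogue of repeated main-memory accesses."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conv2d_row_buffered(x, k):
    """Same result, but input rows are staged once into a small sliding
    'shared buffer' (a list of kh rows) and reused by every output
    position that overlaps them -- the data-reuse idea in software form."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    buffer = [x[r] for r in range(kh)]          # initial fill: rows 0..kh-1
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # each buffered row is dotted with its kernel row
            out[i, j] = sum(float(buffer[r][j:j + kw] @ k[r]) for r in range(kh))
        if i + kh < H:                          # slide: load one new row, drop the oldest
            buffer = buffer[1:] + [x[i + kh]]
    return out
```

Both functions produce identical outputs; the buffered version simply loads each input row once per window position it participates in, rather than once per output pixel.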
ISSN: 1006-6748
DOI: 10.3772/j.issn.1006-6748.2022.04.002