DCNN accelerator based on mixed multi-row data stream strategy

Bibliographic Details
Main Authors: LUO CONGHUI, HUANG WENJIN, HUANG YIHUA
Format: Patent
Language: Chinese, English
Published: 08.07.2022

Summary: The invention discloses a DCNN accelerator based on a mixed multi-row data stream strategy. The accelerator is formed by stacking multiple convolution processing modules, each comprising several parallel computing unit arrays, a compute buffer and a data buffer. Data is transmitted between adjacent convolution processing modules one row at a time: incoming rows are stored in the data buffer, read out in sequence, rearranged, and then written into the compute buffer for processing by the computing unit arrays. Each computing unit array is responsible for a single row of the output feature map, all arrays share the same weight data, and all weights are stored in off-chip DRAM (Dynamic Random Access Memory). The off-chip bandwidth usage can be adjusted by tuning the parallelism (the number of computing unit arrays) of each convolution processing module, and the prob…
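The dataflow described in the summary (row-by-row transfer between modules, a row-oriented data buffer, rearrangement into a compute buffer, and one computing unit array per output row with shared weights) can be illustrated with a small simulation. The sketch below is a minimal Python/NumPy model written for this record; the class and method names (ConvProcessingModule, receive_row, compute), the square-kernel assumption, and the buffer-retirement policy are illustrative assumptions, not identifiers or details taken from the patent.

```python
# Minimal sketch of a row-based convolution processing module (illustrative only).
import numpy as np

class ConvProcessingModule:
    """Model of one convolution processing module.

    Rows arrive one at a time in a data buffer, are rearranged into kernel-high
    windows (the "compute buffer"), and up to `parallelism` computing unit
    arrays each produce one output row using the same shared weights.
    """
    def __init__(self, weights, parallelism):
        self.weights = weights          # shared by all PE arrays (kept in off-chip DRAM in the patent)
        self.parallelism = parallelism  # number of parallel computing unit arrays (bandwidth knob)
        self.k = weights.shape[0]       # square kernel assumed for this sketch
        self.data_buffer = []           # rows received from the previous module

    def receive_row(self, row):
        """Adjacent modules exchange data one row at a time."""
        self.data_buffer.append(row)

    def ready_output_rows(self):
        """Number of complete output rows currently computable from the buffer."""
        return max(0, len(self.data_buffer) - self.k + 1)

    def compute(self):
        """Rearrange buffered rows and compute up to `parallelism` output rows."""
        n_rows = min(self.parallelism, self.ready_output_rows())
        out_rows = []
        for r in range(n_rows):                                    # one PE array per output row
            window_rows = np.stack(self.data_buffer[r:r + self.k]) # rearrangement into the compute buffer
            out_w = window_rows.shape[1] - self.k + 1
            row_out = np.empty(out_w)
            for c in range(out_w):                                 # slide the kernel along the row
                row_out[c] = np.sum(window_rows[:, c:c + self.k] * self.weights)
            out_rows.append(row_out)
        del self.data_buffer[:n_rows]                              # retire rows no longer needed
        return out_rows

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    module = ConvProcessingModule(weights=rng.standard_normal((3, 3)), parallelism=2)
    for row in rng.standard_normal((8, 8)):        # stream an 8x8 feature map row by row
        module.receive_row(row)
    produced = []
    while module.ready_output_rows() > 0:          # drain: up to `parallelism` rows per pass
        produced.extend(module.compute())
    print(f"produced {len(produced)} output rows of width {produced[0].size}")
```

With a 3x3 kernel the 8x8 input yields six output rows of width six; raising `parallelism` lets more output rows be produced per pass over the shared weights, which is, loosely, the trade-off the summary associates with adjustable off-chip bandwidth usage.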
Bibliography: Application Number: CN202210482658