Recognition system of convolutional neural network based on FPGA acceleration
A convolutional neural network (CNN) inference system is designed on an FPGA platform to address the low inference speed and high power consumption of CNNs on general-purpose CPU and GPU platforms. Through computing-resource reuse, parallel data processing, and pipeline design, the system greatly improves computing speed; it further reduces the use of computing and storage resources through model compression and a sparse matrix multiplier that exploits the sparsity of the fully connected layer. The system is evaluated on the ORL face database. Experimental results show that, at a working frequency of 100 MHz, inference performance is 10.24 times that of the CPU, 3.08 times that of the GPU, and 1.56 times that of the benchmark version, with power consumption below 2 W. With the model compressed by a factor of 4, recognition accuracy is 95%.
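The sparse matrix multiplier mentioned in the abstract exploits the fact that a pruned fully connected layer stores and multiplies only nonzero weights. As a minimal software sketch of that idea (the paper does not give its hardware implementation; the CSR layout and function names below are illustrative assumptions), a compressed sparse row (CSR) matrix-vector product skips every zero weight:

```python
def dense_to_csr(matrix):
    """Convert a dense 2-D list into CSR form: (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # cumulative nonzero count per row
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = W @ x, touching only the stored nonzeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# Illustrative 3x4 weight matrix with 75% zeros: only 3 of 12
# multiply-accumulates are actually performed.
W = [[0, 2, 0, 0],
     [0, 0, 0, 3],
     [1, 0, 0, 0]]
vals, cols, ptrs = dense_to_csr(W)
print(csr_matvec(vals, cols, ptrs, [1, 1, 1, 1]))  # [2, 3, 1]
```

On an FPGA the same access pattern maps naturally to a pipelined multiply-accumulate fed from on-chip BRAM, which is one reason pruned fully connected layers compress well in hardware.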
Published in: Diànzǐ jìshù yīngyòng, Vol. 46, No. 2, pp. 24-27
Format: Journal Article
Language: Chinese
Published: National Computer System Engineering Research Institute of China, 01.02.2020
ISSN: 0258-7998
DOI: 10.16157/j.issn.0258-7998.191000