Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers

Bibliographic Details
Published in: 2018 IEEE Symposium on VLSI Circuits, pp. 33-34
Main Authors: Zhe Yuan, Jinshan Yue, Huanrui Yang, Zhibo Wang, Jinyang Li, Yixiong Yang, Qingwei Guo, Xueqing Li, Meng-Fan Chang, Huazhong Yang, Yongpan Liu
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2018
DOI: 10.1109/VLSIC.2018.8502404

Summary: Neural Networks (NNs) have emerged as a fundamental technology for machine learning. The sparsity of weights and activations in NNs varies widely, from 5% to 90%, and can potentially lower computation requirements. However, existing designs lack a universal solution that efficiently handles the different sparsity found across layers and networks. This work, named STICKER, first systematically explores NN sparsity for inference and online tuning operations. Its major contributions are: 1) an autonomous NN sparsity detector that switches the processor's operating modes; 2) multi-sparsity-compatible convolution (CONV) PE arrays containing a multi-mode memory that supports different sparsity levels, plus set-associative PEs that support both dense and sparse operations and reduce memory area by 92% compared with previous hash memory banks; 3) an online tuning PE for sparse FC layers that achieves a 32.5x speedup over a conventional CPU, using quantization-center-based weight updating and Compressed Sparse Column (CSC) based back-propagation. Peak energy efficiency of the 65nm STICKER chip reaches 62.1 TOPS/W at 8-bit data length.
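The abstract names Compressed Sparse Column (CSC) storage as the format behind the online tuning PE's sparse FC back-propagation. The short Python/SciPy sketch below is only an illustration of the standard CSC layout, not the STICKER hardware datapath; the 4x4 weight matrix and its values are hypothetical.

```python
# Illustrative sketch of standard CSC storage (not the STICKER datapath).
import numpy as np
from scipy.sparse import csc_matrix

# Hypothetical 4x4 FC weight matrix with ~75% of entries zero.
W = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.2],
    [0.1, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.0],
])

W_csc = csc_matrix(W)
print(W_csc.data)     # non-zero values, stored column by column: [0.1 0.5 0.3 0.2]
print(W_csc.indices)  # row index of each stored non-zero:        [2 0 3 1]
print(W_csc.indptr)   # offset where each column's non-zeros start: [0 1 2 3 4]

# A sparse matrix-vector product touches only the stored non-zeros,
# which is why CSC-style storage cuts work when weight sparsity is high.
x = np.ones(4)
print(W_csc @ x)
```

Because CSC groups the non-zeros of each column together with explicit row indices, column-oriented traversals skip zero weights entirely, which is the property a sparse FC back-propagation pass exploits.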