Direct Conversion: Accelerating Convolutional Neural Networks Utilizing Sparse Input Activation


Bibliographic Details
Published in: IECON 2020 The 46th Annual Conference of the IEEE Industrial Electronics Society, pp. 441 - 446
Main Authors: Lee, Won-Hyuk, Roh, Si-Dong, Park, Sangki, Chung, Ki-Seok
Format: Conference Proceeding
Language: English
Published: IEEE, 18.10.2020

Summary: The amount of computation and the number of parameters in neural networks are growing rapidly as convolutional neural networks (CNNs) become deeper. Therefore, it is crucial to reduce both the amount of computation and the memory usage. Pruning, which compresses a neural network, has been actively studied. Depending on the layer characteristics, the sparsity level of each layer varies significantly after pruning. If weights are sparse, most results of convolution operations will be zeros. Although several studies have proposed methods that exploit weight sparsity to avoid carrying out meaningless operations, they overlook the fact that input activations may also have a high sparsity level. The Rectified Linear Unit (ReLU) is one of the most popular activation functions because it is simple yet effective. Due to the properties of the ReLU function, the input activation sparsity level is often high (up to 85%). Therefore, to accelerate CNNs while minimizing meaningless computation, it is important to consider both the input activation sparsity and the weight sparsity. In this paper, we propose a new acceleration method called Direct Conversion that considers weight sparsity under the sparse input activation condition. Direct Conversion converts a 3D input tensor directly into a compressed format. It selectively applies one of two methods: image to Compressed Sparse Row (im2CSR) when input activations are sparse and weights are dense, and image to Compressed Sparse Overlapped Activations (im2CSOA) when both input activations and weights are sparse. Our experimental results show that Direct Conversion improves inference speed by up to 2.82× compared to the conventional method.
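The abstract does not give the exact memory layout used by im2CSR, so the following is only an illustrative sketch: it builds a standard CSR representation (values, column indices, row pointers) from a small post-ReLU activation matrix, showing why activations with many ReLU-induced zeros compress well. The matrix shape and values are invented for the example.

```python
# Illustrative sketch only: the paper's im2CSR converts a 3D input tensor
# directly into a compressed format; this example uses a plain 2D matrix
# and the textbook CSR layout to convey the idea.

def relu(x):
    """ReLU zeroes out negatives, which is what creates activation sparsity."""
    return x if x > 0 else 0.0

def to_csr(matrix):
    """Convert a dense 2D list into CSR arrays, storing only nonzeros."""
    values, col_indices, row_ptr = [], [], [0]
    for row in matrix:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_indices.append(j)
        row_ptr.append(len(values))  # cumulative nonzero count per row
    return values, col_indices, row_ptr

# Hypothetical pre-activation feature map; ReLU discards the negatives.
pre_act = [
    [-1.0,  2.0, -3.0,  0.0],
    [ 0.0, -5.0,  6.0, -7.0],
    [ 8.0,  0.0,  0.0, -2.0],
]
act = [[relu(v) for v in row] for row in pre_act]

values, col_indices, row_ptr = to_csr(act)
print(values)       # [2.0, 6.0, 8.0]
print(col_indices)  # [1, 2, 0]
print(row_ptr)      # [0, 1, 2, 3]
```

With 9 of 12 entries zeroed out, CSR stores 3 values plus small index arrays instead of the full dense matrix; a sparse convolution kernel can then iterate only over the stored nonzeros, which is the kind of wasted-work avoidance the paper targets.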
ISSN:2577-1647
DOI:10.1109/IECON43393.2020.9254473