The Speed Improvement by Merging Batch Normalization into Previously Linear Layer in CNN
| Published in | 2018 International Conference on Audio, Language and Image Processing (ICALIP), pp. 67 - 72 |
| --- | --- |
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.07.2018 |
| Summary | With the development of deep learning, convolutional neural networks are growing deeper, which places considerable demands on the computing power and storage capacity of the deployment environment. To make better use of convolutional neural networks on mobile phones, embedded platforms, and other platforms with limited computing power, this paper proposes to accelerate the neural network by merging Batch Normalization into the preceding linear layers, and analyzes the feasibility theoretically. Experimental results on a CPU, a GPU, and a Raspberry Pi validate the effectiveness of this method. Experiments using the Caffe framework show that merging Batch Normalization into the preceding linear layers can speed up the neural network by 30% to 50%. (A sketch of this folding arithmetic appears below the record.) |
| DOI | 10.1109/ICALIP.2018.8455587 |
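
The abstract's core idea is that, at inference time, a frozen Batch Normalization layer is an affine per-channel transform, so it can be folded into the weights and bias of the preceding convolutional or fully connected layer. Below is a minimal NumPy sketch of that folding arithmetic, not the paper's Caffe implementation; the function name `fold_batchnorm` and the default `eps` are illustrative assumptions.

```python
import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold an inference-time BatchNorm into the preceding linear layer.

    W: weights with the output-channel axis first, e.g. (C_out, C_in)
       for a fully connected layer or (C_out, C_in, kH, kW) for a conv.
    b: bias of shape (C_out,); pass zeros if the layer has no bias.
    gamma, beta, mean, var: frozen BatchNorm parameters, shape (C_out,).

    Returns (W_f, b_f) such that BN(W x + b) == W_f x + b_f.
    """
    scale = gamma / np.sqrt(var + eps)                 # per-channel scale
    # Broadcast the scale over every weight axis except the output channels.
    W_f = W * scale.reshape(-1, *([1] * (W.ndim - 1)))
    b_f = (b - mean) * scale + beta
    return W_f, b_f

# Quick numerical check on a toy fully connected layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var = rng.standard_normal(4), rng.random(4) + 0.5
x = rng.standard_normal(8)

bn_out = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
W_f, b_f = fold_batchnorm(W, b, gamma, beta, mean, var)
assert np.allclose(bn_out, W_f @ x + b_f)
```

Because the folded layer does exactly the work of the original linear layer alone, the Batch Normalization computation disappears at inference time, which is consistent with the 30% to 50% speedup the abstract reports.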