The Speed Improvement by Merging Batch Normalization into Previously Linear Layer in CNN

Bibliographic Details
Published in: 2018 International Conference on Audio, Language and Image Processing (ICALIP), pp. 67-72
Main Authors: Duan, Jie; Zhang, RuiXin; Huang, Jiahu; Zhu, Qiuyu
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2018

Summary: With the development of deep learning, convolutional neural networks are growing ever deeper, which places considerable demands on the computational power and storage capacity of the deployment environment. To make better use of convolutional neural networks on mobile phones, embedded platforms, and other platforms with limited computational power, this paper proposes accelerating the network by merging Batch Normalization into the preceding linear layers, and analyzes the feasibility theoretically. Experimental results on CPU, GPU, and Raspberry Pi validate the effectiveness of the method. Experiments using the Caffe framework show that merging Batch Normalization into the preceding linear layers can increase the speed of the neural network by 30% to 50%.
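The speedup rests on the fact that inference-time Batch Normalization is an affine transform and can therefore be folded into the preceding linear or convolutional layer in closed form. Below is a minimal NumPy sketch of that folding; the function name, shapes, and test values are illustrative assumptions rather than details from the paper, whose experiments use Caffe.

import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Merge inference-time BN into the preceding affine layer.

    BN(Wx + b) = gamma * (Wx + b - mean) / sqrt(var + eps) + beta
    is itself affine, so it equals W'x + b' with the values below.
    W has shape (out_channels, ...); BN parameters are per output channel.
    (Sketch under assumed shapes; not the paper's Caffe implementation.)
    """
    scale = gamma / np.sqrt(var + eps)                       # per-channel scale
    W_folded = W * scale.reshape(-1, *([1] * (W.ndim - 1)))  # scale each output row/filter
    b_folded = scale * (b - mean) + beta                     # fold the shift into the bias
    return W_folded, b_folded

# Sanity check on a fully connected layer; the same per-channel
# algebra applies to convolution filters.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
b = rng.standard_normal(8)
gamma, beta = rng.standard_normal(8), rng.standard_normal(8)
mean, var = rng.standard_normal(8), rng.random(8) + 0.1
x = rng.standard_normal(16)

eps = 1e-5
y_ref = gamma * (W @ x + b - mean) / np.sqrt(var + eps) + beta
W_f, b_f = fold_batchnorm(W, b, gamma, beta, mean, var, eps)
assert np.allclose(W_f @ x + b_f, y_ref)

Because the merged layer does the work of two, the BN layer's memory traffic and elementwise arithmetic disappear entirely at inference time, which is consistent with the 30% to 50% speedups the abstract reports.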
DOI: 10.1109/ICALIP.2018.8455587