MixedNet: Network Design Strategies for Cost-Effective Quantized CNNs

Bibliographic Details
Published in: IEEE Access, Vol. 9, p. 1
Main Authors: Chang, Dong-Jin; Nam, Byeong-Gyu; Ryu, Seung-Tak
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2021

Summary: This paper proposes design strategies for a low-cost quantized neural network. To prevent classification accuracy from being degraded by quantization, a structure-design strategy that uses a large number of channels rather than deep layers is proposed. In addition, a squeeze-and-excitation (SE) layer is adopted to enhance the performance of the quantized network. Through a quantitative analysis and simulations of the quantized key convolution layers of ResNet and MobileNets, a low-cost layer-design strategy for building a neural network is proposed. With this strategy, a low-cost network referred to as MixedNet is constructed. A 4-bit quantized MixedNet example achieves a 60% reduction in on-chip memory size and 53% fewer memory accesses with negligible degradation in classification accuracy compared with conventional networks, while achieving approximately 73% classification accuracy on CIFAR-100 and 93% on CIFAR-10.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3106658
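
The record contains no code, so the following is only a rough illustration of the ideas named in the summary: a wide, shallow convolutional block combined with an SE layer and low-bit (4-bit) quantization. All names here (fake_quantize, SEBlock, WideQuantBlock) and the simple uniform fake-quantization scheme are assumptions made for illustration; this is a minimal PyTorch sketch, not the paper's actual MixedNet implementation.

```python
# Hypothetical sketch: wide/shallow conv block + SE layer + 4-bit fake quantization.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quantize(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric fake quantization to `bits` bits (illustration only;
    a straight-through estimator would be needed for quantization-aware training)."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.detach().abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale


class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling, two FC layers, per-channel rescaling."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        s = x.mean(dim=(2, 3))                           # squeeze: (N, C)
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        return x * s.view(x.size(0), -1, 1, 1)           # excite: channel scaling


class WideQuantBlock(nn.Module):
    """One wide conv layer (many channels, few layers) with 4-bit weights and activations."""
    def __init__(self, in_ch: int, out_ch: int, bits: int = 4):
        super().__init__()
        self.bits = bits
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.se = SEBlock(out_ch)

    def forward(self, x):
        w_q = fake_quantize(self.conv.weight, self.bits)  # quantize weights
        x = F.conv2d(x, w_q, padding=1)
        x = F.relu(self.bn(x))
        x = fake_quantize(x, self.bits)                   # quantize activations
        return self.se(x)


if __name__ == "__main__":
    block = WideQuantBlock(3, 256)             # a "wide" channel count for CIFAR-sized inputs
    out = block(torch.randn(2, 3, 32, 32))
    print(out.shape)                           # torch.Size([2, 256, 32, 32])
```

The sketch reflects the summary's trade-off: capacity is added by widening the channel dimension of a single quantized layer (plus an SE recalibration) rather than by stacking more layers, which is the direction the paper argues is more robust under low-bit quantization.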