Estimation-Based Strategy Generation for Deep Neural Network Model Compression


Bibliographic Details
Published in: 2023 IEEE 6th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), pp. 1009-1015
Main Authors: Wang, Hongkai; Feng, Jun; Zhao, Shuai; Wang, Yidan; Mao, Dong; Chen, Zuge; Ke, Gongwu; Wang, Gaoli; Long, Youqun
Format: Conference Proceeding
Language: English
Published: IEEE, 18.08.2023

Summary: Compressing a neural network can significantly reduce its computational complexity, save resources, and speed up inference. However, current compression methods, whether used individually or in combination, often neglect the problem of compression strategy generation, making it hard to obtain compressed models that both meet the user's deployment requirements and suffer the smallest accuracy degradation. This paper proposes a method for automatically generating a compression strategy, aiming to produce high-performance models that satisfy deployment requirements with minimal accuracy loss. First, a predictor is designed to estimate the model's characteristics after it is compressed by different methods such as distillation, pruning, and quantization, covering the post-compression model size, number of parameters, computational complexity, and memory access. A computational method for estimating the inference time of the compressed model is then discussed. Based on these estimates, the user's requirements, and the hardware parameters, a method for automatically generating a compression strategy is designed, which outputs a suitable combination of compression methods and compression parameter settings. Experiments on commonly used convolutional neural networks and the Jetson Nano development board validate the effectiveness of the proposed method.
DOI:10.1109/PRAI59366.2023.10331943
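The estimate-then-search loop the abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the first-order estimation formulas, the candidate grid of pruning ratios and bit-widths, the compute-bound latency model, the peak-throughput figure, and all names (`ModelStats`, `generate_strategy`, etc.) are assumptions made here for illustration.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class ModelStats:
    params: float        # number of parameters
    flops: float         # floating-point operations per inference
    size_bytes: float    # on-disk model size

def estimate_compressed(stats, prune_ratio, quant_bits):
    """Roughly estimate post-compression statistics.

    First-order assumptions: pruning removes a fraction of the
    parameters (and the same fraction of FLOPs); quantization shrinks
    storage from 32-bit floats to `quant_bits` per surviving weight.
    """
    kept = 1.0 - prune_ratio
    return ModelStats(
        params=stats.params * kept,
        flops=stats.flops * kept,
        size_bytes=stats.params * kept * quant_bits / 8,
    )

def estimate_latency(stats, peak_flops_per_s):
    """Compute-bound latency estimate: FLOPs / hardware peak throughput."""
    return stats.flops / peak_flops_per_s

def generate_strategy(stats, peak_flops_per_s, max_latency_s, max_size_bytes):
    """Search (prune_ratio, quant_bits) combinations and return the
    gentlest one meeting the deployment constraints, on the heuristic
    that milder compression degrades accuracy less."""
    candidates = []
    for prune_ratio, quant_bits in product([0.0, 0.3, 0.5, 0.7], [32, 16, 8]):
        est = estimate_compressed(stats, prune_ratio, quant_bits)
        if (estimate_latency(est, peak_flops_per_s) <= max_latency_s
                and est.size_bytes <= max_size_bytes):
            # Sort key: lower prune ratio and wider bit-width first.
            candidates.append((prune_ratio, 32 - quant_bits, (prune_ratio, quant_bits)))
    return min(candidates)[2] if candidates else None

# Example: a ResNet-50-sized model on hardware with ~472 GFLOP/s peak
# throughput (a Jetson-Nano-like figure, assumed here), a 5 ms latency
# budget, and a 10 MB size budget.
stats = ModelStats(params=25e6, flops=4e9, size_bytes=100e6)
strategy = generate_strategy(stats, peak_flops_per_s=472e9,
                             max_latency_s=0.005, max_size_bytes=10e6)
```

Under these toy numbers the search settles on aggressive pruning plus 8-bit quantization; a real strategy generator would replace the closed-form estimates with the paper's predictor and add memory-access and accuracy terms.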