Learnable Mixed-precision and Dimension Reduction Co-design for Low-storage Activation

Bibliographic Details
Published in: arXiv.org
Main Authors: Tai, Yu-Shan; Chang, Cheng-Yang; Teng, Chieh-Fang; Wu, An-Yeu
Format: Paper / Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 19.07.2022
Summary: Recently, deep convolutional neural networks (CNNs) have achieved many eye-catching results. However, deploying CNNs on resource-constrained edge devices is limited by the memory bandwidth needed to transmit large intermediate data, i.e., activations, during inference. Existing research exploits mixed precision and dimension reduction to reduce computational complexity but pays less attention to their application to activation compression. To further exploit the redundancy in activations, we propose a learnable mixed-precision and dimension reduction co-design system that separates channels into groups and allocates a specific compression policy to each group according to its importance. In addition, the proposed dynamic searching technique enlarges the search space and automatically finds the optimal bit-width allocation. Our experimental results show that, compared with existing mixed-precision methods, the proposed methods improve accuracy by 3.54%/1.27% and save 0.18/2.02 bits per value on ResNet18 and MobileNetV2, respectively.
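The idea of grouping channels by importance and quantizing each group at its own bit-width can be illustrated with a minimal sketch. This is not the paper's actual method (which learns the grouping and bit allocation during training); it is a hand-rolled NumPy example, assuming mean absolute activation as the importance score and a fixed bit-width per group, purely to show the mechanics:

```python
import numpy as np

def quantize_uniform(x, bits):
    # Uniform quantization of a tensor to the given bit-width:
    # map values onto 2**bits - 1 evenly spaced levels, then dequantize.
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    q = np.round((x - lo) / (hi - lo) * levels)
    return q / levels * (hi - lo) + lo

def grouped_mixed_precision(activation, group_bits=(8, 4, 2)):
    """Split channels into equal-size groups ranked by importance
    (here: mean |activation| per channel, an assumption for this sketch)
    and quantize each group at its own bit-width, most important first."""
    c = activation.shape[0]
    importance = np.abs(activation).reshape(c, -1).mean(axis=1)
    order = np.argsort(-importance)            # descending importance
    groups = np.array_split(order, len(group_bits))
    out = np.empty_like(activation)
    for channel_idx, bits in zip(groups, group_bits):
        for ch in channel_idx:
            out[ch] = quantize_uniform(activation[ch], bits)
    return out

# Example: a fake 6-channel activation map; important channels keep 8 bits,
# the least important are crushed to 2 bits.
act = np.random.RandomState(0).randn(6, 4, 4).astype(np.float32)
compressed = grouped_mixed_precision(act)
```

The average storage cost here is (8 + 4 + 2) / 3 = 4.67 bits per value instead of a uniform 8; the paper's contribution is learning both the grouping and the bit-widths (plus a dimension-reduction step) rather than fixing them as above.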
ISSN:2331-8422
DOI:10.48550/arxiv.2207.07931