Rethinking Channel Dimensions for Efficient Model Design

Bibliographic Details
Published in: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 732-741
Main Authors: Han, Dongyoon; Yun, Sangdoo; Heo, Byeongho; Yoo, YoungJoon
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2021

Summary: Designing an efficient model within a limited computational cost is challenging. We argue that the accuracy of a lightweight model has been further limited by the design convention: a stage-wise configuration of the channel dimensions, which resembles a piecewise linear function of the network stage. In this paper, we study a channel dimension configuration that is more effective than this convention. To this end, we empirically study how to design a single layer properly by analyzing the rank of its output features. We then investigate the channel configuration of a whole model by searching network architectures with respect to the channel configuration under a computational cost restriction. Based on this investigation, we propose a simple yet effective channel configuration that can be parameterized by the layer index. As a result, our proposed model following this channel parameterization achieves remarkable performance on ImageNet classification and on transfer learning tasks including COCO object detection, COCO instance segmentation, and fine-grained classification. Code and ImageNet-pretrained models are available at https://github.com/clovaai/rexnet.
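
To make the idea of a layer-indexed channel configuration concrete, here is a minimal sketch contrasting a channel count that grows linearly with the layer index against the conventional stage-wise (piecewise-constant) assignment. The function names, the linear interpolation, and all numeric values below are illustrative assumptions, not the paper's exact parameterization; the authors' actual configuration and pretrained models are in the linked repository.

```python
# Illustrative sketch (assumed parameterization, not the authors' exact one):
# per-layer output channels as a linear function of the layer index, versus the
# conventional stage-wise assignment where all layers in a stage share one width.

def linear_channel_config(num_layers: int, base_channels: int, final_channels: int) -> list:
    """Return one output-channel count per layer, growing linearly with layer index."""
    channels = []
    for i in range(num_layers):
        # Interpolate between the first-layer and last-layer widths.
        c = base_channels + (final_channels - base_channels) * i / max(num_layers - 1, 1)
        # Round to a multiple of 8, a common constraint for efficient kernels.
        channels.append(int(round(c / 8)) * 8)
    return channels


def stagewise_channel_config(layers_per_stage: list, stage_channels: list) -> list:
    """Conventional baseline: every layer in a stage uses that stage's channel count."""
    return [c for n, c in zip(layers_per_stage, stage_channels) for _ in range(n)]


if __name__ == "__main__":
    # Hypothetical 16-layer backbone under the two conventions.
    print(linear_channel_config(num_layers=16, base_channels=16, final_channels=184))
    print(stagewise_channel_config([4, 4, 4, 4], [24, 48, 96, 192]))
```

Under the same total budget, the linear schedule spreads width increases across every layer rather than concentrating them at stage boundaries, which is the design choice the paper's parameterization by layer index captures.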
ISSN: 2575-7075
DOI: 10.1109/CVPR46437.2021.00079