Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search
Main Authors | , , , , , , , ,
---|---
Format | Journal Article
Language | English
Published | 08.10.2020
Summary: Quantization Neural Networks (QNN) have attracted a lot of attention due to their high efficiency. To enhance quantization accuracy, prior works mainly focus on designing advanced quantization algorithms but still fail to achieve satisfactory results in the extremely low-bit case. In this work, we take an architecture perspective to investigate the potential of high-performance QNN. We therefore propose to combine Network Architecture Search methods with quantization to enjoy the merits of both sides. However, a naive combination inevitably faces unacceptable time consumption or unstable training problems. To alleviate these problems, we first propose the joint training of architecture and quantization with a shared step size to acquire a large number of quantized models. Then a bit-inheritance scheme is introduced to transfer the quantized models to lower bit-widths, which further reduces the time cost and meanwhile improves quantization accuracy. Equipped with this overall framework, dubbed Once Quantization-Aware Training (OQAT), our searched model family, OQATNets, achieves a new state of the art compared with various architectures under different bit-widths. In particular, OQAT-2bit-M achieves 61.6% ImageNet Top-1 accuracy, outperforming its 2-bit counterpart MobileNetV3 by a large margin of 9% with 10% less computation cost. A series of quantization-friendly architectures is identified easily, and extensive analysis can be made to summarize the interaction between quantization and neural architectures. Code and models are released at https://github.com/LaVieEnRoseSMZ/OQA
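The two ingredients the summary describes, joint training with a shared learnable quantization step size and bit inheritance to a lower bit-width, can be illustrated with a small PyTorch-style sketch. This is a minimal illustration under assumptions, not the authors' released OQAT code: the names `SharedStepQuantizer` and `bit_inherit` are hypothetical, the quantizer is a generic LSQ-style uniform quantizer, and the step-size rescaling rule is only one plausible reading of "bit inheritance".

```python
# Minimal sketch (not the released OQAT code) of:
# (1) an LSQ-style quantizer whose single learnable step size is shared by
#     all weight-sharing subnets during joint architecture + quantization
#     training, and
# (2) a bit-inheritance step that warm-starts a lower-bit quantizer from a
#     trained higher-bit one instead of training from scratch.
# SharedStepQuantizer and bit_inherit are hypothetical names.

import torch
import torch.nn as nn


class SharedStepQuantizer(nn.Module):
    """Uniform quantizer with one learnable step size (LSQ-style)."""

    def __init__(self, bits: int, init_step: float = 0.1):
        super().__init__()
        self.bits = bits
        # A single shared step size, reused no matter which subnet
        # of the supernet samples this layer.
        self.step = nn.Parameter(torch.tensor(init_step))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        qmax = 2 ** (self.bits - 1) - 1
        qmin = -(2 ** (self.bits - 1))
        # Straight-through estimator: round in the forward pass, pass
        # gradients through as if rounding were the identity.
        q = torch.clamp(w / self.step, qmin, qmax)
        q = (q.round() - q).detach() + q
        return q * self.step


def bit_inherit(high: SharedStepQuantizer, low_bits: int) -> SharedStepQuantizer:
    """Transfer a trained higher-bit quantizer to a lower bit-width.

    Keeping roughly the same clipping range means the step size about
    doubles per bit removed, giving the low-bit model a warm start
    (one plausible reading of the paper's bit-inheritance scheme).
    """
    scale = 2 ** (high.bits - low_bits)
    return SharedStepQuantizer(low_bits, init_step=float(high.step) * scale)


if __name__ == "__main__":
    w = torch.randn(64, 32)
    q4 = SharedStepQuantizer(bits=4)
    w4 = q4(w)                        # 4-bit fake-quantized weights
    q2 = bit_inherit(q4, low_bits=2)  # warm-start a 2-bit quantizer
    w2 = q2(w)
    print(w4.unique().numel(), w2.unique().numel())
```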
DOI: 10.48550/arxiv.2010.04354