STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression
Format | Journal Article |
Language | English |
Published | 30.05.2022 |
Summary: | Deep neural networks (DNNs) have delivered remarkable performance in
many computer vision tasks. However, the over-parameterized representations of
popular architectures dramatically increase their computational complexity and
storage costs, hindering deployment on edge devices with constrained
resources. Although many tensor decomposition (TD) methods have been well
studied for compressing DNNs into compact representations, they suffer from
non-negligible performance degradation in practice. In this paper, we propose
Scalable Tensorizing Networks (STN), which dynamically and adaptively adjust
the model size and decomposition structure without retraining. First, we
account for compression during training by adding a low-rank regularizer that
encourages the desired low-rank characteristics of the network in full tensor
format. Then, since network layers exhibit various low-rank structures, STN is
obtained by a data-driven adaptive TD approach in which the topological
structure of the decomposition for each layer is learned from the pre-trained
model and the ranks are selected appropriately under specified storage
constraints. As a result, STN is compatible with arbitrary network
architectures and achieves higher compression performance and flexibility than
other tensorizing approaches. Comprehensive experiments on several popular
architectures and benchmarks substantiate the superiority of our model in
improving parameter efficiency. |
DOI: | 10.48550/arxiv.2205.15198 |
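The abstract describes two mechanisms: a low-rank regularizer added to the training objective, and rank selection under a storage budget at compression time. The sketch below is a minimal, hypothetical illustration of both ideas for plain weight matrices; the paper's actual regularizer and adaptive TD procedure are not specified in this record, so a nuclear-norm penalty and a simple SVD-based truncation are assumed stand-ins.

```python
import numpy as np

def nuclear_norm(W):
    # Sum of singular values: a standard convex surrogate for matrix rank,
    # assumed here as one possible form of the low-rank regularizer.
    return np.linalg.svd(W, compute_uv=False).sum()

def regularized_loss(task_loss, weights, lam=1e-3):
    # Hypothetical training objective: task loss plus a low-rank penalty
    # on every weight matrix, encouraging compressible representations.
    return task_loss + lam * sum(nuclear_norm(W) for W in weights)

def truncate_to_budget(W, max_params):
    # Illustrative rank selection under a storage constraint: keep the
    # largest singular values whose low-rank factors fit in `max_params`
    # parameters. Each rank-1 component costs m + n parameters.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    m, n = W.shape
    rank = min(max(1, max_params // (m + n)), len(s))
    # Return factors A (m x rank) and B (rank x n) with A @ B ≈ W.
    return U[:, :rank] * s[:rank], Vt[:rank]
```

For a matrix that is already low-rank, the budgeted factors reconstruct it exactly; for full-rank weights the truncation trades accuracy for storage, which is the trade-off the paper's low-rank regularizer is meant to soften.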