Quantized Sparse Weight Decomposition for Neural Network Compression

Bibliographic Details
Main Authors: Andrey Kuzmin, Mart van Baalen, Markus Nagel, Arash Behboodi
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 22.07.2022
Summary: In this paper, we introduce a novel method of neural network weight compression. In our method, we store weight tensors as sparse, quantized matrix factors, whose product is computed on the fly during inference to generate the target model's weights. We use projected gradient descent methods to find quantized and sparse factorizations of the weight tensors. We show that this approach can be seen as a unification of weight SVD, vector quantization, and sparse PCA. Combined with end-to-end fine-tuning, our method exceeds or is on par with previous state-of-the-art methods in terms of the trade-off between accuracy and model size. Unlike vector quantization, our method is applicable to both moderate and extreme compression regimes.
ISSN: 2331-8422
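To make the projected gradient descent idea concrete, below is a minimal NumPy sketch that factorizes a weight matrix W into a quantized factor A and a sparse factor B: each iteration takes a gradient step on the reconstruction loss, then projects A onto a quantized set and B onto a sparse set. The uniform quantization grid, hard-threshold sparsity projection, rank, learning rate, and step count are all illustrative assumptions here, not the paper's actual configuration.

```python
import numpy as np

def project_quantized(M, num_levels=16):
    # Snap entries to a uniform grid spanning the factor's range.
    # A uniform grid is an assumed quantizer; the paper's codebook may differ.
    lo, hi = M.min(), M.max()
    if hi == lo:
        return M
    step = (hi - lo) / (num_levels - 1)
    return lo + np.round((M - lo) / step) * step

def project_sparse(M, density=0.3):
    # Keep the largest-magnitude entries, zero the rest (hard thresholding).
    k = max(1, int(density * M.size))
    cutoff = np.partition(np.abs(M).ravel(), -k)[-k]
    return np.where(np.abs(M) >= cutoff, M, 0.0)

def factorize(W, rank=8, lr=1e-2, steps=1000, seed=0):
    # Projected gradient descent on 0.5 * ||A @ B - W||_F^2:
    # gradient step on both factors, then project A onto the quantized
    # set and B onto the sparse set.
    rng = np.random.default_rng(seed)
    m, n = W.shape
    A = 0.1 * rng.standard_normal((m, rank))
    B = 0.1 * rng.standard_normal((rank, n))
    for _ in range(steps):
        R = A @ B - W  # reconstruction residual
        A, B = A - lr * (R @ B.T), B - lr * (A.T @ R)
        A = project_quantized(A)
        B = project_sparse(B)
    return A, B

# During inference the dense weight would be regenerated on the fly as A @ B.
W = np.random.default_rng(1).standard_normal((64, 64))
A, B = factorize(W)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

In practice, per the abstract, such a factorization would be applied per weight tensor and combined with end-to-end fine-tuning to recover accuracy; only the quantized and sparse factors need to be stored.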