Uncertainty-based quantization method for stable training of binary neural networks

Bibliographic Details
Published in: Kompʹûternaâ optika, Vol. 48, No. 4, pp. 573–581
Main Authors: Trusov, A.V., Putintsev, D.N., Limonova, E.E.
Format: Journal Article
Language: English
Published: Samara National Research University, 01.08.2024
ISSN: 0134-2452; 2412-6179
DOI: 10.18287/2412-6179-CO-1427

Summary: Binary neural networks (BNNs) have gained attention due to their computational efficiency. However, training BNNs has proven to be challenging. Existing algorithms either fail to produce stable and high-quality results or are overly complex for practical use. In this paper, we introduce a novel quantizer called UBQ (Uncertainty-based quantizer) for BNNs, which combines the advantages of existing methods, resulting in stable training and high-quality BNNs even with a low number of trainable parameters. We also propose a training method involving gradual network freezing and batch normalization replacement, facilitating a smooth transition from training mode to execution mode for BNNs. To evaluate UBQ, we conducted experiments on the MNIST and CIFAR-10 datasets and compared our method to existing algorithms. The results demonstrate that UBQ outperforms previous methods for smaller networks and achieves comparable results for larger networks.
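
The summary describes UBQ only at a high level, so the quantizer's mechanics cannot be recovered from this record. Purely as background for readers unfamiliar with BNN training, below is a minimal PyTorch sketch of the standard sign-based binarization with a clipped straight-through estimator (STE), the common baseline that quantizers such as UBQ build on. Everything in the snippet (the class name BinarizeSTE, the clipping rule, the dummy loss) is an illustrative assumption, not the paper's implementation.

import torch

class BinarizeSTE(torch.autograd.Function):
    # Forward: map full-precision weights to {-1, +1} by sign.
    # Backward: clipped straight-through estimator -- pass the gradient
    # through unchanged where |w| <= 1 and zero it elsewhere.

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)

# Usage: binarize latent full-precision weights in the forward pass;
# the optimizer still updates the full-precision copy.
w = torch.randn(4, 4, requires_grad=True)
w_bin = BinarizeSTE.apply(w)
loss = w_bin.sum()    # dummy loss for demonstration only
loss.backward()
print(w_bin)          # entries are -1 or +1
print(w.grad)         # nonzero only where |w| <= 1

A BNN layer would use w_bin in place of w inside its convolution or linear operation. Per the summary, UBQ replaces this hard sign decision with an uncertainty-based rule, but the record gives no further detail.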