Deep Deterministic Information Bottleneck with Matrix-Based Entropy Functional
Published in | ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3160-3164
---|---
Main Authors | , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 06.06.2021
Summary: | We introduce the matrix-based Rényi's α-order entropy functional to parameterize Tishby et al.'s information bottleneck (IB) principle [1] with a neural network. We term our methodology Deep Deterministic Information Bottleneck (DIB), as it avoids variational inference and distributional assumptions. We show that deep neural networks trained with DIB outperform the variational-objective counterpart and networks trained with other forms of regularization, in terms of both generalization performance and robustness to adversarial attacks. Code available at https://github.com/yuxi120407/DIB. |
---|---|
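The matrix-based Rényi α-order entropy mentioned in the summary can be estimated directly from a batch of samples, without density estimation: build a kernel Gram matrix, normalize it to unit trace, and apply the α-entropy to its eigenvalues. The sketch below is an illustrative NumPy implementation of that general estimator, not the paper's released code; the Gaussian kernel width `sigma` and `alpha=1.01` are assumed example settings.

```python
import numpy as np

def gram_matrix(x, sigma=1.0):
    # Gaussian-kernel Gram matrix over a batch of representations (n, d)
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma**2))

def renyi_entropy(x, alpha=1.01, sigma=1.0):
    # Matrix-based Renyi alpha-order entropy:
    # S_alpha(A) = 1/(1-alpha) * log2( sum_i lambda_i(A)^alpha ),
    # where A is the trace-normalized Gram matrix of the batch.
    a = gram_matrix(x, sigma)
    a = a / np.trace(a)              # eigenvalues now sum to 1
    eig = np.linalg.eigvalsh(a)
    eig = np.clip(eig, 0.0, None)    # guard against tiny negative eigenvalues
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(eig**alpha))
```

For identical samples the normalized Gram matrix has a single unit eigenvalue and the entropy is 0; for n well-separated samples it approaches log2(n). A mutual-information term I(X; Z), as used in an IB-style objective, can then be formed from such entropies of X, Z, and their joint (Hadamard-product) Gram matrix.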
ISSN: | 2379-190X |
DOI: | 10.1109/ICASSP39728.2021.9414151 |