Deep Deterministic Information Bottleneck with Matrix-Based Entropy Functional


Bibliographic Details
Published in: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3160-3164
Main Authors: Yu, Xi; Yu, Shujian; Principe, Jose C.
Format: Conference Proceeding
Language: English
Published: IEEE, 06.06.2021
Summary: We introduce the matrix-based Rényi α-order entropy functional to parameterize the information bottleneck (IB) principle of Tishby et al. [1] with a neural network. We term our methodology Deep Deterministic Information Bottleneck (DIB), as it avoids variational inference and distributional assumptions. We show that deep neural networks trained with DIB outperform their variational-objective counterparts, as well as networks trained with other forms of regularization, in both generalization performance and robustness to adversarial attack. Code available at https://github.com/yuxi120407/DIB.
ISSN: 2379-190X
DOI: 10.1109/ICASSP39728.2021.9414151
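The abstract centers on the matrix-based Rényi α-order entropy functional, which estimates entropy directly from the eigenspectrum of a normalized kernel Gram matrix rather than from an explicit density. Below is a minimal NumPy sketch of that estimator under common assumptions (a Gaussian kernel, with the kernel width `sigma` and order `alpha` chosen for illustration, not taken from the paper):

```python
import numpy as np

def matrix_renyi_entropy(X, sigma=1.0, alpha=1.01):
    """Illustrative matrix-based Renyi alpha-order entropy estimate.

    Builds a Gaussian Gram matrix over the samples in X, normalizes it
    to unit trace, and evaluates
        S_alpha = 1/(1-alpha) * log2(sum_i lambda_i^alpha)
    over its eigenvalues. sigma and alpha are illustrative defaults,
    not the paper's settings.
    """
    # Pairwise squared Euclidean distances between samples.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))   # Gaussian kernel, so K_ii = 1
    A = K / np.trace(K)                    # normalized Gram matrix, trace(A) = 1
    eigvals = np.linalg.eigvalsh(A)        # A is symmetric PSD
    eigvals = np.clip(eigvals, 0.0, None)  # guard tiny negative round-off
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(eigvals ** alpha))
```

Two sanity checks follow from the spectrum: if all samples coincide, the normalized Gram matrix has a single unit eigenvalue and the entropy is 0; if the samples are far apart relative to `sigma`, the matrix is close to I/n and the entropy approaches log2(n).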