Variational Autoencoders Pursue PCA Directions (by Accident)

Bibliographic Details
Published in: arXiv.org
Main Authors: Rolinek, Michal; Zietlow, Dominik; Martius, Georg
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 16.04.2019
Summary: The Variational Autoencoder (VAE) is a powerful architecture capable of representation learning and generative modeling. When it comes to learning interpretable (disentangled) representations, VAE and its variants show unparalleled performance. However, the reasons for this are unclear, since a very particular alignment of the latent embedding is needed but the design of the VAE does not encourage it in any explicit way. We address this matter and offer the following explanation: the diagonal approximation in the encoder together with the inherent stochasticity force local orthogonality of the decoder. The local behavior of promoting both reconstruction and orthogonality matches closely how the PCA embedding is chosen. Alongside providing an intuitive understanding, we justify the statement with full theoretical analysis as well as with experiments.
ISSN: 2331-8422
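
The summary claims that the diagonal posterior approximation and the injected noise push the decoder toward local orthogonality, mirroring how PCA selects its embedding. As a rough illustration (not taken from the paper), the Python sketch below shows one way such a claim could be probed numerically: estimate the decoder's Jacobian at a latent point and measure how far J^T J is from being diagonal. The toy decoder and all names here are hypothetical placeholders; with a trained VAE one would substitute the real decoder's forward pass.

```python
# Minimal sketch (assumptions: toy decoder, latent dim 4, output dim 50).
# If the decoder's Jacobian columns are locally orthogonal, J^T J is
# approximately diagonal; the ratio below quantifies the off-diagonal mass.
import numpy as np

def numerical_jacobian(f, z, eps=1e-5):
    """Central-difference Jacobian of f: R^d -> R^n at latent point z."""
    d = z.size
    cols = []
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        cols.append((f(z + e) - f(z - e)) / (2 * eps))
    return np.stack(cols, axis=1)  # shape (n, d)

def off_diagonal_ratio(J):
    """Fraction of the mass of J^T J lying off the diagonal.
    Values near 0 indicate locally orthogonal decoder directions."""
    G = J.T @ J
    off = G - np.diag(np.diag(G))
    return np.linalg.norm(off) / np.linalg.norm(G)

# Hypothetical toy decoder standing in for a trained VAE decoder.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 4))
toy_decoder = lambda z: np.tanh(W @ z)

z0 = rng.standard_normal(4)
J = numerical_jacobian(toy_decoder, z0)
print("off-diagonal ratio of J^T J:", off_diagonal_ratio(J))
```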