Inverting Supervised Representations with Autoregressive Neural Density Models
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 01.06.2018 |
Summary: | We present a method for feature interpretation that makes use of recent advances in autoregressive density estimation models to invert model representations. We train generative inversion models to express a distribution over input features conditioned on intermediate model representations. Insights into the invariances learned by supervised models can be gained by viewing samples from these inversion models. In addition, we can use these inversion models to estimate the mutual information between a model's inputs and its intermediate representations, thus quantifying the amount of information preserved by the network at different stages. Using this method we examine the types of information preserved at different layers of convolutional neural networks, and explore the invariances induced by different architectural choices. Finally we show that the mutual information between inputs and network layers decreases over the course of training, supporting recent work by Shwartz-Ziv and Tishby (2017) on the information bottleneck theory of deep learning. |
---|---|
DOI: | 10.48550/arxiv.1806.00400 |
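The summary describes two operations for which this record contains no code: fitting an autoregressive inversion model q(x | z) over input features conditioned on an intermediate representation z, and using such models to estimate the mutual information I(X; Z) = H(X) - H(X | Z) between inputs and representations. The following is a minimal, hypothetical PyTorch sketch of both steps under simplifying assumptions (a per-dimension Gaussian autoregressive model on toy data rather than the image models used in the paper); the class `ARGaussianInverter`, the function `estimate_mi`, and all hyperparameters are illustrative and not taken from the paper.

```python
# Minimal sketch only: a conditional autoregressive Gaussian density model used
# as an "inversion model" q(x | z), plus a mutual information estimate obtained
# as the gap between a marginal and a conditional cross-entropy,
#   I(X; Z) = H(X) - H(X | Z)  ~=  E[-log q(x)] - E[-log q(x | z)].
# Names, architecture, and hyperparameters are illustrative assumptions,
# not the paper's implementation.

import torch
import torch.nn as nn


class ARGaussianInverter(nn.Module):
    """Autoregressive Gaussian model: x_i is predicted from x_{<i} and z."""

    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        # One small head per input dimension; head i sees x_{<i} and z.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(i + z_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2))  # outputs (mean, log variance)
            for i in range(x_dim)
        ])

    def log_prob(self, x, z):
        """Sum of per-dimension Gaussian log-densities; shape (batch,)."""
        total = torch.zeros(x.size(0))
        for i, head in enumerate(self.heads):
            ctx = torch.cat([x[:, :i], z], dim=1)
            mean, log_var = head(ctx).chunk(2, dim=1)
            std = (0.5 * log_var.squeeze(1)).exp()
            total = total + torch.distributions.Normal(
                mean.squeeze(1), std).log_prob(x[:, i])
        return total


def estimate_mi(marginal_model, conditional_model, x, z, z_const):
    """I(X; Z) estimated as the drop in average negative log-likelihood
    when conditioning on the representation z."""
    with torch.no_grad():
        nll_marginal = -marginal_model.log_prob(x, z_const).mean()
        nll_conditional = -conditional_model.log_prob(x, z).mean()
    return (nll_marginal - nll_conditional).item()  # nats


if __name__ == "__main__":
    # Toy data: x plays the role of the network input, z a (frozen) intermediate
    # representation that retains a noisy copy of half of x's coordinates.
    x = torch.randn(512, 8)
    z = x[:, :4] + 0.1 * torch.randn(512, 4)
    z_const = torch.zeros(512, 1)  # constant conditioning => unconditional model

    conditional = ARGaussianInverter(x_dim=8, z_dim=4)
    marginal = ARGaussianInverter(x_dim=8, z_dim=1)
    opt = torch.optim.Adam(
        list(conditional.parameters()) + list(marginal.parameters()), lr=1e-3)

    for step in range(300):  # brief maximum-likelihood training of both models
        loss = -(conditional.log_prob(x, z).mean()
                 + marginal.log_prob(x, z_const).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("estimated I(X; Z) in nats:",
          estimate_mi(marginal, conditional, x, z, z_const))
```

In this sketch the conditional entropy H(X | Z) is approximated by the conditional model's average negative log-likelihood and H(X) by that of an unconditional baseline, so the reported number is an estimate rather than a bound; sampling from the fitted q(x | z) (not shown) would correspond to the "viewing samples from these inversion models" step described in the summary.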