Alpha-divergence Variational Inference Meets Importance Weighted Auto-Encoders: Methodology and Asymptotics
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 12.10.2022 |
Summary: | Several algorithms involving the Variational Rényi (VR) bound have been proposed to minimize an alpha-divergence between a target posterior distribution and a variational distribution. Despite promising empirical results, those algorithms resort to biased stochastic gradient descent procedures and thus lack theoretical guarantees. In this paper, we formalize and study the VR-IWAE bound, a generalization of the Importance Weighted Auto-Encoder (IWAE) bound. We show that the VR-IWAE bound enjoys several desirable properties and notably leads to the same stochastic gradient descent procedure as the VR bound in the reparameterized case, but this time by relying on unbiased gradient estimators. We then provide two complementary theoretical analyses of the VR-IWAE bound and thus of the standard IWAE bound. Those analyses shed light on the benefits or lack thereof of these bounds. Lastly, we illustrate our theoretical claims over toy and real-data examples. |
DOI: | 10.48550/arxiv.2210.06226 |
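To make the summary concrete, here is a minimal sketch of a Monte Carlo estimate of a VR-IWAE-style bound of the form (1/(1-alpha)) * log( (1/N) * sum_i w_i^(1-alpha) ), with importance weights w_i = p(x, z_i) / q(z_i | x) and reparameterized draws z_i from a diagonal Gaussian variational distribution, so that alpha = 0 recovers the usual IWAE estimate and N = 1 recovers the ELBO. This is an assumption-laden illustration based on the abstract, not the paper's reference implementation; the names `vr_iwae_estimate` and `log_joint` are hypothetical, and the Gaussian variational family is chosen only for simplicity.

```python
import numpy as np

def vr_iwae_estimate(x, log_joint, mu, log_sigma, alpha=0.5, N=16, rng=None):
    """Single-draw Monte Carlo estimate of a VR-IWAE-style bound (assumed form):
        (1 / (1 - alpha)) * log( (1/N) * sum_i w_i^(1 - alpha) ),
    with w_i = p(x, z_i) / q(z_i | x) and z_i = mu + sigma * eps_i (reparameterization).
    alpha = 0 gives the usual IWAE estimate; N = 1 gives the ELBO. Requires alpha != 1."""
    rng = np.random.default_rng() if rng is None else rng
    d = mu.shape[0]
    eps = rng.standard_normal((N, d))
    z = mu + np.exp(log_sigma) * eps  # reparameterized samples from q(z | x)
    # log q(z_i | x) for the diagonal Gaussian variational distribution
    log_q = -0.5 * np.sum(eps ** 2 + 2.0 * log_sigma + np.log(2.0 * np.pi), axis=1)
    # log importance weights log w_i = log p(x, z_i) - log q(z_i | x)
    log_w = np.array([log_joint(x, z_i) for z_i in z]) - log_q
    # numerically stable log-mean-exp of (1 - alpha) * log w_i
    a = (1.0 - alpha) * log_w
    m = a.max()
    log_mean = m + np.log(np.mean(np.exp(a - m)))
    return log_mean / (1.0 - alpha)

# Toy example (illustrative): p(z) = N(0, I), p(x | z) = N(z, I).
def log_joint(x, z):
    d = len(z)
    return -0.5 * (np.sum(z ** 2) + np.sum((x - z) ** 2) + 2 * d * np.log(2 * np.pi))

x = np.array([0.5, -1.0])
print(vr_iwae_estimate(x, log_joint, mu=np.zeros(2), log_sigma=np.zeros(2), alpha=0.3, N=32))
```

In a reparameterized setting such as this, differentiating the estimate with respect to the variational parameters (mu, log_sigma) through the sampled z yields an unbiased gradient estimator of the bound, which is the property the abstract highlights for the VR-IWAE bound in contrast to the biased procedures used with the VR bound.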