A Framework for Improving the Reliability of Black-box Variational Inference
Format | Journal Article |
Language | English |
Published | 29.03.2022 |
Summary | Black-box variational inference (BBVI) now sees widespread use in machine
learning and statistics as a fast yet flexible alternative to Markov chain
Monte Carlo methods for approximate Bayesian inference. However, stochastic
optimization methods for BBVI remain unreliable and require substantial
expertise and hand-tuning to apply effectively. In this paper, we propose
Robust and Automated Black-box VI (RABVI), a framework for improving the
reliability of BBVI optimization. RABVI is based on rigorously justified
automation techniques, includes just a small number of intuitive tuning
parameters, and detects inaccurate estimates of the optimal variational
approximation. RABVI adaptively decreases the learning rate by detecting
convergence of the fixed-learning-rate iterates, then estimates the
symmetrized Kullback-Leibler (KL) divergence between the current variational
approximation and the optimal one. It also employs a novel optimization
termination criterion that enables the user to balance desired accuracy against
computational cost by comparing (i) the predicted relative decrease in the
symmetrized KL divergence if a smaller learning rate were used and (ii) the
predicted computation required to converge with the smaller learning rate. We
validate the robustness and accuracy of RABVI through carefully designed
simulation studies and on a diverse set of real-world model and data examples. |
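As a rough illustration of the quantities the summary describes, the sketch below computes the closed-form symmetrized KL divergence between two Gaussian variational approximations and wires it into a halve-the-learning-rate loop. This is an illustrative sketch, not the paper's implementation: the helpers `fit_fixed_lr`, `predict_kl_decrease`, and `predict_iterations`, and the scalar `accuracy_cost_tradeoff`, are hypothetical stand-ins for RABVI's convergence detection, accuracy forecast, and cost forecast.

```python
import numpy as np

def symmetrized_kl_gaussians(m0, S0, m1, S1):
    """Closed-form symmetrized KL divergence between N(m0, S0) and N(m1, S1)."""
    k = m0.shape[0]
    S0_inv, S1_inv = np.linalg.inv(S0), np.linalg.inv(S1)
    dm = m1 - m0
    # KL(p||q) + KL(q||p); the log-determinant terms cancel in the sum.
    return 0.5 * (np.trace(S1_inv @ S0) + np.trace(S0_inv @ S1)
                  + dm @ (S0_inv + S1_inv) @ dm - 2 * k)

def adaptive_lr_loop(params, lr, fit_fixed_lr, predict_kl_decrease,
                     predict_iterations, accuracy_cost_tradeoff):
    """Halve the learning rate until the predicted relative decrease in
    symmetrized KL no longer justifies the predicted cost of converging
    at the smaller learning rate."""
    while True:
        params = fit_fixed_lr(params, lr)           # run SGD until iterates converge
        gain = predict_kl_decrease(params, lr / 2)  # (i) predicted relative KL decrease
        cost = predict_iterations(lr / 2)           # (ii) predicted iterations to converge
        if gain / cost < accuracy_cost_tradeoff:    # stop when the gain is not worth the cost
            return params
        lr /= 2

# Quick check of the divergence on two 2-D Gaussians:
m0, S0 = np.zeros(2), np.eye(2)
m1, S1 = np.ones(2), 2 * np.eye(2)
print(symmetrized_kl_gaussians(m0, S0, m1, S1))  # -> 2.0
```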
DOI | 10.48550/arxiv.2203.15945 |