Langevin Monte Carlo: random coordinate descent and variance reduction

Bibliographic Details
Published in: arXiv.org
Main Authors: Ding, Zhiyan; Li, Qin
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 07.10.2021
Summary: Langevin Monte Carlo (LMC) is a popular Bayesian sampling method. For log-concave distributions, the method converges exponentially fast, up to a controllable discretization error. However, the method requires the evaluation of the full gradient in each iteration, and for a problem on \(\mathbb{R}^d\) this amounts to \(d\) partial-derivative evaluations per iteration. The cost is high when \(d\gg1\). In this paper, we investigate how to enhance computational efficiency by applying RCD (random coordinate descent) to LMC. The theory has two sides: (1) By blindly applying RCD to LMC, one replaces the full gradient with a single randomly selected directional derivative per iteration. Although the per-iteration cost is reduced, the total number of iterations needed to reach a preset error tolerance increases, so ultimately there is no computational gain. (2) We then incorporate variance reduction techniques, such as SAGA (stochastic average gradient) and SVRG (stochastic variance reduced gradient), into RCD-LMC. We prove that the cost is reduced compared with classical LMC, and that in the underdamped case convergence is achieved with the same number of iterations, while each iteration requires merely one directional-derivative evaluation. This yields the best possible computational cost within the underdamped-LMC framework.
ISSN: 2331-8422
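
Illustration: the summary above describes combining a random coordinate gradient estimate with a variance-reduction control variate inside LMC. The following is a minimal Python sketch of an overdamped RCD-LMC iteration with an SVRG-style control variate; it is not the paper's exact scheme, and the step size h, epoch length epoch_len, and toy Gaussian target are illustrative assumptions.

import numpy as np

def rcd_lmc_svrg(grad_i, full_grad, x0, h, n_iter, epoch_len, rng=None):
    # Overdamped LMC where the full gradient is replaced by an unbiased
    # coordinate estimate with an SVRG-style control variate (sketch only).
    # grad_i(x, i): i-th partial derivative of the potential f at x.
    # full_grad(x): full gradient of f, recomputed once per epoch.
    rng = np.random.default_rng() if rng is None else rng
    d = x0.size
    x = x0.copy()
    snapshot, snapshot_grad = x.copy(), full_grad(x)
    samples = []
    for k in range(n_iter):
        if k % epoch_len == 0:
            # refresh the reference point and its full gradient
            snapshot, snapshot_grad = x.copy(), full_grad(x)
        i = rng.integers(d)  # randomly selected coordinate
        # unbiased estimate: snapshot gradient plus rescaled coordinate correction
        g = snapshot_grad.copy()
        g[i] += d * (grad_i(x, i) - grad_i(snapshot, i))
        # Euler-Maruyama Langevin update with the estimated gradient
        x = x - h * g + np.sqrt(2.0 * h) * rng.standard_normal(d)
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # toy usage: sample a standard Gaussian, potential f(x) = ||x||^2 / 2
    d = 10
    full_grad = lambda x: x.copy()
    grad_i = lambda x, i: x[i]
    chain = rcd_lmc_svrg(grad_i, full_grad, np.zeros(d), h=1e-2,
                         n_iter=5000, epoch_len=d)
    print(chain[-1000:].var(axis=0))  # marginal variances should be roughly 1

The coordinate correction is scaled by d so that, averaged over the uniformly random coordinate, the estimate equals the true gradient; refreshing the snapshot each epoch keeps the estimator's variance controlled, which is the role variance reduction plays in the paper's RCD-LMC analysis.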