Convergence of variational Monte Carlo simulation and scale-invariant pre-training

Bibliographic Details
Main Authors: Abrahamsen, Nilin; Ding, Zhiyan; Goldshlager, Gil; Lin, Lin
Format: Journal Article
Language: English
Published: 21.03.2023

Summary: We provide theoretical convergence bounds for the variational Monte Carlo (VMC) method as applied to optimize neural network wave functions for the electronic structure problem. We study both the energy minimization phase and the supervised pre-training phase that is commonly used prior to energy minimization. For the energy minimization phase, the standard algorithm is scale-invariant by design, and we provide a proof of convergence for this algorithm without modifications. The pre-training stage typically does not feature such scale-invariance. We propose using a scale-invariant loss for the pre-training phase and demonstrate empirically that it leads to faster pre-training.
DOI: 10.48550/arxiv.2303.11602
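
The scale-invariance idea in the summary can be illustrated with a minimal sketch. A standard mean-squared-error pre-training loss changes when the network wave function is rescaled, while a normalized-overlap loss does not. Note this is one natural scale-invariant choice for illustration, not necessarily the exact loss proposed in the paper, and `psi`, `phi`, and the sample setup are hypothetical:

```python
import numpy as np

def mse_loss(psi, phi):
    # Standard pre-training loss: sensitive to the overall scale of psi.
    return np.mean((psi - phi) ** 2)

def scale_invariant_loss(psi, phi):
    # Normalized-overlap loss: 1 minus the squared cosine of the angle
    # between psi and phi over the sample points. Unchanged under
    # psi -> c * psi for any nonzero scalar c.
    overlap = np.dot(psi, phi)
    return 1.0 - overlap**2 / (np.dot(psi, psi) * np.dot(phi, phi))

rng = np.random.default_rng(0)
phi = rng.standard_normal(100)  # reference wave function values at samples
psi = 3.0 * phi                 # perfectly aligned with phi, but rescaled

print(mse_loss(psi, phi))             # large: penalizes the scale mismatch
print(scale_invariant_loss(psi, phi)) # ~0: the rescaling is irrelevant
```

Because the energy functional itself is invariant under rescaling of the wave function, a loss with the same invariance removes a spurious degree of freedom from the pre-training optimization.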