Convergence of variational Monte Carlo simulation and scale-invariant pre-training
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 21.03.2023 |
Summary: | We provide theoretical convergence bounds for the variational Monte Carlo (VMC) method as applied to optimize neural network wave functions for the electronic structure problem. We study both the energy minimization phase and the supervised pre-training phase that is commonly used prior to energy minimization. For the energy minimization phase, the standard algorithm is scale-invariant by design, and we provide a proof of convergence for this algorithm without modifications. The pre-training stage typically does not feature such scale invariance. We propose using a scale-invariant loss for the pre-training phase and demonstrate empirically that it leads to faster pre-training. |
---|---|
DOI: | 10.48550/arxiv.2303.11602 |
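
The summary mentions a scale-invariant loss for the supervised pre-training phase but does not spell out its form. As a rough illustration of what scale invariance means in this setting, the sketch below estimates a normalized-overlap loss between a trial wave function and a reference state from sampled configurations; the loss is unchanged when the trial wave function is rescaled. The specific loss form, sampling choices, and names here are illustrative assumptions, not the construction from the paper.

```python
# Illustrative sketch (assumption: not necessarily the loss proposed in the paper).
# A scale-invariant pre-training loss based on the normalized overlap between a
# trial wave function psi_theta and a reference wave function phi (e.g., a
# Hartree-Fock trial state), estimated from samples x_i drawn from |phi|^2.
import numpy as np

def scale_invariant_overlap_loss(psi_vals: np.ndarray, phi_vals: np.ndarray) -> float:
    """Return 1 - <psi, phi>^2 / (<psi, psi> <phi, phi>) for real wave functions.

    psi_vals, phi_vals: values psi_theta(x_i) and phi(x_i) at samples x_i ~ |phi|^2.
    With that sampling distribution,
        E[psi/phi]     estimates <phi, psi> / <phi, phi>,
        E[(psi/phi)^2] estimates <psi, psi> / <phi, phi>,
    so the ratio below is invariant under psi -> c * psi.
    """
    ratio = psi_vals / phi_vals
    overlap = np.mean(ratio)       # ~ <phi, psi> / <phi, phi>
    norm = np.mean(ratio ** 2)     # ~ <psi, psi> / <phi, phi>
    return 1.0 - overlap ** 2 / norm

# Toy numerical check of scale invariance (synthetic values, not a real MC run):
# a rescaled copy of phi gives zero loss, unlike an L2 difference would.
rng = np.random.default_rng(0)
phi = rng.normal(size=4096)
psi_rescaled = 7.3 * phi                        # same state, different scale
psi_perturbed = phi + 0.5 * rng.normal(size=4096)
print(scale_invariant_overlap_loss(psi_rescaled, phi))   # ~ 0.0
print(scale_invariant_overlap_loss(psi_perturbed, phi))  # > 0
```

By Cauchy-Schwarz the loss lies in [0, 1] and vanishes exactly when the trial state is proportional to the reference, which is the property a scale-invariant pre-training objective is meant to capture.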