Probabilistic graph networks for learning physics simulations


Bibliographic Details
Published in: Journal of computational physics, Vol. 513, p. 113137
Main Authors: Prakash, Sakthi Kumar Arul; Tucker, Conrad
Format: Journal Article
Language: English
Published: Elsevier Inc., 15.09.2024

Summary: Inductive biases play a critical role in enabling Graph Networks (GN) to learn particle and mesh-based physics simulations. In this paper, we propose two generalizable inductive biases that minimize rollout error and energy accumulation. GNs conditioned on the input states and relying on the Mean Squared Error (MSE) loss function implicitly assume Gaussian-distributed output errors. Consequently, GNs may either assign probability density to infeasible regions in the state space of the deterministic physics problem or fail to assign density to feasible regions. Instead, we advocate for maximizing the likelihood of the actual target distribution, challenging the underlying assumptions of MSE-based regression models with our proposed conditional normalizing flows (cNF) decoder. We find that this inductive bias enables GNs to significantly improve their next-state prediction accuracy. Existing sequential GNs encode temporal dependencies by autoregressively processing the latent representations of the input data. In our work, we find that inducing the Arrow-of-Time inductive bias through an autoregressive encoding step, before autoregressively processing the resulting latent vectors, enables GNs to better minimize rollout error. We critically analyze the impact of existing inductive biases on rollout error and energy accumulation and discover that the choice of biases encoded in a GN, rather than the number of inductive biases, has a substantial impact on forward simulation prediction.

Highlights:
• Probabilistic inductive biases are crucial for minimizing rollout error and energy accumulation when learning physics simulations.
• Inducing an Arrow-of-Time bias via autoregressive encoding improves Graph Networks' simulation prediction accuracy.
• A conditional normalizing flows decoder minimizes prediction and energy-accumulation errors in Graph Networks.
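The abstract's central claim, that MSE regression implicitly assumes a unit-variance Gaussian error model while a conditional normalizing flows (cNF) decoder maximizes the likelihood of the actual target distribution, can be illustrated with a minimal sketch. The single affine transformation, the linear conditioners `W_s` and `W_t`, and the NumPy formulation below are illustrative assumptions for a one-dimensional target, not the paper's actual architecture:

```python
import numpy as np

def log_standard_normal(z):
    # log density of a standard normal base distribution
    return -0.5 * (z ** 2 + np.log(2 * np.pi))

def cnf_log_likelihood(y, x, W_s, W_t):
    # Hypothetical one-step conditional flow: y = z * exp(s) + t, where the
    # log-scale s and shift t are linear in the conditioning input state x.
    s = x @ W_s
    t = x @ W_t
    z = (y - t) * np.exp(-s)  # invert the flow to recover the base variable
    # change of variables: log p(y|x) = log N(z; 0, 1) + log |dz/dy|
    #                                 = log N(z; 0, 1) - s
    return log_standard_normal(z) - s

def mse_neg_log_likelihood(y, y_pred):
    # MSE regression is (up to a constant) the negative log-likelihood of a
    # Gaussian with fixed unit variance centered at the point prediction.
    return 0.5 * (y - y_pred) ** 2 + 0.5 * np.log(2 * np.pi)
```

With the log-scale `s` fixed at zero, the flow's log-likelihood reduces to the negated unit-variance Gaussian NLL that MSE training optimizes; letting the flow learn `s` (and, in deeper flows, non-affine transforms) is what allows the decoder to concentrate density on feasible regions of the state space instead of spreading it symmetrically around a point estimate.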
ISSN:0021-9991
DOI:10.1016/j.jcp.2024.113137