Constraining cosmological parameters from N-body simulations with variational Bayesian neural networks

Bibliographic Details
Published in: Frontiers in astronomy and space sciences, Vol. 10
Main Authors: Hortúa, Héctor J.; García, Luz Ángela; Castañeda C., Leonardo
Format: Journal Article
Language: English
Published: Frontiers Media S.A., 27.06.2023
Summary: Introduction: Methods based on deep learning have recently been applied to recover astrophysical parameters, thanks to the ability of these techniques to capture information from complex data. One such scheme is the approximate Bayesian neural network (BNN), which has been shown to yield a posterior distribution over the parameter space that is extremely helpful for uncertainty quantification. However, modern neural networks tend to produce overly confident uncertainty estimates and introduce bias when BNNs are applied to data. Method: In this work, we implement multiplicative normalizing flows (MNFs), a family of approximate posteriors for the parameters of BNNs, with the purpose of enhancing the flexibility of the variational posterior distribution, to extract Ω_m, h, and σ_8 from the QUIJOTE simulations. We compare this method with standard BNNs and the Flipout estimator. Results: We find that the use of MNFs consistently outperforms standard BNNs, with a 21% percent difference in mean squared error, in addition to high-accuracy extraction of σ_8 (r² = 0.99) with precise and consistent uncertainty estimates. Discussion: These findings imply that MNFs provide a more realistic predictive distribution closer to the true posterior, mitigating the bias introduced by the variational approximation and allowing us to work with well-calibrated networks.
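For context on the baseline the abstract compares against: a standard variational BNN places a mean-field Gaussian posterior over each weight, samples weights via the reparameterization trick, and regularizes with a KL term; MNFs enrich exactly this posterior with a flow-transformed multiplicative auxiliary variable. The NumPy sketch below is an illustration only (toy shapes, hypothetical function names), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu, rho, rng):
    # Mean-field Gaussian variational posterior q(w) = N(mu, sigma^2),
    # with sigma = softplus(rho) to keep it positive.
    # Sampling w = mu + sigma * eps is the reparameterization trick.
    sigma = np.log1p(np.exp(rho))
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps, sigma

def kl_to_standard_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ) summed over all weights: the
    # regularizer added to the data likelihood in variational training.
    return np.sum(np.log(1.0 / sigma) + (sigma**2 + mu**2) / 2.0 - 0.5)

# Toy linear layer: 3 inputs -> 2 outputs (shapes chosen for illustration).
mu = np.zeros((3, 2))
rho = np.full((3, 2), -3.0)
w, sigma = sample_weights(mu, rho, rng)

x = np.ones((1, 3))
y = x @ w  # one stochastic forward pass; repeating it yields predictive samples
kl = kl_to_standard_normal(mu, sigma)
```

Repeated stochastic forward passes give the predictive distribution whose spread is the uncertainty estimate discussed in the abstract; the MNF approach replaces the factorized Gaussian q(w) above with a more flexible distribution built from normalizing flows.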
ISSN: 2296-987X
DOI: 10.3389/fspas.2023.1139120