Gradient-enhanced multifidelity neural networks for high-dimensional function approximation


Bibliographic Details
Main Authors: Nagawkar, Jethro; Leifsson, Leifur
Format: Journal Article
Language: English
Published: 22.03.2021

Summary: In this work, a novel multifidelity machine learning (ML) model, the gradient-enhanced multifidelity neural network (GEMFNN), is proposed. This model is a multifidelity version of gradient-enhanced neural networks (GENNs), as it uses both function and gradient information available at multiple levels of fidelity to make function approximations. Its construction is similar to that of multifidelity neural networks (MFNNs). The model is tested on three analytical functions: a one-, a two-, and a 20-variable function. It is compared to neural networks (NNs), GENNs, and MFNNs, and the number of samples required to reach a global accuracy of 0.99 coefficient of determination (R^2) is measured. GEMFNNs required 18, 120, and 600 high-fidelity samples for the one-, two-, and 20-dimensional cases, respectively, to meet the target accuracy. NNs performed best on the one-variable case, requiring only ten samples, while GENNs worked best on the two-variable case, requiring 120 samples. GEMFNNs worked best for the 20-variable case, requiring nearly eight times fewer samples than the nearest competitor, GENNs. For this case, NNs and MFNNs did not reach the target global accuracy even after using 10,000 high-fidelity samples. This work demonstrates the benefits of using gradient as well as multifidelity information in NNs for high-dimensional problems.
DOI:10.48550/arxiv.2103.12247
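The abstract's two key ingredients can be illustrated in a short sketch: a surrogate that takes the low-fidelity prediction as an extra input (the MFNN-style construction), trained against a loss that penalizes both function-value and derivative errors (the gradient-enhanced part). The sketch below is not the paper's implementation; the toy functions, network size, and loss weighting are all assumptions made for illustration, using a single hidden layer with an analytic input-derivative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-ins for the paper's benchmark functions:
def f_hi(x):  return np.sin(8.0 * x)             # high-fidelity function
def df_hi(x): return 8.0 * np.cos(8.0 * x)       # its exact derivative (gradient data)
def f_lo(x):  return np.sin(8.0 * x + 0.3)       # cheap, biased low-fidelity model
def df_lo(x): return 8.0 * np.cos(8.0 * x + 0.3)

# One-hidden-layer tanh network whose input is [x, f_lo(x)]: the
# high-fidelity surrogate sees both the raw input and the low-fidelity
# prediction, mirroring the MFNN-style construction the abstract describes.
H = 20
W1 = rng.normal(0.0, 1.0, (H, 2)); b1 = np.zeros((H, 1))
W2 = rng.normal(0.0, 0.1, (1, H)); b2 = np.zeros((1, 1))

def predict(x):
    """Return the network output y(x) and its analytic derivative dy/dx."""
    x = np.atleast_1d(x).astype(float)
    z = np.vstack([x, f_lo(x)])                  # (2, N) network input
    h = np.tanh(W1 @ z + b1)                     # (H, N) hidden activations
    y = (W2 @ h + b2).ravel()
    # Chain rule through both inputs, including d f_lo/dx:
    dz_dx = np.vstack([np.ones_like(x), df_lo(x)])
    dy_dx = (W2 @ ((1.0 - h**2) * (W1 @ dz_dx))).ravel()
    return y, dy_dx

def gradient_enhanced_loss(x, lam=0.1):
    """Value MSE plus a weighted derivative MSE (the GENN-style loss)."""
    y, dy = predict(x)
    return np.mean((y - f_hi(x))**2) + lam * np.mean((dy - df_hi(x))**2)

# Sanity check: the analytic dy/dx matches a central finite difference.
x = np.linspace(-1.0, 1.0, 11)
eps = 1e-6
fd = (predict(x + eps)[0] - predict(x - eps)[0]) / (2.0 * eps)
print("max |fd - analytic| =", np.max(np.abs(fd - predict(x)[1])))
print("loss at init =", gradient_enhanced_loss(x))
```

Training this loss by gradient descent on the weights (e.g. via an autodiff framework, since the derivative term introduces second derivatives) would complete the picture; the sketch only shows how the multifidelity input and the gradient-enhanced objective fit together.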