PDE-DKL: PDE-constrained deep kernel learning in high dimensionality
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 30.01.2025 |
Summary | Many physics-informed machine learning methods for PDE-based problems rely on
Gaussian processes (GPs) or neural networks (NNs). However, both face
limitations when data are scarce and the dimensionality is high. Although GPs
are known for their robust uncertainty quantification in low-dimensional
settings, their computational complexity becomes prohibitive as the
dimensionality increases. In contrast, while conventional NNs can accommodate
high-dimensional input, they often require extensive training data and do not
offer uncertainty quantification. To address these challenges, we propose a
PDE-constrained Deep Kernel Learning (PDE-DKL) framework that combines deep
learning and GPs under explicit PDE constraints. Specifically, NNs learn a
low-dimensional latent representation of the high-dimensional PDE problem,
reducing its complexity. GPs then perform kernel regression subject to the
governing PDEs, ensuring accurate solutions and principled uncertainty
quantification even when available data are limited. This synergy unifies the
strengths of NNs and GPs, yielding high accuracy, robust uncertainty
estimates, and computational efficiency for high-dimensional PDEs. Numerical
experiments demonstrate that PDE-DKL achieves high accuracy with reduced data
requirements, highlighting its potential as a practical, reliable, and
scalable solver for complex PDE-based applications in science and engineering. |
DOI | 10.48550/arxiv.2501.18258 |
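
The summary above describes a deep-kernel-learning architecture: a neural network maps high-dimensional inputs to a low-dimensional latent space, and a GP performs kernel regression on those latent features. The sketch below is a minimal, illustrative PyTorch version of that idea only; the network width, latent dimension, toy target function, and hyperparameter settings are assumptions, and the explicit PDE constraints of PDE-DKL (which require derivative observations or residual terms in the GP) are omitted.

```python
# Minimal deep-kernel-learning sketch (illustrative, not the authors' exact method):
# an MLP feature extractor feeds an exact GP with an RBF kernel, trained jointly
# by minimizing the negative log marginal likelihood.
import torch
import torch.nn as nn

torch.manual_seed(0)

class FeatureExtractor(nn.Module):
    """MLP mapping high-dimensional inputs x to a low-dimensional latent z."""
    def __init__(self, in_dim, latent_dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

def rbf_kernel(z1, z2, lengthscale, variance):
    """Squared-exponential kernel evaluated on latent features."""
    d2 = torch.cdist(z1, z2).pow(2)
    return variance * torch.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(z_train, y_train, z_test, lengthscale, variance, noise):
    """Standard GP regression mean and variance in the latent space."""
    K = rbf_kernel(z_train, z_train, lengthscale, variance)
    K = K + noise * torch.eye(len(z_train))
    K_s = rbf_kernel(z_test, z_train, lengthscale, variance)
    K_ss = rbf_kernel(z_test, z_test, lengthscale, variance)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y_train, L)
    mean = K_s @ alpha
    v = torch.cholesky_solve(K_s.T, L)
    var = torch.diag(K_ss - K_s @ v)
    return mean, var

# Hypothetical toy data: a scalar quantity observed at a few high-dimensional points.
d, n = 50, 30
X = torch.rand(n, d)
y = X.sum(dim=1, keepdim=True).pow(2)
y = (y - y.mean()) / y.std()  # normalize targets for numerical stability

extractor = FeatureExtractor(d)
log_ls, log_var, log_noise = (nn.Parameter(torch.zeros(())) for _ in range(3))
opt = torch.optim.Adam(list(extractor.parameters()) + [log_ls, log_var, log_noise], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    z = extractor(X)
    K = rbf_kernel(z, z, log_ls.exp(), log_var.exp()) + log_noise.exp() * torch.eye(n)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y, L)
    # Negative log marginal likelihood (up to a constant): data fit + complexity penalty.
    nll = 0.5 * (y.T @ alpha).squeeze() + torch.log(torch.diag(L)).sum()
    nll.backward()
    opt.step()

# Predict with uncertainty at new points.
X_test = torch.rand(5, d)
with torch.no_grad():
    mean, var = gp_posterior(extractor(X), y, extractor(X_test),
                             log_ls.exp(), log_var.exp(), log_noise.exp())
print(mean.squeeze(), var.clamp_min(1e-9).sqrt())
```

In a PDE-constrained variant, the GP would additionally be conditioned on (or penalized by) the governing PDE residual at collocation points, which is where the framework's "explicit PDE constraints" enter; that step is not shown in this sketch.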