CDAnet: A Physics‐Informed Deep Neural Network for Downscaling Fluid Flows
Published in | Journal of Advances in Modeling Earth Systems, Vol. 14, No. 12
---|---
Main Authors |
Format | Journal Article
Language | English
Published | Washington: John Wiley & Sons, Inc., 01.12.2022
Summary: | Generating high-resolution flow fields is of paramount importance for various applications in engineering and climate sciences. This is typically achieved by solving the governing dynamical equations on high-resolution meshes, suitably nudged toward available coarse-scale data. To alleviate the computational cost of this downscaling process, we develop a physics-informed deep neural network (PI-DNN) that mimics the mapping of coarse-scale information to its fine-scale counterpart performed by continuous data assimilation (CDA). Specifically, the PI-DNN is trained within the theoretical framework described by Foias et al. (2014, https://doi.org/10.1070/rm2014v069n02abeh004891) to generate a surrogate of the theorized determining form map from the coarse-resolution data to the fine-resolution solution. We demonstrate the PI-DNN methodology through application to 2D Rayleigh-Bénard convection, and assess its performance by contrasting its predictions against those obtained by dynamical downscaling using CDA. The analysis suggests that the surrogate is constrained by conditions on the spatio-temporal resolution of the input similar to those required by the theoretical determining form map. The numerical results also suggest that the surrogate's downscaled fields are of comparable accuracy to those obtained by dynamical downscaling using CDA. Consistent with the analysis of Farhat, Jolly, and Titi (2015, https://doi.org/10.48550/arxiv.1410.176), temperature observations are not needed for the PI-DNN to predict the fine-scale velocity, pressure, and temperature fields.
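To make the training objective concrete, the following is a minimal, hypothetical sketch of a physics-informed loss for such a coarse-to-fine surrogate of 2D Rayleigh-Bénard convection. The network architecture, the variable and function names, and the particular non-dimensional Boussinesq form used for the residuals are illustrative assumptions, not the authors' released implementation; CDAnet's actual architecture and scaling may differ.

```python
# A minimal sketch (not the authors' released code) of a physics-informed
# loss for a coarse-to-fine surrogate of 2D Rayleigh-Benard convection.
# Architecture, names, and the non-dimensional Boussinesq form are assumptions.
import torch
import torch.nn as nn


class DownscalingNet(nn.Module):
    """Maps a space-time query point plus a feature vector summarizing the
    coarse-scale history to the fine-scale state (u, v, T, p) at that point."""

    def __init__(self, coarse_dim: int, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + coarse_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 4),  # outputs (u, v, T, p)
        )

    def forward(self, xyt, coarse_feat):
        return self.net(torch.cat([xyt, coarse_feat], dim=-1))


def pde_residuals(model, xyt, coarse_feat, Pr=1.0, Ra=1e5):
    """Residuals of one non-dimensional Boussinesq form, computed by autograd."""
    xyt = xyt.detach().clone().requires_grad_(True)
    out = model(xyt, coarse_feat)
    u, v, T, p = (out[:, i:i + 1] for i in range(4))

    def grads(f):
        g = torch.autograd.grad(f, xyt, torch.ones_like(f), create_graph=True)[0]
        return g[:, 0:1], g[:, 1:2], g[:, 2:3]  # d/dx, d/dy, d/dt

    u_x, u_y, u_t = grads(u)
    v_x, v_y, v_t = grads(v)
    T_x, T_y, T_t = grads(T)
    p_x, p_y, _ = grads(p)
    u_xx, _, _ = grads(u_x)
    _, u_yy, _ = grads(u_y)
    v_xx, _, _ = grads(v_x)
    _, v_yy, _ = grads(v_y)
    T_xx, _, _ = grads(T_x)
    _, T_yy, _ = grads(T_y)

    r_u = u_t + u * u_x + v * u_y + p_x - Pr * (u_xx + u_yy)
    r_v = v_t + u * v_x + v * v_y + p_y - Pr * (v_xx + v_yy) - Ra * Pr * T
    r_T = T_t + u * T_x + v * T_y - (T_xx + T_yy)
    r_div = u_x + v_y  # incompressibility
    return r_u, r_v, r_T, r_div


def pinn_loss(model, xyt_ref, feat_ref, ref_uvT, xyt_col, feat_col, w_pde=1.0):
    """Misfit against fine-scale reference fields available during training,
    plus a PDE residual penalty evaluated at collocation points."""
    pred = model(xyt_ref, feat_ref)
    # (u, v, T) only; pressure is constrained through the momentum residuals.
    data_loss = torch.mean((pred[:, :3] - ref_uvT) ** 2)
    res = pde_residuals(model, xyt_col, feat_col)
    pde_loss = sum(torch.mean(r ** 2) for r in res)
    return data_loss + w_pde * pde_loss
```

The residual penalty is what makes the surrogate "physics-informed"; the weight w_pde balancing the data and residual terms would typically need tuning.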
Plain Language Summary
Detailed descriptions of the solution of dynamical systems are essential for a comprehensive understanding of their behavior. Obtaining sufficiently fine solutions is, however, computationally expensive. Mathematical tools have been developed to drive the solution of a dynamical system toward a reference solution of which only coarse-scale observations are available. These techniques rely either on statistical correlations between the low-resolution observations and the higher-resolution reference solution, or on solving the governing equations on a high-resolution mesh with an additional forcing term that depends on the discrepancy between the observations and the solution. Recently, a continuous map, called the determining form, from the coarse-scale data to the high-resolution solution fields was proven to exist, subject to reasonable assumptions on the resolution of the coarse data. We propose a physics-informed deep neural network that serves as a surrogate of this theoretical map, predicting the higher-resolution solution fields from a history of coarse-scale data. The proposed network is shown to mimic the performance of dynamical downscaling, and is governed by similar constraints on the spatial and temporal resolutions of the observations.
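The nudging idea described above can be illustrated with a short, hypothetical sketch: a high-resolution model is integrated with an extra relaxation term that pulls its coarse-scale content toward the available coarse observations. The names cda_step and coarse_project, the toy dynamics, and the relaxation coefficient mu below are illustrative stand-ins for the model tendency, the coarse-observation interpolant, and the CDA nudging parameter; they are not taken from the paper.

```python
# A toy illustration (not from the paper) of the nudging / continuous data
# assimilation idea: integrate a high-resolution model with an extra forcing
# term proportional to the mismatch between its coarse-scale content and the
# available coarse observations. All names and the toy dynamics are assumptions.
import numpy as np


def coarse_project(u, block=4):
    """Block-average then re-expand: a simple stand-in for the interpolant
    of coarse-scale observations used in CDA."""
    c = u.reshape(-1, block).mean(axis=1)
    return np.repeat(c, block)


def cda_step(u, obs_coarse, rhs, mu, dt):
    """One forward-Euler step of the nudged system
        du/dt = rhs(u) - mu * (coarse_project(u) - obs_coarse),
    relaxing the coarse content of the high-resolution state toward the data."""
    nudge = mu * (coarse_project(u) - obs_coarse)
    return u + dt * (rhs(u) - nudge)


# Usage with toy linear dynamics and synthetic coarse observations.
rhs = lambda u: -0.1 * u
obs = coarse_project(np.random.randn(64))  # only coarse-scale data available
u = np.zeros(64)                           # high-resolution state, started cold
for _ in range(200):
    u = cda_step(u, obs, rhs, mu=1.0, dt=0.01)
```

In the actual CDA setting, rhs would be the discretized governing equations (e.g., the Boussinesq system); the key point is that the forcing acts only through the coarse-scale discrepancy.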
Key Points
A physics‐informed deep neural network is proposed for downscaling coarse observations in space and time
The neural network is trained to approximate the system's determining form map, mapping coarse‐scale information to higher‐resolution fields
The neural network's predictions are comparable to those obtained by dynamically downscaling using continuous data assimilation
ISSN: | 1942-2466
DOI: | 10.1029/2022MS003051