Creating Synthetic Radar Imagery Using Convolutional Neural Networks

Bibliographic Details
Published in: Journal of Atmospheric and Oceanic Technology, Vol. 35, No. 12, pp. 2323–2338
Main Authors: Veillette, Mark S.; Hassey, Eric P.; Mattioli, Christopher J.; Iskenderian, Haig; Lamey, Patrick M.
Format: Journal Article
Language: English
Published: Boston: American Meteorological Society, 1 December 2018

Summary: In this work, deep convolutional neural networks (CNNs) are shown to be an effective model for fusing heterogeneous geospatial data to create radar-like analyses of precipitation intensity (i.e., synthetic radar). The CNN trained in this work has a directed acyclic graph (DAG) structure that takes inputs from multiple data sources with varying spatial resolutions. These data sources include geostationary satellite imagery (1-km visible and four 4-km infrared bands), lightning flash density from Earth Networks' Total Lightning Network, and numerical model data from NOAA's 13-km Rapid Refresh model. A regression is performed in the final layer of the network using NEXRAD-derived data mapped onto a 1-km grid as the target variable. The outputs of the CNN are fused with analyses from NEXRAD to create seamless radar mosaics that extend to offshore sectors and beyond. The model is calibrated and validated using both NEXRAD and spaceborne radar from NASA's Global Precipitation Measurement (GPM) Mission's Core Observatory satellite. Advantages over the random forest–based approach used in previous works are discussed.
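The fusion step described in the abstract requires bringing inputs with different native resolutions (1-km visible imagery, 4-km infrared bands, 13-km model fields) onto a common 1-km target grid before they can feed a single network. A minimal NumPy sketch of that preprocessing is shown below; it is not the paper's code, and the grid size, channel counts, and function names are illustrative assumptions. The paper's DAG CNN performs resolution matching with learned layers rather than simple resampling.

```python
import numpy as np

def upsample_nearest(field, factor):
    """Nearest-neighbor upsampling of a 2D field by an integer factor
    (Kronecker product with an all-ones block)."""
    return np.kron(field, np.ones((factor, factor), dtype=field.dtype))

def build_input_stack(vis_1km, ir_4km_bands, ltg_1km, rap_13km_fields):
    """Stack heterogeneous inputs into one (H, W, C) tensor on the 1-km grid.

    vis_1km:         (H, W) 1-km visible satellite channel
    ir_4km_bands:    list of (H/4, W/4) 4-km infrared bands
    ltg_1km:         (H, W) 1-km lightning flash density
    rap_13km_fields: list of (H/13, W/13) 13-km numerical model fields
    """
    channels = [vis_1km, ltg_1km]
    channels += [upsample_nearest(b, 4) for b in ir_4km_bands]
    channels += [upsample_nearest(f, 13) for f in rap_13km_fields]
    return np.stack(channels, axis=-1)

# Example on a hypothetical 52 x 52 tile (52 = 4 * 13, so both factors divide evenly).
H = W = 52
x = build_input_stack(
    vis_1km=np.random.rand(H, W),
    ir_4km_bands=[np.random.rand(H // 4, W // 4) for _ in range(4)],
    ltg_1km=np.random.rand(H, W),
    rap_13km_fields=[np.random.rand(H // 13, W // 13) for _ in range(2)],
)
print(x.shape)  # (52, 52, 8): 1 visible + 1 lightning + 4 IR + 2 model channels
```

The resulting (H, W, C) tensor is the kind of multi-channel input a CNN regression head can map to the 1-km NEXRAD-derived target grid.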
ISSN: 0739-0572
eISSN: 1520-0426
DOI: 10.1175/JTECH-D-18-0010.1