Synthesis of Multispectral Optical Images From SAR/Optical Multitemporal Data Using Conditional Generative Adversarial Networks

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, vol. 16, no. 8, pp. 1220-1224
Main Authors: Bermudez, Jose D.; Happ, Patrick N.; Feitosa, Raul Q.; Oliveira, Dario A. B.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2019
Summary: The synthesis of realistic data using deep learning techniques has greatly improved the performance of classifiers in handling incomplete data. Remote sensing applications that have profited from these techniques include translating images between different sensors, improving image resolution, and completing missing temporal or spatial data, such as in cloudy optical images. In this context, this letter proposes a new deep-learning-based framework to synthesize missing or corrupted multispectral optical images using multimodal/multitemporal data. Specifically, we use conditional generative adversarial networks (cGANs) to generate the missing optical image by exploiting the corresponding synthetic aperture radar (SAR) data together with SAR-optical data from the same area at a different acquisition date. The proposed framework was evaluated in two land-cover applications over tropical regions, where cloud coverage is a major problem: crop recognition and wildfire detection. In both applications, our proposal was superior to the alternative approaches tested in our experiments. In particular, our approach outperformed recent cGAN-based proposals for cloud removal, on average, by 7.7% and 8.6% in terms of overall accuracy and F1-score, respectively.
ISSN: 1545-598X, 1558-0571
DOI: 10.1109/LGRS.2019.2894734
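
The summary describes conditioning a cGAN on the SAR image at the target date plus a co-registered SAR-optical pair acquired at another date. As a rough illustration of that conditioning idea only, the following is a minimal PyTorch sketch of a conditional generator and a PatchGAN-style discriminator; all channel counts, layer sizes, and names are assumptions for illustration and do not reproduce the authors' architecture.

    # Minimal cGAN sketch for SAR-to-optical synthesis conditioned on
    # multitemporal data. Channel counts and layer sizes are assumed,
    # not taken from the letter.
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Maps the conditioning stack (SAR at the target date plus a
        SAR-optical pair from a reference date) to a synthetic
        multispectral optical image."""
        def __init__(self, sar_ch=2, opt_ch=10, base=64):
            super().__init__()
            in_ch = sar_ch + sar_ch + opt_ch  # target SAR + reference SAR + reference optical
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2, inplace=True),
                nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
                nn.BatchNorm2d(base),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(base, opt_ch, 4, stride=2, padding=1),
                nn.Tanh(),  # outputs in [-1, 1], common for GAN image synthesis
            )

        def forward(self, sar_t, sar_ref, opt_ref):
            # Condition on all available modalities by channel concatenation.
            return self.net(torch.cat([sar_t, sar_ref, opt_ref], dim=1))

    class Discriminator(nn.Module):
        """PatchGAN-style critic that judges whether an optical image is
        real or synthetic, given the same conditioning stack."""
        def __init__(self, sar_ch=2, opt_ch=10, base=64):
            super().__init__()
            in_ch = sar_ch + sar_ch + opt_ch + opt_ch  # conditioning + candidate optical
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
                nn.BatchNorm2d(base * 2),
                nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(base * 2, 1, 4, padding=1),  # per-patch real/fake logits
            )

        def forward(self, sar_t, sar_ref, opt_ref, opt_candidate):
            return self.net(torch.cat([sar_t, sar_ref, opt_ref, opt_candidate], dim=1))

    if __name__ == "__main__":
        # Hypothetical tensor shapes: batch of 1, 2-band SAR, 10-band optical, 256x256 tiles.
        sar_t = torch.randn(1, 2, 256, 256)
        sar_ref = torch.randn(1, 2, 256, 256)
        opt_ref = torch.randn(1, 10, 256, 256)
        fake_opt = Generator()(sar_t, sar_ref, opt_ref)
        print(fake_opt.shape)  # torch.Size([1, 10, 256, 256])

In such a setup, the generator output would stand in for the cloudy or missing optical acquisition before feeding downstream classifiers. cGAN-based image translation typically combines the adversarial loss with an L1 reconstruction term; the exact losses and network depth used in the letter are not detailed in this record.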