Reciprocal Translation between SAR and Optical Remote Sensing Images with Cascaded-Residual Adversarial Networks
Main Authors | , ,
---|---
Format | Journal Article
Language | English
Published | 24.01.2019
Subjects |
Online Access | Get full text
Summary:

Despite the advantages of all-weather, all-day, high-resolution imaging, synthetic aperture radar (SAR) images are far less viewed and used by the general public because human vision is not adapted to the microwave scattering phenomenon. Expert interpreters, however, can be trained to learn the mapping rules from SAR to optical by comparing side-by-side SAR and optical images. This paper attempts to develop machine intelligence that is trainable with large volumes of co-registered SAR and optical images to translate SAR images into optical versions for assisted SAR image interpretation. Reciprocal SAR-optical image translation is a challenging task because it is raw-data translation between two physically very different sensing modalities. This paper proposes a novel reciprocal adversarial network scheme in which cascaded residual connections and a hybrid L1-GAN loss are employed. The network is trained and tested on both spaceborne GF-3 and airborne UAVSAR images, and results are presented for datasets of different resolutions and polarizations and compared with other state-of-the-art methods.
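The record does not reproduce the paper's equations or architecture, so the following is only a minimal PyTorch sketch of what a hybrid L1-GAN objective with cascaded residual connections conventionally looks like: a pix2pix-style adversarial term plus a weighted L1 reconstruction term against the co-registered optical target. All names and hyperparameters in it (CascadedResidualGenerator, PatchDiscriminator, lambda_l1) are hypothetical, and the paper's reciprocal scheme presumably pairs two such translators, one for SAR-to-optical and one for optical-to-SAR.

```python
# Minimal sketch only: the paper's exact architecture and loss weighting are
# not given in this record. Every class and parameter name below is
# hypothetical, illustrating the conventional pix2pix-style hybrid L1-GAN loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Plain conv residual block; 'cascaded' here means several such blocks
    chained in sequence, each adding its input back to its output."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class CascadedResidualGenerator(nn.Module):
    """Toy SAR-to-optical translator: 1-channel SAR amplitude in, RGB out."""
    def __init__(self, in_ch=1, out_ch=3, width=64, n_blocks=6):
        super().__init__()
        self.head = nn.Conv2d(in_ch, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(width, out_ch, 3, padding=1)

    def forward(self, sar):
        return torch.tanh(self.tail(self.blocks(self.head(sar))))

class PatchDiscriminator(nn.Module):
    """Tiny patch discriminator that outputs a grid of real/fake logits."""
    def __init__(self, in_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def generator_loss(D, G, sar, optical, lambda_l1=100.0):
    """Hybrid L1-GAN generator objective: adversarial realism term plus an
    L1 reconstruction term against the co-registered optical target."""
    fake = G(sar)
    logits = D(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    l1 = F.l1_loss(fake, optical)
    return adv + lambda_l1 * l1

# Smoke test on random tensors standing in for a co-registered SAR/optical pair.
G, D = CascadedResidualGenerator(), PatchDiscriminator()
loss = generator_loss(D, G, torch.randn(1, 1, 64, 64), torch.randn(1, 3, 64, 64))
```

In this conventional formulation, the L1 term anchors low-frequency structure to the paired optical target while the adversarial term sharpens high-frequency texture.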
The Fréchet inception distance (FID) is used to quantitatively evaluate translation performance. The possibility of unsupervised learning with unpaired SAR and optical images is also explored. Results show that the proposed translation network works well in many scenarios and could potentially be used for assisted SAR interpretation.
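For context, the FID mentioned above is the standard Fréchet distance between Gaussian fits to deep Inception features of the real and translated image sets; lower values indicate closer distributions:

```latex
% Fréchet inception distance between real (r) and translated (g) image sets,
% computed from the mean \mu and covariance \Sigma of deep Inception features.
\mathrm{FID}(r, g) = \lVert \mu_r - \mu_g \rVert_2^2
  + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2 \left( \Sigma_r \Sigma_g \right)^{1/2} \right)
```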
DOI: 10.48550/arxiv.1901.08236