GCOANet: A gradient consistency constraints semi-supervised network for optical image-assisted SAR despeckling

Bibliographic Details
Published in: International Journal of Applied Earth Observation and Geoinformation, Vol. 142, p. 104677
Main Authors: Yang, Yang; Pan, Jun; Xu, Jiangong; Fan, Zhongli; Geng, Zeming; Li, Junli
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2025

Summary:
•Optically-guided SAR despeckling with semi-supervised learning.
•Rapid cross-domain feature generation with non-local filtering and a conditional diffusion model.
•Multi-scale blind-spot network and interval downsampling strategy.
•All-directional gradient loss for homogeneous and heterogeneous supervision.

Synthetic Aperture Radar (SAR), an active remote sensing technology with all-weather, all-time capability, plays an essential role in environmental monitoring and disaster management. However, SAR synthesizes images through a coherent imaging mechanism, which inevitably introduces speckle into the acquired images. Speckle reduces the signal-to-noise ratio of an image through random variation of pixel values, posing challenges for subsequent applications. Meanwhile, advances in image registration have made co-registered multi-source remote sensing data readily obtainable. Building on these developments, this paper presents a Gradient Consistency constraints semi-supervised network for Optical image-Assisted SAR despeckling (GCOANet). The method generates cross-domain reference images by exploiting optical pixel correlations to reconstruct paired SAR images, mitigating the feature misalignment caused by modal differences between SAR and optical imagery. A conditional diffusion model then learns the mapping between SAR and reference images, eliminating the need for paired SAR/optical data. During both training and testing, the reference image is first generated by this pre-trained conditional diffusion model. Subsequently, a multi-scale blind-spot despeckling network suppresses speckle by fusing SAR and reference features while preventing the loss of blind-pixel information. Finally, an all-directional gradient loss is proposed to rapidly distinguish homogeneous from heterogeneous regions and suppress speckle in each separately.
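The interval downsampling strategy paired with the blind-spot network can be illustrated with a minimal sketch. The idea, as in pixel-shuffle-style downsampling used by other blind-spot despeckling methods, is to split an image into spatially disjoint sub-images by sampling every s-th pixel, so that neighbouring pixels end up in different sub-images; the function name and exact scheme below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def interval_downsample(img, s=2):
    """Split an image into s*s sub-images by taking every s-th pixel.

    Neighbouring pixels of the original image land in different
    sub-images, which is the property blind-spot training relies on
    (illustrative sketch; GCOANet's exact strategy may differ).
    """
    h, w = img.shape
    h, w = h - h % s, w - w % s          # crop to a multiple of s
    img = img[:h, :w]
    return [img[i::s, j::s] for i in range(s) for j in range(s)]

img = np.arange(16, dtype=float).reshape(4, 4)
subs = interval_downsample(img, s=2)     # 4 sub-images of shape (2, 2)
```

For s=2, `subs[0]` holds the pixels at even rows and even columns, `subs[3]` those at odd rows and odd columns, so each sub-image is a quarter-resolution view of the scene.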
Extensive experiments on both real and simulated data verify the effectiveness of the presented method, which retains fine texture details while producing smooth homogeneous areas. Downstream applications further demonstrate its effectiveness in real-world scenarios.
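An all-directional gradient loss of the kind named in the highlights can be sketched as an L1 penalty on finite-difference gradients taken along the horizontal, vertical, and both diagonal directions. The formulation below is an illustrative assumption; the paper's actual loss, including how it weights homogeneous versus heterogeneous regions, is not reproduced here:

```python
import numpy as np

def directional_gradients(img):
    """Finite-difference gradients in four directions:
    horizontal, vertical, main diagonal, anti-diagonal."""
    h  = img[:, 1:] - img[:, :-1]
    v  = img[1:, :] - img[:-1, :]
    d1 = img[1:, 1:] - img[:-1, :-1]
    d2 = img[1:, :-1] - img[:-1, 1:]
    return h, v, d1, d2

def all_directional_gradient_loss(pred, ref):
    """Mean absolute difference between the directional gradients of
    the despeckled output and the reference image (illustrative L1
    form, not the paper's exact loss)."""
    return sum(np.abs(gp - gr).mean()
               for gp, gr in zip(directional_gradients(pred),
                                 directional_gradients(ref)))
```

Because the diagonal differences respond to edges that the axis-aligned ones miss, a loss of this shape penalizes texture discrepancies in any orientation while vanishing wherever the two images share the same local structure.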
ISSN: 1569-8432
DOI: 10.1016/j.jag.2025.104677