AMCA: Attention-guided Multi-scale Context Aggregation Network for Remote Sensing Image Change Detection

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, p. 1
Main Authors: Xu, Xintao; Yang, Zhe; Li, Jinjiang
Format: Journal Article
Language: English
Published: IEEE, 29.04.2023

Summary: Remote sensing image change detection is key to understanding surface changes. Although existing change detection methods achieve good results, some structural details are missed and detection accuracy still needs improvement. We therefore propose an attention-guided multi-scale context aggregation network (AMCA) for remote sensing image change detection. First, we use a fully attentional pyramid module (FAPM) to enhance the deep feature information of the original image, and introduce a dense feature fusion module (DFFM) to fully fuse the bi-temporal features and obtain the change regions. Second, the channel-wise cross fusion transformer (CCT) and channel-wise cross attention (CCA) not only fuse channel features that focus on different semantic patterns but also bridge the semantic gap between multi-scale features. Next, a transformer decoder maps the learned high-level semantic information into the pixel space to refine the original features. In addition, a context extraction module (CEM) captures the local and global associations of the feature maps. Finally, an attention aggregation module (AAM) effectively combines feature information at different scales. Extensive experiments on three public change detection datasets show that the proposed method outperforms other methods in both visual interpretation and quantitative analysis.
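The summary describes channel-wise cross attention fusing bi-temporal features. The paper's actual CCA/DFFM implementations are not given here, so the following is only a minimal NumPy sketch of the general idea under stated assumptions: each feature map's channels are re-weighted by an attention vector derived (via global average pooling and softmax) from the other temporal map, and the re-weighted maps are summed. The function name `channel_attention_fuse` and all shapes are hypothetical, not from the paper.

```python
import numpy as np

def channel_attention_fuse(feat_a, feat_b):
    """Hypothetical sketch of channel-attention-style fusion of two
    bi-temporal feature maps, each shaped (C, H, W). Channel descriptors
    come from global average pooling; each map is re-weighted by a
    softmax attention vector computed from the *other* map's descriptor,
    then the two weighted maps are summed into one fused map."""
    # Global average pooling -> one descriptor per channel, shape (C,)
    desc_a = feat_a.mean(axis=(1, 2))
    desc_b = feat_b.mean(axis=(1, 2))

    def softmax(x):
        e = np.exp(x - x.max())  # shift for numerical stability
        return e / e.sum()

    # Cross attention: weight each map's channels by the other descriptor
    w_a = softmax(desc_b)        # attention applied to feat_a, shape (C,)
    w_b = softmax(desc_a)        # attention applied to feat_b, shape (C,)
    fused = w_a[:, None, None] * feat_a + w_b[:, None, None] * feat_b
    return fused                 # shape (C, H, W)
```

In a real network these attention weights would come from learned projections rather than raw pooled descriptors; the sketch only illustrates how cross-derived channel weights let one temporal branch modulate the other before aggregation.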
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2023.3272006