Global Context-Enhanced Network for Pixel-Level Change Detection in Remote Sensing Images

Bibliographic Details
Published in IAENG International Journal of Computer Science Vol. 51; no. 8; p. 1060
Main Authors Zhao, Zixue, Li, Zhengpeng, Miao, Jiawei, Wu, Kunyang, Wu, Jiansheng
Format Journal Article
Language English
Published Hong Kong: International Association of Engineers, 01.08.2024

Summary: Despite the ongoing advancements in deep learning, challenges persist in the domain of change detection in remote sensing imagery. Objects with intricate structures and features may exhibit different shapes or appearances at different times or spatial locations. While most models aim to improve the performance of change detection tasks, these enhancements may lead to significantly increased computational cost. In this paper, we propose a global context-enhanced network. Firstly, we use ResNet18 to extract dual-temporal features, which are then represented as concise semantic labels by an image semantic extractor. Subsequently, we process these semantic labels through a contextual transformer encoder to generate more refined remote sensing semantic labels enriched with abundant contextual information. The refined semantic labels are integrated with the original features and processed through a Transformer decoder to generate enhanced dual-temporal feature maps. Finally, through the processing of the classification head, we obtain pixel-level prediction maps. Extensive experiments conducted on two public change detection datasets yielded strong results, achieving an F1 score of 89.95% on the WHUCD dataset and 95.16% on the SVCD dataset. When compared to state-of-the-art change detection models, our approach not only achieves significant performance gains but also maintains relatively high computational efficiency. Our method excels at capturing relevant features and their interdependencies in the input data, thereby enhancing the model's ability to represent relationships between different features. This results in a significant performance improvement without adding to the computational complexity.
ISSN:1819-656X
1819-9224
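
The pipeline described in the summary (ResNet18 backbone, semantic tokenization, contextual transformer encoder, transformer decoder, classification head) can be sketched as below. This is a minimal illustrative sketch assuming PyTorch/torchvision, not the authors' implementation; all module names, token counts, and layer sizes are assumptions.

```python
# Minimal sketch of the described change-detection pipeline (illustrative assumptions only).
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SemanticTokenizer(nn.Module):
    """Condenses a dense feature map into a few semantic tokens via spatial attention."""

    def __init__(self, channels: int, num_tokens: int = 4):
        super().__init__()
        self.attn = nn.Conv2d(channels, num_tokens, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:  # feats: (B, C, H, W)
        attn = self.attn(feats).flatten(2).softmax(dim=-1)        # (B, L, H*W)
        tokens = attn @ feats.flatten(2).transpose(1, 2)          # (B, L, C)
        return tokens


class ChangeDetectionSketch(nn.Module):
    """ResNet18 features -> semantic tokens -> transformer encoder/decoder -> pixel logits."""

    def __init__(self, channels: int = 64, num_tokens: int = 4, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep only the stem and layer1 so dense features stay at 1/4 resolution.
        self.encoder = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool, backbone.layer1
        )
        self.tokenizer = SemanticTokenizer(channels, num_tokens)
        enc = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(enc, num_layers=1)
        dec = nn.TransformerDecoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec, num_layers=1)
        self.classifier = nn.Conv2d(2 * channels, num_classes, kernel_size=1)

    def _enhance(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        tokens = self.context_encoder(self.tokenizer(feats))      # context-enriched tokens
        pixels = feats.flatten(2).transpose(1, 2)                 # (B, H*W, C) pixel queries
        enhanced = self.decoder(pixels, tokens)                   # project context back to pixels
        return enhanced.transpose(1, 2).reshape(b, c, h, w)

    def forward(self, img_t1: torch.Tensor, img_t2: torch.Tensor) -> torch.Tensor:
        f1 = self._enhance(self.encoder(img_t1))
        f2 = self._enhance(self.encoder(img_t2))
        logits = self.classifier(torch.cat([f1, f2], dim=1))      # fuse dual-temporal features
        # Upsample logits to the input resolution for pixel-level change prediction.
        return nn.functional.interpolate(
            logits, size=img_t1.shape[-2:], mode="bilinear", align_corners=False
        )


if __name__ == "__main__":
    model = ChangeDetectionSketch()
    t1, t2 = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
    print(model(t1, t2).shape)  # torch.Size([1, 2, 128, 128])
```

The bitemporal images share one backbone and one token-based enhancement path in this sketch; the per-pixel change map comes from concatenating the two enhanced feature maps and applying a 1x1 classification head, which keeps the added attention cost limited to a small number of semantic tokens.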