Multi-Level Feature Fusion and Attention Network for High-Resolution Remote Sensing Image Semantic Labeling


Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, p. 1
Main Authors: Zhang, Yijie; Cheng, Jian; Bai, Haiwei; Wang, Qi; Liang, Xingyu
Format: Journal Article
Language: English
Published: IEEE, 2022

More Information
Summary: Semantic labeling of high-resolution remote sensing images (HRRSIs) has long been an important research field in remote sensing image analysis. However, remote sensing images contain both substantial low-level features and high-level features, which makes them difficult to recognize. In this letter, we propose a multi-level feature fusion and attention network (MFANet) to adaptively capture and fuse multi-level features in a more effective and efficient manner. Specifically, the backbone of our network is divided into two branches, a detail branch and a semantic branch, where the detail branch extracts low-level features and the semantic branch extracts high-level features. A Deep Atrous Spatial Pyramid Pooling (DASPP) module is embedded at the end of the semantic branch to capture multiscale features as a supplement to the high-level features. Notably, a feature alignment and fusion (FAF) module is used to align and fuse features from different stages to enhance the feature representation. Furthermore, a context attention (CA) module processes the feature maps from the two branches to establish contextual dependencies in the spatial and channel dimensions, which helps the network focus on more meaningful features. Experiments are carried out on the ISPRS Vaihingen and Potsdam datasets, and the results show that the proposed method achieves better performance than other state-of-the-art methods.
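
The abstract describes the overall layout (two-branch backbone, DASPP, FAF, and CA modules) but not the implementation. The PyTorch sketch below only illustrates that layout under assumed channel sizes and module internals; the class names mirror the abstract, while everything inside them (convolution stacks, dilation rates, the simplified channel-only attention) is an assumption rather than the authors' code.

```python
# Minimal sketch of the two-branch design described in the abstract.
# All module internals are assumptions; only the overall structure follows the text.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1, dilation=1):
    """3x3 conv -> BN -> ReLU, the basic unit used throughout this sketch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=dilation,
                  dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DASPP(nn.Module):
    """Assumed atrous spatial pyramid: parallel dilated convs fused by a 1x1 conv."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [conv_bn_relu(in_ch, out_ch, dilation=r) for r in rates])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class FAF(nn.Module):
    """Feature alignment and fusion: upsample the deep map and fuse it with the shallow one."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.fuse = conv_bn_relu(low_ch + high_ch, out_ch)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:], mode='bilinear',
                             align_corners=False)
        return self.fuse(torch.cat([low, high], dim=1))


class CA(nn.Module):
    """Context attention, reduced here to a channel gate; the spatial part is omitted."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)


class MFANetSketch(nn.Module):
    """Two-branch backbone: a shallow detail branch and a deeper semantic branch with DASPP."""
    def __init__(self, num_classes=6):
        super().__init__()
        self.detail = nn.Sequential(conv_bn_relu(3, 64, stride=2),
                                    conv_bn_relu(64, 64),
                                    conv_bn_relu(64, 128, stride=2))
        self.semantic = nn.Sequential(conv_bn_relu(3, 64, stride=2),
                                      conv_bn_relu(64, 128, stride=2),
                                      conv_bn_relu(128, 256, stride=2),
                                      conv_bn_relu(256, 256, stride=2))
        self.daspp = DASPP(256, 256)
        self.faf = FAF(128, 256, 128)
        self.ca = CA(128)
        self.head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        low = self.detail(x)                  # low-level detail features (1/4 resolution)
        high = self.daspp(self.semantic(x))   # high-level + multiscale context (1/16 resolution)
        fused = self.ca(self.faf(low, high))  # align, fuse, then attend
        return F.interpolate(self.head(fused), size=x.shape[2:],
                             mode='bilinear', align_corners=False)


if __name__ == "__main__":
    # e.g. a 3-channel 512x512 tile, 6 classes as in the ISPRS Vaihingen/Potsdam labeling task
    logits = MFANetSketch(num_classes=6)(torch.randn(1, 3, 512, 512))
    print(logits.shape)  # torch.Size([1, 6, 512, 512])
```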
ISSN: 1545-598X (print), 1558-0571 (electronic)
DOI: 10.1109/LGRS.2022.3184553