Context-Driven Feature-Focusing Network for Semantic Segmentation of High-Resolution Remote Sensing Images


Bibliographic Details
Published in Remote Sensing (Basel, Switzerland), Vol. 15, No. 5, p. 1348
Main Authors Tan, Xiaowei, Xiao, Zhifeng, Zhang, Yanru, Wang, Zhenjiang, Qi, Xiaole, Li, Deren
Format Journal Article
Language English
Published Basel: MDPI AG, 01.03.2023

More Information
Summary: High-resolution remote sensing images (HRRSIs) cover a broad range of geographic regions and contain a wide variety of artificial objects and natural elements at various scales that make up different image contexts. In semantic segmentation tasks based on deep convolutional neural networks (DCNNs), features at different resolutions are not equally effective for extracting ground objects of different scales. In this article, we propose a novel context-driven feature-focusing network (CFFNet) that focuses on multi-scale ground objects in fused features of different resolutions. The CFFNet consists of three components: a depth-residual encoder, a context-driven feature-focusing (CFF) decoder, and a classifier. First, features with different resolutions are extracted by the depth-residual encoder to construct a feature pyramid. The multi-scale information in the fused features is then extracted by the feature-focusing (FF) module in the CFF decoder, after which the context-focusing (CF) module adaptively computes the focus weights of the different scale features to obtain the weighted multi-scale fused feature representation. Finally, the classifier produces the segmentation results. Experiments are conducted on the public LoveDA and GID datasets. Quantitative and qualitative comparisons with state-of-the-art (SOTA) segmentation methods demonstrate the rationality and effectiveness of the proposed approach.
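The abstract describes the overall pipeline (depth-residual encoder, FF module, CF module, classifier) but not its layer-level details. The PyTorch sketch below is therefore only an illustration of how such a context-driven feature-focusing decoder could be wired: the ResNet-50 backbone standing in for the depth-residual encoder, the dilated-convolution branches in the FF module, and the pooled-context softmax weighting in the CF module are assumptions for this sketch, not the authors' exact design.

```python
# Illustrative sketch only: internals of the FF/CF modules are assumed, not taken
# from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class FeatureFocusing(nn.Module):
    """FF module (assumed form): extract multi-scale cues from a fused feature map
    with parallel dilated 3x3 convolutions."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )

    def forward(self, x):
        # One feature map per scale branch.
        return [branch(x) for branch in self.branches]


class ContextFocusing(nn.Module):
    """CF module (assumed form): derive per-scale focus weights from global context
    and re-weight the multi-scale branches before fusing them."""
    def __init__(self, channels, num_scales):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, num_scales, 1),
        )

    def forward(self, fused, scale_feats):
        # Softmax over the scale dimension gives adaptive focus weights.
        w = torch.softmax(self.weight_net(fused), dim=1)           # (B, S, 1, 1)
        stacked = torch.stack(scale_feats, dim=1)                  # (B, S, C, H, W)
        return (w.unsqueeze(2) * stacked).sum(dim=1)               # (B, C, H, W)


class CFFNetSketch(nn.Module):
    """Minimal CFFNet-style pipeline: depth-residual encoder -> CFF decoder
    (FF + CF) -> classifier."""
    def __init__(self, num_classes, channels=256):
        super().__init__()
        backbone = resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])
        self.lateral = nn.ModuleList(nn.Conv2d(c, channels, 1) for c in (256, 512, 1024, 2048))
        self.ff = FeatureFocusing(channels)
        self.cf = ContextFocusing(channels, num_scales=4)
        self.classifier = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x):
        feats, h = [], self.stem(x)
        for stage, lat in zip(self.stages, self.lateral):
            h = stage(h)
            feats.append(lat(h))                                   # feature pyramid
        # Fuse pyramid levels by upsampling to the finest resolution and summing.
        size = feats[0].shape[-2:]
        fused = sum(F.interpolate(f, size=size, mode="bilinear", align_corners=False) for f in feats)
        focused = self.cf(fused, self.ff(fused))                   # weighted multi-scale fusion
        logits = self.classifier(focused)
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = CFFNetSketch(num_classes=7)                            # e.g. LoveDA has 7 classes
    print(model(torch.randn(1, 3, 512, 512)).shape)                # -> (1, 7, 512, 512)
```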
ISSN: 2072-4292
DOI: 10.3390/rs15051348