Cross-modality attentive feature fusion for object detection in multispectral remote sensing imagery

Bibliographic Details
Published in: Pattern Recognition, Vol. 130, p. 108786
Main Authors: Fang, Qingyun; Wang, Zhaokui
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.10.2022

Summary:
• We propose a simple yet effective CMAFF module that fuses the complementary information of multispectral remote sensing images with joint common-modality and differential-modality attentions.
• We confirm the effectiveness of our cross-modality attentive feature fusion module through extensive ablation studies.
• We design a new two-stream object detection network, YOLOFusion, for multispectral remote sensing images and verify its performance.

Cross-modality fusion of the complementary information in multispectral remote sensing image pairs can improve the perception ability of detection algorithms, making them more robust and reliable for a wider range of applications, such as nighttime detection. In contrast to prior methods, we argue that different features should be processed differently: modality-specific features should be retained and enhanced, while modality-shared features should be cherry-picked from the RGB and thermal IR modalities. Following this idea, a novel and lightweight multispectral feature fusion approach with joint common-modality and differential-modality attentions is proposed, named Cross-Modality Attentive Feature Fusion (CMAFF). Given the intermediate feature maps of RGB and thermal images, our module infers attention maps from two derived modalities, common and differential, in parallel; the attention maps are then multiplied with the respective input feature maps for adaptive feature selection or enhancement. Extensive experiments demonstrate that our proposed approach achieves state-of-the-art performance at a low computation cost.
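To make the two-branch attention described above concrete, here is a minimal PyTorch sketch of the idea. All names (ChannelAttention, CMAFF), the sum/difference definitions of the common and differential modalities, and the pooling-plus-MLP attention layout are illustrative assumptions based on the abstract, not the authors' released code.

```python
# Sketch of the CMAFF idea from the abstract: decompose RGB/thermal features
# into a common (shared) and a differential (modality-specific) component,
# infer a channel attention map for each in parallel, and multiply back.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: global avg/max pooling into a shared bottleneck MLP.
    The layout is an assumption; the paper may use a different attention form."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling -> (b, c)
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling -> (b, c)
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)


class CMAFF(nn.Module):
    """Hypothetical fusion block: common-modality attention selects shared
    features; differential-modality attention enhances specific ones."""

    def __init__(self, channels: int):
        super().__init__()
        self.common_att = ChannelAttention(channels)
        self.diff_att = ChannelAttention(channels)

    def forward(self, f_rgb: torch.Tensor, f_ir: torch.Tensor) -> torch.Tensor:
        f_common = 0.5 * (f_rgb + f_ir)   # modality-shared component
        f_diff = f_rgb - f_ir             # modality-specific component
        # Attention maps are inferred from the two derived modalities in
        # parallel, then multiplied with the respective inputs.
        f_selected = f_common * self.common_att(f_common)
        f_enhanced = f_diff * self.diff_att(f_diff)
        return f_selected + f_enhanced    # fused multispectral feature


if __name__ == "__main__":
    # Usage: fuse 256-channel intermediate feature maps of a two-stream backbone.
    fusion = CMAFF(channels=256)
    rgb_feat = torch.randn(2, 256, 40, 40)
    ir_feat = torch.randn(2, 256, 40, 40)
    print(fusion(rgb_feat, ir_feat).shape)  # torch.Size([2, 256, 40, 40])
```

The sum/difference decomposition is what keeps the block lightweight: one attention branch weighs evidence both sensors agree on, while the other amplifies what each sensor sees uniquely, matching the selection/enhancement split the abstract describes.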
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2022.108786