Revisiting Feature Fusion for RGB-T Salient Object Detection

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 31, No. 5, pp. 1804-1818
Main Authors: Zhang, Qiang; Xiao, Tonglin; Huang, Nianchang; Zhang, Dingwen; Han, Jungong
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2021
Summary: While many RGB-based saliency detection algorithms have recently shown the capability of segmenting salient objects from an image, they still suffer from unsatisfactory performance when dealing with complex scenarios, insufficient illumination, or occluded appearances. To overcome this problem, this article studies RGB-T saliency detection, which exploits the thermal modality's robustness against illumination changes and occlusion. To achieve this goal, we revisit feature fusion for mining intrinsic RGB-T saliency patterns and propose a novel deep feature fusion network, which consists of multi-scale, multi-modality, and multi-level feature fusion modules. Specifically, the multi-scale feature fusion module captures rich contextual features from each modality, while the multi-modality and multi-level feature fusion modules integrate complementary features across modalities and across levels of features, respectively. To demonstrate the effectiveness of the proposed approach, we conduct comprehensive experiments on the RGB-T saliency detection benchmark. The experimental results demonstrate that our approach outperforms other state-of-the-art methods, as well as conventional feature fusion modules, by a large margin.
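
As a rough illustration of the fusion ideas described in the summary, the Python snippet below pairs multi-scale context extraction (parallel dilated 3x3 convolutions) with a simple multi-modality fusion step (channel concatenation followed by a 1x1 convolution). This is a minimal PyTorch sketch under assumed design choices: the record does not specify the paper's actual module designs, channel widths, or fusion operators, and every class and parameter name here (MultiScaleContext, ModalityFusion, dilations, and so on) is hypothetical.

    # Illustrative sketch only, not the authors' exact architecture.
    import torch
    import torch.nn as nn


    class MultiScaleContext(nn.Module):
        """Capture context at several receptive fields via dilated convolutions."""

        def __init__(self, channels: int, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(channels, channels, kernel_size=3,
                              padding=d, dilation=d, bias=False),
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                )
                for d in dilations
            )
            # Project the concatenated branches back to the input width.
            self.project = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.project(torch.cat([b(x) for b in self.branches], dim=1))


    class ModalityFusion(nn.Module):
        """Fuse same-level RGB and thermal features: concatenation + 1x1 conv."""

        def __init__(self, channels: int):
            super().__init__()
            self.rgb_ctx = MultiScaleContext(channels)
            self.t_ctx = MultiScaleContext(channels)
            self.fuse = nn.Sequential(
                nn.Conv2d(channels * 2, channels, kernel_size=1),
                nn.ReLU(inplace=True),
            )

        def forward(self, f_rgb: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
            return self.fuse(torch.cat([self.rgb_ctx(f_rgb), self.t_ctx(f_t)], dim=1))


    if __name__ == "__main__":
        # Two 64-channel feature maps from hypothetical RGB and thermal encoders.
        f_rgb = torch.randn(1, 64, 56, 56)
        f_t = torch.randn(1, 64, 56, 56)
        fused = ModalityFusion(64)(f_rgb, f_t)
        print(fused.shape)  # torch.Size([1, 64, 56, 56])

Concatenation plus 1x1 projection is only one common reading of "integrate complementary features"; element-wise addition or attention-based weighting would be equally plausible fusion operators.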
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2020.3014663