Material-Guided Multiview Fusion Network for Hyperspectral Object Tracking

Bibliographic Details
Published in: IEEE Transactions on Geoscience and Remote Sensing, Vol. 62, pp. 1-15
Main Authors: Li, Zhuanfeng; Xiong, Fengchao; Zhou, Jun; Lu, Jianfeng; Zhao, Zhuang; Qian, Yuntao
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2024

Summary: Hyperspectral videos (HSVs) have more potential in object tracking than color videos, thanks to their material identification ability. Nevertheless, previous works have not fully explored the benefits of the material information, resulting in limited representation ability and tracking accuracy. To address this issue, this article introduces a material-guided multiview fusion network (MMF-Net) for improved tracking. Specifically, we combine false-color information, hyperspectral information, and material information obtained by hyperspectral unmixing to provide a rich multiview representation of the object. Cross-material attention (CMA) is employed to capture the interaction among materials, enabling the network to focus on the materials most relevant to the target. Furthermore, leveraging the discriminative ability of the material view, a novel material-guided multiview fusion module is proposed to capture both intraview and cross-view long-range spatial dependencies for effective feature aggregation. Thanks to the enhanced representation ability of each view and the integration of the complementary advantages of all views, our network is more capable of suppressing tracking drift in various challenging scenes and achieving accurate object localization. Extensive experiments show that our tracker achieves state-of-the-art tracking performance. The source code will be available at https://github.com/hscv/MMF-Net.
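The abstract describes cross-material attention (CMA) only at a high level. The sketch below is a minimal, illustrative interpretation of attention across material-abundance channels, assuming a PyTorch implementation; the class name CrossMaterialAttention, the tensor layout, and the parameters num_materials and feat_dim are assumptions for illustration and are not taken from the authors' released code (see https://github.com/hscv/MMF-Net for the official implementation).

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn


class CrossMaterialAttention(nn.Module):
    """Attends across material channels so the tracker can weight the
    materials most relevant to the target (hypothetical layout)."""

    def __init__(self, num_materials: int, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, material_feats: torch.Tensor) -> torch.Tensor:
        # material_feats: (batch, num_materials, feat_dim), e.g. pooled
        # features of per-material abundance maps from hyperspectral unmixing.
        attended, _ = self.attn(material_feats, material_feats, material_feats)
        # Residual connection plus normalization keeps the original
        # per-material features while mixing in cross-material context.
        return self.norm(material_feats + attended)


if __name__ == "__main__":
    cma = CrossMaterialAttention(num_materials=6, feat_dim=256)
    x = torch.randn(2, 6, 256)   # 2 frames, 6 endmembers, 256-dim features
    print(cma(x).shape)          # torch.Size([2, 6, 256])
```

Under these assumptions, the self-attention over the material dimension lets each material's features be re-weighted by their interaction with the others, which is one plausible way to realize the "focus on the most relevant materials" behavior the abstract describes.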
ISSN: 0196-2892, 1558-0644
DOI: 10.1109/TGRS.2024.3366536