Efficient Single-Object Tracker Based on Local-Global Feature Fusion

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 34, No. 2, pp. 1114-1122
Main Authors: Ni, Xiaoyu; Yuan, Liang; Lv, Kai
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2024
Summary: Since Vision Transformers (ViTs) were introduced into computer vision, they have developed rapidly across a variety of visual tasks, and they have recently been applied to visual tracking. The Transformer can adaptively capture global similarity between target objects and search regions, achieving competitive performance. However, Transformer architectures often require a large amount of training data and computing resources, and they lack the inductive biases inherent to images. The advantages of convolutional neural networks (CNNs) in extracting local similarities are thus not fully exploited. To address these problems, we propose a lightweight tracking architecture that combines CNN and Transformer components in the feature fusion stage. Specifically, the Local-Global Feature Interaction (LGFI) module and the Feature Cross-Fusion (FCF) module are the key components of our approach. The LGFI module pairs a Transformer global-information branch with a Transformer-like CNN local-information branch, simultaneously establishing global dependencies and enhancing local feature similarity, and then aggregates their outputs. The FCF module uses multi-head cross-attention and a convolutional feed-forward network to fuse the features of templates and search regions. Finally, a classification and regression head predicts the exact location of the target. Extensive experiments demonstrate that our method achieves better tracking performance than the baseline method when both are trained on less data. Meanwhile, without any extra training data, the proposed method also obtains results comparable to other state-of-the-art trackers on six challenging benchmarks: GOT-10k, LaSOT, TrackingNet, OTB100, UAV123, and NFS. Furthermore, our model is lightweight compared with the baseline, with fewer parameters and lower FLOPs, while running at real-time speed.
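
To make the fusion step concrete, the following is a minimal PyTorch sketch of an FCF-style block built only from the abstract's description (multi-head cross-attention plus a convolutional feed-forward network). The class names, feature dimensions, normalization choices, and residual placement are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvFFN(nn.Module):
    """Convolutional feed-forward network (assumed design): pointwise
    expansion, depthwise 3x3 convolution for local mixing, pointwise
    projection back to the input width."""
    def __init__(self, dim, expansion=4):
        super().__init__()
        hidden = dim * expansion
        self.net = nn.Sequential(
            nn.Conv2d(dim, hidden, kernel_size=1),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden),
            nn.GELU(),
            nn.Conv2d(hidden, dim, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)

class FeatureCrossFusion(nn.Module):
    """Hypothetical FCF-style block: search-region tokens attend to
    template tokens via multi-head cross-attention, then pass through a
    convolutional feed-forward network. Residual connections assumed."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = ConvFFN(dim)

    def forward(self, search, template):
        # search:   (B, C, Hs, Ws) feature map of the search region
        # template: (B, C, Ht, Wt) feature map of the target template
        B, C, Hs, Ws = search.shape
        q = search.flatten(2).transpose(1, 2)     # (B, Hs*Ws, C) queries
        kv = template.flatten(2).transpose(1, 2)  # (B, Ht*Wt, C) keys/values
        attn_out, _ = self.cross_attn(self.norm_q(q),
                                      self.norm_kv(kv),
                                      self.norm_kv(kv))
        q = q + attn_out                          # residual cross-attention
        fused = q.transpose(1, 2).reshape(B, C, Hs, Ws)
        return fused + self.ffn(fused)            # residual conv FFN

if __name__ == "__main__":
    fcf = FeatureCrossFusion(dim=256, num_heads=8)
    z = torch.randn(1, 256, 8, 8)    # template features
    x = torch.randn(1, 256, 16, 16)  # search-region features
    print(fcf(x, z).shape)           # torch.Size([1, 256, 16, 16])
```

In this reading, cross-attention supplies the global template-to-search comparison while the depthwise convolution in the feed-forward path reintroduces the local inductive bias the abstract attributes to CNNs; the actual LGFI/FCF details are in the paper itself.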
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2023.3290868