Dual-Graph Regularized Discriminative Multitask Tracker


Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 20, No. 9, pp. 2303-2315
Main Authors: Fan, Baojie; Cong, Yang; Tang, Yandong
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2018

Summary: Multitask and low-rank learning methods have attracted increasing attention in visual tracking. However, most trackers focus only on learning an appearance subspace basis or the sparsity and low-rankness of the representation, and thus do not make full use of the structural information among and inside the target candidates (or samples). In this paper, we propose a dual-graph regularized discriminative low-rank learning model for a multitask tracker, which integrates the discriminative subspace with the intrinsic geometric structure among tasks. By constructing dual-graph regularizations from two views of the multitask observations, the developed model not only exploits the intrinsic relationships among tasks and preserves the spatial layout of the local patches inside each candidate, but also learns the salient features of the target samples. This yields a better target representation and improves the tracker's performance. Moreover, the developed tracker is a collaborative multitask tracking model that simultaneously learns a discriminative subspace with adaptive dimension and an optimal classifier. A collaborative metric, which integrates both classification reliability and representation accuracy, is then developed to select the best candidate. Encouraging experimental results on a large set of public video sequences show that our tracker performs favorably against many state-of-the-art trackers.
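
Note: the record gives no formulas. As a rough illustration only, dual-graph regularized low-rank representation models of this kind are commonly written in a form such as the following, with hypothetical trade-off weights \lambda, \alpha, \beta; the paper's actual objective and notation may differ:

\min_{Z} \; \|X - D Z\|_F^2 \;+\; \lambda \|Z\|_{*} \;+\; \alpha \, \operatorname{tr}\!\left(Z L_t Z^{\top}\right) \;+\; \beta \, \operatorname{tr}\!\left(Z^{\top} L_p Z\right)

Here X stacks the candidate observations, D is a learned subspace basis (dictionary), \|Z\|_{*} is the nuclear norm encouraging a low-rank coefficient matrix, and L_t and L_p are graph Laplacians built from the two views named in the summary: a task graph among candidates and a patch graph over the local patches inside each candidate. Under the same assumptions, the collaborative metric would combine a classifier response f(x_i) with reconstruction accuracy, e.g. selecting \arg\max_i \; f(x_i) - \gamma \|x_i - D z_i\|_2^2 for some weight \gamma (again a hypothetical form, not taken from the paper).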
ISSN: 1520-9210
EISSN: 1941-0077
DOI: 10.1109/TMM.2018.2804762