Multi-cue Visual Tracking Using Robust Feature-Level Fusion Based on Joint Sparse Representation

Bibliographic Details
Published in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1194–1201
Main Authors: Lan, Xiangyuan; Ma, Andy Jinhua; Yuen, Pong Chi
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2014

Summary: The use of multiple features for tracking has proven effective because the limitations of each feature can be compensated by the others. Since different types of variation, such as illumination changes, occlusion, and pose changes, may occur in a video sequence, especially in long sequences, dynamically selecting the appropriate features is one of the key problems in this approach. To address this issue in multi-cue visual tracking, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method exploits the properties of sparse representation to dynamically remove unreliable features from the fusion, yielding robust tracking performance. Experimental results on publicly available videos show that the proposed method outperforms both existing sparse-representation-based and fusion-based trackers.
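
This record does not include the paper's formulation, but the core idea the summary describes, coding each cue over its own template dictionary while enforcing a shared sparsity pattern and discarding cues with large reconstruction error, can be sketched as an l2,1-regularized joint sparse coding problem. The NumPy sketch below is an illustrative assumption, not the authors' implementation; the function joint_sparse_fuse, the ISTA solver, and the parameters lam and tau are all hypothetical.

```python
import numpy as np

def joint_sparse_fuse(X, D, lam=0.5, n_iter=500, tau=0.4):
    """Jointly code K feature cues over per-cue template dictionaries.

    Solves  min_C  0.5 * sum_k ||x_k - D_k c_k||^2 + lam * ||C||_{2,1}
    with ISTA, where column c_k of C codes cue k and the l2,1 penalty
    forces all cues to share one sparsity pattern over the templates.
    (Hypothetical sketch, not the paper's exact model.)
    """
    K = len(X)
    n_atoms = D[0].shape[1]
    C = np.zeros((n_atoms, K))
    # conservative step size 1/L, with L the largest Lipschitz constant
    L = max(np.linalg.norm(Dk, 2) ** 2 for Dk in D)
    for _ in range(n_iter):
        # gradient of the data term, one column per cue
        G = np.column_stack(
            [D[k].T @ (D[k] @ C[:, k] - X[k]) for k in range(K)]
        )
        Z = C - G / L
        # row-wise soft thresholding: proximal operator of the l2,1 norm
        row_norms = np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
        C = np.maximum(0.0, 1.0 - (lam / L) / row_norms) * Z
    # normalized per-cue reconstruction residual; a large residual marks
    # a cue (e.g. an occluded one) as unreliable so it can be dropped
    resid = np.array(
        [np.linalg.norm(X[k] - D[k] @ C[:, k]) / np.linalg.norm(X[k])
         for k in range(K)]
    )
    return C, resid, resid < tau

# Toy check: three cues observe one target, the third cue is corrupted.
rng = np.random.default_rng(0)
d, n = 64, 20
D = [rng.standard_normal((d, n)) for _ in range(3)]
c_true = np.zeros(n)
c_true[[2, 7]] = [1.0, -0.5]                      # shared support
X = [Dk @ c_true + 0.01 * rng.standard_normal(d) for Dk in D]
X[2] = rng.standard_normal(d)                     # e.g. an occluded cue
C, resid, reliable = joint_sparse_fuse(X, D)
print(resid.round(2), reliable)  # the third cue should show a large residual
```

The row-wise shrinkage is what couples the cues: a template is either used by all cues or by none, so a cue that cannot be explained on the shared support stands out with a large residual and can be excluded from fusion.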
ISSN: 1063-6919; 2575-7075
DOI: 10.1109/CVPR.2014.156