Visual Tracking via Nonnegative Multiple Coding

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 19, No. 12, pp. 2680-2691
Main Authors: Liu, Fanghui; Gong, Chen; Zhou, Tao; Fu, Keren; He, Xiangjian; Yang, Jie
Format: Journal Article
Language: English
Published: IEEE, 01.12.2017

Summary: It has been extensively observed that an accurate appearance model is critical to achieving satisfactory performance in robust object tracking. Most existing top-ranked methods rely on linear representation over a single dictionary, which leads to an incomplete understanding of the target appearance. To address this problem, in this paper, we propose a novel appearance model named "nonnegative multiple coding" (NMC) to accurately represent a target. First, a series of local dictionaries is created with different predefined numbers of nearest neighbors, and the contributions of these dictionaries are then learned automatically. As a result, this ensemble of dictionaries can comprehensively exploit the appearance information carried by all the constituent dictionaries. Second, whereas existing methods explicitly impose a nonnegativity constraint on the coefficient vectors, the proposed model directly deploys an efficient ℓ2-norm regularization to achieve a similar nonnegative effect with theoretical guarantees. Moreover, an efficient occlusion detection scheme is designed to alleviate tracking drift by investigating whether negative templates are selected to represent a severely occluded target. Experimental results on two benchmarks demonstrate that our NMC tracker achieves superior performance compared with state-of-the-art methods.
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2017.2708424
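
To make the coding step described in the summary more concrete, the following is a minimal sketch, not the authors' implementation, of ℓ2-regularized coding of a candidate over an ensemble of local dictionaries. The closed-form ridge solution, the softmax weighting of dictionaries by reconstruction error, and names such as nmc_score, lam, and beta are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def code_over_dictionary(y, D, lam=0.1):
    """l2-regularized (ridge) coding of candidate y over dictionary D.

    Assumed closed form: alpha = (D^T D + lam * I)^{-1} D^T y.
    This only illustrates the regularized least-squares step; the paper's
    l2 regularization is reported to encourage nonnegative-like coefficients.
    """
    DtD = D.T @ D
    return np.linalg.solve(DtD + lam * np.eye(D.shape[1]), D.T @ y)

def nmc_score(y, dictionaries, lam=0.1, beta=5.0):
    """Score a candidate y against an ensemble of local dictionaries.

    dictionaries: list of (d x k_m) arrays, each built from a different
    predefined number of nearest-neighbor templates. The dictionary weights
    here come from a softmax over reconstruction errors, which is an
    assumption standing in for the paper's learned contributions.
    """
    errors = []
    for D in dictionaries:
        alpha = code_over_dictionary(y, D, lam)
        errors.append(np.linalg.norm(y - D @ alpha) ** 2)
    errors = np.array(errors)
    weights = np.exp(-beta * errors)
    weights /= weights.sum()
    # Lower weighted reconstruction error maps to a higher confidence score.
    return float(np.exp(-weights @ errors))

# Toy usage: three local dictionaries built from 5, 10, and 15 templates.
rng = np.random.default_rng(0)
y = rng.standard_normal(64)
dicts = [rng.standard_normal((64, k)) for k in (5, 10, 15)]
print(nmc_score(y, dicts))
```

In this sketch, the candidate whose weighted reconstruction error across all local dictionaries is smallest would be preferred as the tracking result; the occlusion-detection step of the paper, which checks whether negative templates dominate the representation, is not modeled here.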