Adaptive Decontamination of the Training Set: A Unified Formulation for Discriminative Visual Tracking
Published in | 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1430-1438 |
---|---|
Main Authors | Martin Danelljan, Gustav Häger, Fahad Shahbaz Khan, Michael Felsberg |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.06.2016 |
Series | IEEE Conference on Computer Vision and Pattern Recognition |
Summary: | Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets. |
---|---|
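The abstract describes minimizing a single loss jointly over the appearance model and per-sample quality weights, so that corrupted samples are downweighted while correct ones gain influence. The sketch below illustrates that idea with alternating minimization on a toy weighted ridge-regression model; it is not the paper's actual formulation (which operates on a discriminative correlation filter tracker), and the function names, the quadratic weight regularizer with a uniform sample prior, and all parameter values are illustrative assumptions.

```python
import numpy as np

def update_weights(errors, mu):
    """Closed-form weight update for fixed model residuals.

    Solves  min_a  sum_k a_k * e_k + mu * sum_k a_k^2
            s.t.   sum_k a_k = 1,  a_k >= 0
    (uniform sample prior assumed). Stationarity gives
    a_k = (nu - e_k) / (2*mu) clipped at zero, with the
    multiplier nu chosen so the active weights sum to one.
    """
    order = np.argsort(errors)
    e = np.asarray(errors)[order]
    # Find the largest active set: the m smallest-error samples
    # for which the resulting weights stay strictly positive.
    for m in range(len(e), 0, -1):
        nu = (2.0 * mu + e[:m].sum()) / m
        if nu - e[m - 1] > 0:
            break
    return np.maximum(nu - errors, 0.0) / (2.0 * mu)

def fit_joint(X, y, mu=1.0, lam=0.01, iters=5):
    """Alternate between the model and the sample weights.

    With weights fixed, the model is a weighted ridge regression;
    with the model fixed, the weights have the closed form above.
    Each step decreases the same joint loss.
    """
    n, d = X.shape
    a = np.full(n, 1.0 / n)          # start from uniform weights
    for _ in range(iters):
        W = np.diag(a)
        w = np.linalg.solve(X.T @ W @ X + lam * np.eye(d),
                            X.T @ W @ y)
        errors = (X @ w - y) ** 2    # per-sample squared residuals
        a = update_weights(errors, mu)
    return w, a
```

On synthetic data with one deliberately corrupted label, the corrupted sample's weight drops well below the uniform value 1/n after a few alternations, which is the qualitative behaviour the unified formulation is designed to produce.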
ISBN: | 9781467388511 1467388513 1467388521 9781467388528 |
ISSN: | 1063-6919 |
DOI: | 10.1109/CVPR.2016.159 |