Adaptive spatio-temporal context learning for visual tracking

Bibliographic Details
Published in: The Imaging Science Journal, Vol. 67, No. 3, pp. 136-147
Main Authors: Zhang, Yaqin; Wang, Liejun; Qin, Jiwei
Format: Journal Article
Language: English
Published: Taylor & Francis, 03.04.2019
ISSN: 1368-2199, 1743-131X
DOI: 10.1080/13682199.2019.1567020

Summary: In recent years, the spatio-temporal context (STC) algorithm has attracted the attention of scholars because it makes full use of the information in the target's background. Although the STC algorithm achieves real-time tracking, its tracking capability still needs to be improved when the target is occluded or changes in size. In this paper, we present an adaptive spatio-temporal context learning algorithm for visual tracking (AFSTC). Firstly, in order to describe the appearance of the target accurately, we integrate Histogram of Oriented Gradient (HOG) and Colour-naming (CN) features. Then we use the average difference between two adjacent frames to adjust the learning rate of the model update for adaptive tracking. Finally, we adjust the parameters of the scale update strategy to achieve competitive results in accuracy and robustness. We perform experiments on the Online Tracking Benchmark (OTB) 2015 dataset. Our tracker achieves a 13% relative gain in distance precision compared to the traditional STC algorithm. Moreover, although the speed of our tracker is reduced, it still reaches 129.99 frames per second (FPS) and can still achieve real-time tracking.
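
The adaptive learning-rate step described in the summary can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes an STC-style linear model update H_{t+1} = (1 - rho) * H_t + rho * h_t and uses the mean absolute difference between two adjacent frame patches to damp rho; the function name, the damping rule, and the constants are hypothetical.

    import numpy as np

    def adaptive_stc_update(model, new_estimate, prev_patch, curr_patch,
                            base_rho=0.075, diff_scale=10.0):
        # Illustrative sketch only: STC-style trackers update their context
        # model as H_{t+1} = (1 - rho) * H_t + rho * h_t, where rho is the
        # learning rate. Here the mean absolute difference between two
        # adjacent frame patches scales rho so that a large inter-frame
        # change (e.g. occlusion or abrupt motion) slows the model update.
        # The damping rule and constants are assumptions, not the paper's.
        avg_diff = np.mean(np.abs(curr_patch.astype(np.float64)
                                  - prev_patch.astype(np.float64)))
        rho = base_rho / (1.0 + avg_diff / diff_scale)
        return (1.0 - rho) * model + rho * new_estimate

Under these assumptions, a small inter-frame difference leaves rho near its default, while heavy occlusion or rapid appearance change pushes rho toward zero, so the stored context model changes slowly when the current frame is unreliable.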