Embedding Motion in Model-Based Stochastic Tracking

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 15, No. 11, pp. 3514-3530
Main Authors: Odobez, J.-M., Gatica-Perez, D., Ba, S.O.
Format: Journal Article
Language: English
Published: New York, NY: Institute of Electrical and Electronics Engineers (IEEE), 01.11.2006

Summary: Particle filtering is now established as one of the most popular methods for visual tracking. Within this framework, there are two important considerations. The first is the generic assumption that the observations are temporally independent given the sequence of object states. The second, often made in the literature, is the use of the transition prior as the proposal distribution. In that case, the current observations are not taken into account, so the noise process of this prior must be large enough to handle abrupt trajectory changes. As a result, many particles are either wasted in low-likelihood regions of the state space, resulting in low sampling efficiency, or, more importantly, propagated to distractor regions of the image, resulting in tracking failures. In this paper, we propose to handle both considerations using motion. We first argue that, in general, observations are conditionally correlated, and propose a new model to account for this correlation, allowing for the natural introduction of implicit and/or explicit motion measurements in the likelihood term. Second, explicit motion measurements are used to drive the sampling process towards the most likely regions of the state space. Overall, the proposed model handles abrupt motion changes and filters out visual distractors when tracking objects with generic models based on shape or color distribution. Results were obtained on head-tracking experiments using several sequences with a moving camera and large dynamics. When compared against the Condensation algorithm, they demonstrate the superior tracking performance of our approach.
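The contrast drawn in the abstract (a transition-prior proposal that ignores current observations vs. a proposal driven by explicit motion measurements) can be illustrated with a minimal one-dimensional particle filter. This is only a sketch of the general idea, not the paper's actual model: the state, noise levels, and the `motion_estimate` input (standing in for an explicit motion measurement such as optical flow) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def motion_driven_proposal(particles, motion_estimate, noise_std):
    # Shift particles by an explicit motion measurement before diffusing,
    # so samples land near the likely region of the state space. A plain
    # Condensation-style proposal would omit motion_estimate and rely on
    # a larger noise_std to cover abrupt trajectory changes.
    return particles + motion_estimate + rng.normal(0.0, noise_std, size=particles.shape)

def step(particles, observation, motion_estimate, noise_std=1.0, obs_std=0.5):
    # One filtering step: propose, weight with a Gaussian likelihood
    # around the observation, then resample proportionally to weight.
    proposed = motion_driven_proposal(particles, motion_estimate, noise_std)
    weights = np.exp(-0.5 * ((proposed - observation) / obs_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(proposed), size=len(proposed), p=weights)
    return proposed[idx]

# Toy 1-D tracking run: the target jumps abruptly at t = 10; because the
# (here assumed known) motion estimate enters the proposal, the particle
# cloud follows the jump instead of being stranded behind it.
particles = np.zeros(500)
state = 0.0
for t in range(20):
    motion = 8.0 if t == 10 else 0.5   # abrupt change at t = 10
    state += motion
    observation = state + rng.normal(0.0, 0.5)
    particles = step(particles, observation, motion)

print(abs(particles.mean() - state) < 2.0)
```

With the motion term removed and the same small `noise_std`, the particles fall behind at the jump and the weights collapse, which is exactly the low-sampling-efficiency failure mode the abstract describes.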
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2006.877497