Sequential Kernel Density Approximation and Its Application to Real-Time Visual Tracking

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 7, pp. 1186-1197
Main Authors: Han, Bohyung; Comaniciu, D.; Zhu, Ying; Davis, L. S.
Format: Journal Article
Language: English
Published: Los Alamitos, CA: IEEE Computer Society, 01.07.2008

Summary: Visual features are commonly modeled with probability density functions in computer vision problems, but current methods such as a mixture of Gaussians and kernel density estimation suffer either from a lack of flexibility, by fixing or limiting the number of Gaussian components in the mixture, or from a large memory requirement, by maintaining a nonparametric representation of the density. These problems are aggravated in real-time computer vision applications, since density functions must be updated as new data become available. We present a novel kernel density approximation technique based on the mean-shift mode-finding algorithm and describe an efficient method to sequentially propagate the density modes over time. Although the proposed density representation is memory efficient, as is typical of mixture densities, it inherits the flexibility of nonparametric methods by allowing the number of components to be variable. The accuracy and compactness of the sequential kernel density approximation technique are illustrated by both simulations and experiments. Sequential kernel density approximation is applied to online target appearance modeling for visual tracking, and its performance is demonstrated on a variety of videos.
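
Note: the following is a minimal, illustrative Python sketch of the core idea described in the summary, not the authors' implementation. It runs mean-shift from every kernel centre of a weighted Gaussian KDE, merges the centres that converge to the same mode, and keeps one Gaussian component per mode. The isotropic bandwidth, the tolerances, and the use of the kernel bandwidth in place of the paper's Hessian-based covariance estimate are simplifying assumptions.

import numpy as np

def mean_shift_modes(centers, weights, bandwidth, tol=1e-6, max_iter=500):
    # Run mean-shift from every kernel centre of a weighted Gaussian KDE
    # with a shared isotropic bandwidth; return the converged positions.
    centers = np.asarray(centers, dtype=float)            # (n, d) kernel centres
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    modes = centers.copy()
    for i in range(len(modes)):
        x = modes[i].copy()
        for _ in range(max_iter):
            d2 = np.sum((centers - x) ** 2, axis=1)
            k = weights * np.exp(-0.5 * d2 / bandwidth ** 2)   # kernel responses
            x_new = (k[:, None] * centers).sum(axis=0) / k.sum()
            if np.linalg.norm(x_new - x) < tol:
                x = x_new
                break
            x = x_new
        modes[i] = x
    return modes

def approximate_density(centers, weights, bandwidth, merge_tol=1e-2):
    # Collapse the KDE into one Gaussian component per detected mode.
    # Each component keeps the summed weight of the samples that converged to it;
    # the component covariance is left as the kernel bandwidth here, a
    # simplification of the Hessian-based covariance estimate in the paper.
    modes = mean_shift_modes(centers, weights, bandwidth)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    comp_means, comp_weights = [], []
    for m, wi in zip(modes, w):
        for j, cm in enumerate(comp_means):
            if np.linalg.norm(m - cm) < merge_tol:         # same basin of attraction
                comp_weights[j] += wi
                break
        else:
            comp_means.append(m.copy())
            comp_weights.append(wi)
    return np.array(comp_means), np.array(comp_weights)

# Sequential update (sketch): treat the current compact mixture's means as weighted
# kernel centres, append each new observation with a forgetting-factor weight, and
# re-run the approximation so the number of components can grow or shrink over time.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2.0, 0.3, 50), rng.normal(1.5, 0.4, 50)])[:, None]
    means, wts = approximate_density(data, np.ones(len(data)), bandwidth=0.5)
    print("modes:", means.ravel(), "weights:", wts)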
ISSN: 0162-8828, 1939-3539
DOI: 10.1109/TPAMI.2007.70771