System and process for bootstrap initialization of vision-based tracking systems


Bibliographic Details
Main Author Toyama, Kentaro
Format Patent
Language English
Published 29.06.2004
Edition 7

Summary: The present invention is embodied in a system and process for automatically learning a reliable tracking system. The tracking system is learned by using information produced by an initial object model in combination with an initial tracking function, and a data acquisition function for gathering observations about each image. The initial tracking function probabilistically determines the configuration of one or more target objects in a temporal sequence of images. The observations gathered by the data acquisition function include information that is relevant to parameters desired for a final object model. These relevant observations may include information such as the color, shape, or size of a tracked object, and depend on the parameters necessary to support the final tracking function. A learning function based on a learning method such as neural networks, Bayesian belief networks (BBN), discrimination functions, decision trees, expectation-maximization on mixtures of Gaussians, probability distribution function (PDF) estimation through moment computation, PDF estimation through histograms, etc., then uses the observations and probabilistic target location information to probabilistically learn an object model automatically tailored to specific target objects. The learned object model is then used in combination with the final tracking function to probabilistically locate and track specific target objects in one or more sequential images.
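The abstract names the components only abstractly. As a rough illustration of the bootstrap structure it describes, below is a minimal Python sketch: a crude template match stands in for the initial object model and tracking function, per-window intensity histograms stand in for the gathered observations, and a confidence-weighted histogram average stands in for the learning function (loosely following one of the listed methods, PDF estimation through histograms). All names, parameters, and design choices here are hypothetical; the patent does not prescribe this implementation.

import numpy as np

def initial_tracker(frame, template):
    # Initial tracking function (assumed form): exhaustive sum-of-squared-
    # differences template match; returns the best window location and a
    # pseudo-probabilistic confidence in (0, 1].
    h, w = template.shape
    best, best_err = (0, 0), np.inf
    for y in range(0, frame.shape[0] - h + 1, 4):
        for x in range(0, frame.shape[1] - w + 1, 4):
            err = np.mean((frame[y:y+h, x:x+w] - template) ** 2)
            if err < best_err:
                best_err, best = err, (y, x)
    return best, 1.0 / (1.0 + best_err)

def gather_observation(frame, loc, size):
    # Data acquisition function (assumed form): extract a normalized
    # intensity histogram from the tracked window as the observation.
    y, x = loc
    patch = frame[y:y+size[0], x:x+size[1]]
    hist, _ = np.histogram(patch, bins=16, range=(0, 256), density=True)
    return hist

def learn_object_model(observations, confidences):
    # Learning function (assumed form): histogram-based PDF estimate built
    # as a confidence-weighted average of the gathered observations.
    w = np.asarray(confidences)
    obs = np.asarray(observations)
    model = (w[:, None] * obs).sum(axis=0) / w.sum()
    return model / model.sum()

def final_tracker(frame, model, size):
    # Final tracking function (assumed form): score candidate windows by
    # histogram similarity to the learned model; return the best location.
    best, best_score = (0, 0), -np.inf
    for y in range(0, frame.shape[0] - size[0] + 1, 4):
        for x in range(0, frame.shape[1] - size[1] + 1, 4):
            hist = gather_observation(frame, (y, x), size)
            score = -np.sum((hist - model) ** 2)
            if score > best_score:
                best_score, best = score, (y, x)
    return best

# Bootstrap loop: run the initial tracker over a short sequence, gather a
# weighted observation per frame, learn the specialized model, then hand
# tracking over to the final tracker. Synthetic frames for demonstration.
frames = [np.random.randint(0, 256, (120, 160)).astype(float) for _ in range(10)]
template = frames[0][40:72, 60:92]
obs, confs = [], []
for f in frames:
    loc, conf = initial_tracker(f, template)
    obs.append(gather_observation(f, loc, template.shape))
    confs.append(conf)
model = learn_object_model(obs, confs)
target = final_tracker(frames[-1], model, template.shape)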
Bibliography: Application Number: US20000593628