Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories

Bibliographic Details
Published in: Computer vision and image understanding, Vol. 106, No. 1, pp. 59-70
Main Authors: Fei-Fei, Li; Fergus, Rob; Perona, Pietro
Format: Journal Article
Language: English
Published: Elsevier Inc, 01.04.2007
Summary: Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn incrementally, and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our method makes use of prior information assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.
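
The incremental Bayesian idea in the summary can be illustrated with a minimal sketch (Python; hypothetical names, not the authors' constellation model): for a conjugate model, folding in one observation at a time, using each step's posterior as the next step's prior, reproduces the batch posterior computed from all data at once. Here this is shown for the mean of a 1-D Gaussian with known variance under a Normal prior.

import numpy as np

def batch_posterior(x, mu0, tau0_sq, sigma_sq):
    """Posterior N(mu_n, tau_n^2) over the mean, given all data at once."""
    n = len(x)
    prec = 1.0 / tau0_sq + n / sigma_sq            # posterior precision
    mu_n = (mu0 / tau0_sq + np.sum(x) / sigma_sq) / prec
    return mu_n, 1.0 / prec

def incremental_posterior(x, mu0, tau0_sq, sigma_sq):
    """Same posterior, folding in one observation at a time:
    each step's posterior becomes the next step's prior."""
    mu, tau_sq = mu0, tau0_sq
    for xi in x:
        prec = 1.0 / tau_sq + 1.0 / sigma_sq
        mu = (mu / tau_sq + xi / sigma_sq) / prec
        tau_sq = 1.0 / prec
    return mu, tau_sq

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=5)      # "a few training examples"
print(batch_posterior(data, mu0=0.0, tau0_sq=10.0, sigma_sq=1.0))
print(incremental_posterior(data, mu0=0.0, tau0_sq=10.0, sigma_sq=1.0))

Both calls print the same mean and variance (up to round-off). For conjugate models the two agree exactly; the paper's constellation model relies on approximations, so its incremental and batch learners achieve comparable rather than identical results, with the incremental version processing each new image at roughly constant cost.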
ISSN: 1077-3142
eISSN: 1090-235X
DOI: 10.1016/j.cviu.2005.09.012