Towards total scene understanding: Classification, annotation and segmentation in an automatic framework

Bibliographic Details
Published in 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2036 - 2043
Main Authors Li-Jia Li, Richard Socher, Li Fei-Fei
Format Conference Proceeding
Language English
Published IEEE 01.06.2009

Summary: Given an image, we propose a hierarchical generative model that classifies the overall scene, recognizes and segments each object component, as well as annotates the image with a list of tags. To our knowledge, this is the first model that performs all three tasks in one coherent framework. For instance, a scene of a `polo game' consists of several visual objects such as `human', `horse', `grass', etc. In addition, it can be further annotated with a list of more abstract (e.g. `dusk') or visually less salient (e.g. `saddle') tags. Our generative model jointly explains images through a visual model and a textual model. Visually relevant objects are represented by regions and patches, while visually irrelevant textual annotations are influenced directly by the overall scene class. We propose a fully automatic learning framework that is able to learn robust scene models from noisy Web data such as images and user tags from Flickr.com. We demonstrate the effectiveness of our framework by automatically classifying, annotating and segmenting images from eight classes depicting sport scenes. In all three tasks, our model significantly outperforms state-of-the-art algorithms.
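
As a reading aid, the generative structure described in the summary (a scene class that jointly generates visual objects, rendered as regions and patches, and scene-level text tags) can be sketched as a toy ancestral-sampling routine in Python. This is a minimal illustrative sketch under assumed placeholder names, not the authors' implementation; the scene list, the object/tag tables, and the sample_image function are all invented for illustration.

# Toy sketch of the hierarchical generative process described in the summary:
# a scene class generates visual objects (which in the full model would render
# image regions and patches) and also generates text tags directly.
# All class lists and tables below are made-up placeholders.
import random

SCENES = ["polo", "sailing", "rowing"]

# Objects a scene tends to contain (visual model).
OBJECTS_GIVEN_SCENE = {
    "polo":    ["human", "horse", "grass"],
    "sailing": ["human", "boat", "water"],
    "rowing":  ["human", "boat", "water"],
}

# Tags generated directly from the scene class (textual model), covering
# abstract ("dusk") or visually less salient ("saddle") annotations.
TAGS_GIVEN_SCENE = {
    "polo":    ["saddle", "dusk", "match"],
    "sailing": ["wind", "regatta"],
    "rowing":  ["oar", "race"],
}

def sample_image(rng: random.Random):
    """Ancestral sampling: scene -> objects (regions/patches) and scene -> tags."""
    scene = rng.choice(SCENES)
    objects = rng.sample(OBJECTS_GIVEN_SCENE[scene],
                         k=rng.randint(1, len(OBJECTS_GIVEN_SCENE[scene])))
    tags = rng.sample(TAGS_GIVEN_SCENE[scene],
                      k=rng.randint(1, len(TAGS_GIVEN_SCENE[scene])))
    return {"scene": scene, "objects": objects, "tags": tags}

if __name__ == "__main__":
    print(sample_image(random.Random(0)))

Inference in the paper runs this process in reverse, jointly recovering the scene class, the object segmentation, and the tag list from a given image.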
ISBN: 1424439922, 9781424439928
ISSN: 1063-6919
DOI: 10.1109/CVPR.2009.5206718