Learning Optimal Seeds for Diffusion-Based Salient Object Detection

Bibliographic Details
Published in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2790 - 2797
Main Authors: Lu, Song; Mahadevan, Vijay; Vasconcelos, Nuno
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2014
Summary: In diffusion-based saliency detection, an image is partitioned into superpixels and mapped to a graph, with superpixels as nodes and edge strengths proportional to superpixel similarity. Saliency information is then propagated over the graph using a diffusion process, whose equilibrium state yields the object saliency map. The optimal solution is the product of a propagation matrix and a saliency seed vector that contains a prior saliency assessment. This is obtained from either a bottom-up saliency detector or some heuristics. In this work, we propose a method to learn optimal seeds for object saliency. Two types of features are computed per superpixel: the bottom-up saliency of the superpixel region and a set of mid-level vision features informative of how likely the superpixel is to belong to an object. The combination of features that best discriminates between object and background saliency is then learned, using a large-margin formulation of the discriminant saliency principle. The propagation of the resulting saliency seeds, using a diffusion process, is finally shown to outperform the state of the art on a number of salient object detection datasets.
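
The diffusion step summarized above is simple to sketch. The following Python snippet is a minimal illustration, not the authors' implementation: it assumes per-superpixel feature vectors X, a seed vector s (in the paper, s would come from the learned large-margin combination of bottom-up and mid-level features), a Gaussian affinity, and the common manifold-ranking propagation matrix (I - alpha*S)^-1, which is one standard instance of the diffusion process described here. All names, parameters, and defaults are illustrative assumptions.

    import numpy as np

    def diffuse_saliency(X, s, sigma=0.1, alpha=0.99):
        """Propagate a saliency seed vector over a superpixel graph.

        X     : (n, d) array, one feature vector per superpixel (assumed input)
        s     : (n,) prior saliency seed vector
        sigma : affinity bandwidth (illustrative default)
        alpha : diffusion strength in (0, 1)
        """
        # Edge strengths proportional to superpixel similarity (Gaussian affinity).
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)

        # Symmetrically normalized affinity S = D^{-1/2} W D^{-1/2}.
        deg = np.maximum(W.sum(axis=1), 1e-12)
        Dinv_sqrt = np.diag(1.0 / np.sqrt(deg))
        S = Dinv_sqrt @ W @ Dinv_sqrt

        # Equilibrium of the diffusion: the saliency map is the product of a
        # propagation matrix and the seed vector, y* = (I - alpha*S)^{-1} s.
        return np.linalg.inv(np.eye(len(s)) - alpha * S) @ s

    # Toy usage: 5 superpixels with 3-D features and a heuristic seed.
    X = np.random.rand(5, 3)
    s = np.array([0.1, 0.9, 0.2, 0.8, 0.1])
    saliency_map = diffuse_saliency(X, s)

The closed-form inverse makes the "equilibrium state" explicit; for large graphs the same fixed point is usually reached by iterating y <- alpha*S*y + s until convergence rather than inverting the matrix.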
ISSN: 1063-6919
EISSN: 2575-7075
DOI: 10.1109/CVPR.2014.357