Non-local spatial redundancy reduction for bottom-up saliency estimation

Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, Vol. 23, No. 7, pp. 1158–1166
Main Authors: Wu, Jinjian; Qi, Fei; Shi, Guangming; Lu, Yongheng
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.10.2012

Summary: ► Bottom-up visual saliency is estimated by spatial redundancy reduction. ► A non-local scheme is proposed to measure spatial redundancy. ► The model is adaptive to both natural and conceptual images.
In this paper we present a redundancy-reduction-based approach to computational bottom-up visual saliency estimation. In contrast to conventional methods, our approach determines saliency by filtering out redundant content rather than measuring its significance. To analyze the redundancy of self-repeating spatial structures, we propose a procedure based on non-local self-similarity. The resulting redundancy coefficient is used to compensate the Shannon entropy, which is computed from statistics of pixel intensities, to generate the bottom-up saliency map of the visual input. Experimental results on three publicly available databases demonstrate that the proposed model is highly consistent with subjective visual attention.
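The abstract does not spell out the exact formulation, so the following is only a minimal Python sketch of the stated idea: local Shannon entropy of pixel intensities, attenuated by a non-local self-similarity redundancy coefficient. All names (local_entropy, nonlocal_redundancy, saliency), the parameters (patch=5, search=15, h=12.0), and the combination entropy × (1 − redundancy) are illustrative assumptions, not the paper's actual definitions.

```python
import numpy as np

def local_entropy(img, win=9):
    """Shannon entropy of the intensity histogram in a sliding window.
    img: 2-D uint8 grayscale array."""
    r = win // 2
    pad = np.pad(img, r, mode="reflect")
    ent = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            block = pad[y:y + win, x:x + win]
            p = np.bincount(block.ravel(), minlength=256) / block.size
            p = p[p > 0]
            ent[y, x] = -(p * np.log2(p)).sum()
    return ent

def nonlocal_redundancy(img, patch=5, search=15, h=12.0):
    """Per-pixel redundancy coefficient: the strongest self-similarity of
    the pixel's patch to any other patch in its search window
    (near 0 = unique structure, near 1 = self-repeating, hence redundant).
    Patch/search sizes and the kernel width h are illustrative guesses."""
    r, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), s + r, mode="reflect")
    red = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            cy, cx = y + s + r, x + s + r
            ref = pad[cy - r:cy + r + 1, cx - r:cx + r + 1]
            best = 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    if dy == 0 and dx == 0:
                        continue  # skip comparing the patch to itself
                    cand = pad[cy + dy - r:cy + dy + r + 1,
                               cx + dx - r:cx + dx + r + 1]
                    d2 = ((ref - cand) ** 2).mean()
                    best = max(best, np.exp(-d2 / (h * h)))
            red[y, x] = best
    return red

def saliency(img):
    """Entropy compensated by redundancy: self-repeating regions are
    suppressed even when their intensity statistics look informative."""
    sal = local_entropy(img) * (1.0 - nonlocal_redundancy(img))
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

# Usage (keep the image small; the nested loops are O(N * search^2 * patch^2)):
img = (np.random.rand(48, 48) * 255).astype(np.uint8)
smap = saliency(img)  # values in [0, 1]; higher = more salient
```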
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2012.07.010