Deep Co-Saliency Detection via Stacked Autoencoder-Enabled Fusion and Self-Trained CNNs

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 22, No. 4, pp. 1016-1031
Main Authors: Tsai, Chung-Chi; Hsu, Kuang-Jui; Lin, Yen-Yu; Qian, Xiaoning; Chuang, Yung-Yu
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.04.2020

Summary: Image co-saliency detection via fusion-based or learning-based methods faces cross-cutting issues. Fusion-based methods often combine saliency proposals using a majority voting rule; their performance therefore depends heavily on the quality and coherence of the individual proposals. Learning-based methods typically require ground-truth annotations for training, which are not available for co-saliency detection. In this work, we present a two-stage approach that addresses these issues jointly. In the first stage, an unsupervised deep learning model based on a stacked autoencoder (SAE) is proposed to evaluate the quality of saliency proposals. It employs latent representations of image foregrounds and auto-encodes foreground consistency and foreground-background distinctiveness in a discriminative way. The resultant model, SAE-enabled fusion (SAEF), can combine multiple saliency proposals into a more reliable saliency map. In the second stage, motivated by the fact that fusion often leads to over-smoothed saliency maps, we develop self-trained convolutional neural networks (STCNN) to alleviate this negative effect. STCNN takes the saliency maps produced by SAEF as inputs and propagates information from regions of high confidence to those of low confidence. During propagation, feature representations are distilled, resulting in sharper and better co-saliency maps. Our approach is comprehensively evaluated on three benchmarks, MSRC, iCoseg, and Cosal2015, and performs favorably against the state of the art. In addition, we demonstrate that our method can be applied to object co-segmentation and object co-localization, achieving state-of-the-art performance in both applications.
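
The sketch below is a minimal, hypothetical PyTorch rendering of the two-stage idea in the abstract: an SAE whose reconstruction error scores each saliency proposal before fusion, followed by a small refinement CNN. The module sizes, the exp(-error) weighting, and the names StackedAutoencoder, fuse_proposals, and RefineCNN are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only; the SAE here is untrained and the weighting scheme is assumed.
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Auto-encodes foreground-pooled features; a proposal whose foreground
    feature reconstructs poorly is treated as a low-quality proposal."""
    def __init__(self, dim=512, hidden=(256, 128)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def fuse_proposals(features, proposals, sae):
    """Weight each proposal by how well the SAE reconstructs its
    foreground-pooled feature, then take the weighted average.
    features:  (C, H, W) image feature map, C must match sae's input dim
    proposals: (K, H, W) saliency proposals in [0, 1]
    """
    C, H, W = features.shape
    flat = features.reshape(C, -1)                     # (C, H*W)
    weights = []
    for p in proposals:                                # (H, W)
        w = p.reshape(1, -1)                           # (1, H*W)
        fg = (flat * w).sum(dim=1) / (w.sum() + 1e-6)  # foreground-pooled feature (C,)
        rec = sae(fg.unsqueeze(0)).squeeze(0)
        err = torch.norm(rec - fg)
        weights.append(torch.exp(-err))                # lower error -> higher weight
    weights = torch.stack(weights)
    weights = weights / weights.sum()
    fused = (proposals * weights.view(-1, 1, 1)).sum(dim=0)
    return fused.clamp(0, 1)


class RefineCNN(nn.Module):
    """Small fully convolutional net standing in for the self-trained CNN stage:
    it sharpens an over-smoothed fused map using the RGB image as guidance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, image, fused_map):
        x = torch.cat([image, fused_map.unsqueeze(0)], dim=0)  # (4, H, W)
        return self.net(x.unsqueeze(0)).squeeze(0).squeeze(0)  # (H, W)


if __name__ == "__main__":
    feats = torch.randn(512, 32, 32)   # dummy backbone features
    props = torch.rand(5, 32, 32)      # five dummy saliency proposals
    sae = StackedAutoencoder()
    fused = fuse_proposals(feats, props, sae)
    refined = RefineCNN()(torch.rand(3, 32, 32), fused)
    print(fused.shape, refined.shape)  # torch.Size([32, 32]) for both

In the paper's setting the second stage is trained on its own high-confidence outputs (self-training), which the toy RefineCNN above does not attempt to reproduce.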
ISSN: 1520-9210
eISSN: 1941-0077
DOI: 10.1109/TMM.2019.2936803