SalGAN: Visual Saliency Prediction with Generative Adversarial Networks

Bibliographic Details
Published in arXiv.org
Main Authors Pan, Junting, Canton Ferrer, Cristian, McGuinness, Kevin, O'Connor, Noel E, Torres, Jordi, Sayrol, Elisa, Giro-i-Nieto, Xavier
Format Paper
Language English
Published Ithaca: Cornell University Library, arXiv.org, 01.07.2018
Summary: We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples. The first stage of the network consists of a generator model whose weights are learned by back-propagation computed from a binary cross entropy (BCE) loss over downsampled versions of the saliency maps. The resulting prediction is processed by a discriminator network trained to solve a binary classification task between the saliency maps generated by the generative stage and the ground-truth ones. Our experiments show that adversarial training reaches state-of-the-art performance across different metrics when combined with a widely used loss function such as BCE. Our results can be reproduced with the source code and trained models available at https://imatge-upc.github.io/saliency-salgan-2017/.
ISSN: 2331-8422
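
As the summary describes, SalGAN combines a pixel-wise BCE content loss on the generator with an adversarial loss from a discriminator that classifies (image, saliency map) pairs as ground truth or generated. The following is a minimal PyTorch-style sketch of that two-term training loop, assuming toy network definitions, layer sizes, and an alpha weight that are illustrative stand-ins rather than the authors' released implementation (linked above); for simplicity the BCE here is applied at full resolution instead of over the downsampled maps mentioned in the summary.

```python
# Minimal sketch of a SalGAN-style training objective in PyTorch.
# The tiny Generator/Discriminator below are illustrative stand-ins, NOT the
# paper's VGG-based architectures, and `alpha` is an assumed weight.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder producing a per-pixel saliency map in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

class Discriminator(nn.Module):
    """Toy classifier: (image, saliency map) -> probability the map is ground truth."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, image, saliency):
        return self.net(torch.cat([image, saliency], dim=1))

bce = nn.BCELoss()
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=3e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=3e-4)
alpha = 0.005  # balances the content (BCE) term against the adversarial term (assumed value)

def train_step(image, gt_map):
    n = image.size(0)
    real, fake_label = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator update: ground-truth maps -> 1, generated maps -> 0.
    fake = G(image).detach()
    loss_d = bce(D(image, gt_map), real) + bce(D(image, fake), fake_label)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator update: pixel-wise BCE against the ground truth,
    #    plus an adversarial term that rewards fooling the discriminator.
    pred = G(image)
    loss_g = alpha * bce(pred, gt_map) + bce(D(image, pred), real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Smoke test with random tensors standing in for an image batch and its saliency maps.
images = torch.rand(2, 3, 64, 64)
maps = torch.rand(2, 1, 64, 64)
print(train_step(images, maps))
```

The design point this sketch illustrates is the two-term generator loss: the BCE term keeps predictions close to the ground-truth maps, while the adversarial term pushes them toward maps the discriminator cannot distinguish from real ones.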