A spatiotemporal model for video saliency detection


Bibliographic Details
Published in: 2016 International Image Processing, Applications and Systems (IPAS), pp. 1 - 6
Main Authors: Kalboussi, Rahma; Abdellaoui, Mehrez; Douik, Ali
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2016

Summary: Visual saliency approaches aim to detect regions that attract human attention more than others. To find salient objects in a video shot, we start from the hypothesis that moving objects attract attention more than others and are considered salient. In this paper, a novel video saliency model is proposed. The saliency map is the result of combining a dynamic map and a static map. For each pair of video frames, a dense optical flow is computed using polynomial expansion. This dense optical flow is represented in the RGB color space and yields a dynamic map. A static map is then generated from the spatial edges of each frame. The static map and dynamic map are fused into a single map. Finally, to generate the saliency map, we use the Gestalt principle of figure-ground segregation, which assumes that connected regions are grouped together and belong to the foreground. The proposed method is evaluated on the SegTrackV2 and Fukuchi datasets and shows good performance compared to the state of the art, including three recent saliency methods.
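The static/dynamic fusion idea in the summary can be sketched in a few lines. The following is a minimal numpy-only illustration, not the authors' implementation: the paper computes the dynamic map with Farnebäck-style polynomial-expansion dense optical flow and applies Gestalt figure-ground segregation, whereas this sketch substitutes simple frame differencing for motion, a gradient-magnitude edge proxy for the static map, and a plain weighted mean for fusion. All function names and the `alpha` weight are illustrative assumptions.

```python
import numpy as np

def static_map(frame):
    # Static map from spatial edges; gradient magnitude stands in
    # for a proper edge detector (illustrative choice).
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def dynamic_map(prev_frame, frame):
    # Dynamic map: the paper uses dense optical flow via polynomial
    # expansion; plain frame differencing is a crude motion proxy here.
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff / diff.max() if diff.max() > 0 else diff

def saliency_map(prev_frame, frame, alpha=0.5):
    # Fuse static and dynamic maps into one map (simple weighted mean;
    # the paper's fusion and figure-ground step are not reproduced).
    s = static_map(frame)
    d = dynamic_map(prev_frame, frame)
    return alpha * s + (1.0 - alpha) * d

# Toy example: a bright square that shifts by one pixel between frames.
f0 = np.zeros((32, 32)); f0[8:16, 8:16] = 255.0
f1 = np.zeros((32, 32)); f1[9:17, 9:17] = 255.0
sal = saliency_map(f0, f1)
print(sal.shape)
```

Both component maps are normalized to [0, 1] before fusion, so the resulting map is directly comparable across frames; in the actual method the per-frame maps would instead come from the RGB-encoded flow field and the frame's spatial edges.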
DOI: 10.1109/IPAS.2016.7880113