A spatiotemporal model for video saliency detection
Published in | 2016 International Image Processing, Applications and Systems (IPAS), pp. 1 - 6
---|---
Main Authors | , ,
Format | Conference Proceeding
Language | English
Published | IEEE, 01.11.2016
Summary | Visual saliency approaches aim to detect regions that attract human attention more than others. To find salient objects in a video shot, we start from the hypothesis that moving objects attract more attention than others and are therefore considered salient. In this paper, a novel video saliency model is proposed. The saliency map results from the combination of a dynamic map and a static map. For each pair of video frames, a dense optical flow is computed using polynomial expansion. This dense optical flow is represented in the RGB color space and yields a dynamic map. A static map is then generated from the spatial edges of each frame. The static and dynamic maps are fused into a single map. Finally, to generate the saliency map we use the Gestalt principle of figure-ground segregation, which assumes that connected regions are grouped together and belong to the foreground. The proposed method is evaluated on the SegTrackV2 and Fukuchi datasets and shows good performance compared to the state of the art, including three recent saliency methods.
---|---
DOI | 10.1109/IPAS.2016.7880113