Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video
Published in | Journal of electronic imaging, Vol. 25, no. 1, p. 013008
---|---
Main Authors | , , ,
Format | Journal Article
Language | English
Published | Society of Photo-Optical Instrumentation Engineers, 01.01.2016
Subjects |
Summary: Visual attention models (VAMs) predict which image or video regions are most likely to attract human attention. Although saliency detection is well explored for two-dimensional (2-D) image and video content, only a few attempts have been made to design three-dimensional (3-D) saliency prediction models. Newly proposed 3-D VAMs must be validated on large-scale video saliency datasets that include eye-tracking data. Several such eye-tracking datasets are publicly available for 2-D image and video content; for 3-D, however, the research community still lacks large-scale video saliency datasets for validating 3-D VAMs. We introduce a large-scale dataset of eye-tracking data collected from 24 subjects who watched 61 stereoscopic 3-D videos (and their 2-D versions) in a free-viewing test. We evaluate the performance of existing saliency detection methods on the proposed dataset. In addition, we created an online benchmark for validating existing 2-D and 3-D VAMs and facilitating the addition of new VAMs; the benchmark currently contains 50 different VAMs.
ISSN: 1017-9909; 1560-229X
DOI: 10.1117/1.JEI.25.1.013008
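
A benchmark of this kind scores a model by comparing its predicted saliency maps against the recorded fixations. As a hedged illustration of how such a comparison is typically made (the paper's exact evaluation protocol is not reproduced here), the sketch below computes the Normalized Scanpath Saliency (NSS), one standard fixation-based metric; the array shapes, variable names, and the choice of NSS are assumptions.

```python
# Minimal sketch of Normalized Scanpath Saliency (NSS), a standard metric
# for validating visual attention models against eye-tracking fixations.
# NOTE: array shapes, names, and the metric choice are assumptions for
# illustration; this is not the paper's specified evaluation protocol.
import numpy as np

def nss(saliency_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Mean z-scored saliency value at fixated pixel locations.

    saliency_map : 2-D float array of predicted saliency.
    fixation_map : 2-D binary array, 1 where a subject fixated.
    """
    s = saliency_map.astype(np.float64)
    s = (s - s.mean()) / (s.std() + 1e-12)   # z-score the prediction
    return float(s[fixation_map.astype(bool)].mean())

# Example usage with synthetic data: a higher NSS means the model's
# high-saliency regions coincide better with where subjects looked.
rng = np.random.default_rng(0)
pred = rng.random((270, 480))                # hypothetical saliency map
fix = np.zeros((270, 480), dtype=np.uint8)
fix[rng.integers(0, 270, 50), rng.integers(0, 480, 50)] = 1  # 50 fixations
print(f"NSS = {nss(pred, fix):.3f}")
```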