Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video

Bibliographic Details
Published in Journal of Electronic Imaging, Vol. 25, No. 1, p. 013008
Main Authors Banitalebi-Dehkordi, Amin; Nasiopoulos, Eleni; Pourazad, Mahsa T.; Nasiopoulos, Panos
Format Journal Article
Language English
Published Society of Photo-Optical Instrumentation Engineers 01.01.2016
ISSN1017-9909
EISSN 1560-229X
DOI 10.1117/1.JEI.25.1.013008

Abstract Visual attention models (VAMs) predict the locations in an image or video that are most likely to attract human attention. Although saliency detection is well explored for two-dimensional (2-D) image and video content, only a few attempts have been made to design three-dimensional (3-D) saliency prediction models. Newly proposed 3-D VAMs have to be validated over large-scale video saliency datasets that also contain eye-tracking ground truth. Several such eye-tracking datasets are publicly available for 2-D image and video content; for 3-D, however, the research community still lacks large-scale video saliency datasets for validating different 3-D VAMs. We introduce a large-scale dataset of eye-tracking data collected from 24 subjects who watched 61 stereoscopic 3-D videos (as well as 2-D versions of those) in a free-viewing test. We evaluate the performance of existing saliency detection methods over the proposed dataset. In addition, we created an online benchmark for validating the performance of existing 2-D and 3-D VAMs and facilitating the addition of new VAMs. Our benchmark currently contains 50 different VAMs.
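Benchmarks of this kind score each VAM's predicted saliency map against the recorded fixations using standard metrics. The sketch below is a generic illustration, not the authors' evaluation code: it implements two metrics that are common in the saliency literature, normalized scanpath saliency (NSS) and the Judd variant of AUC. The function names, the 1080x1920 frame size, and the toy data are all illustrative assumptions.

```python
# Minimal sketch (assumed workflow, not the paper's tooling): score a
# predicted saliency map against recorded fixation coordinates.
import numpy as np

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean of the z-scored saliency map
    sampled at fixated pixels. Higher is better; 0 is chance level.
    `saliency` is an HxW float array; `fixations` is an (N, 2) array of
    (row, col) pixel coordinates."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    rows, cols = fixations[:, 0], fixations[:, 1]
    return s[rows, cols].mean()

def auc_judd(saliency, fixations):
    """AUC (Judd variant): fixated pixels are positives, all remaining
    pixels are negatives; sweep a threshold over the saliency values at
    fixations and integrate the resulting ROC curve."""
    s = saliency.ravel()
    fix_vals = saliency[fixations[:, 0], fixations[:, 1]]
    n_fix, n_pix = len(fix_vals), s.size
    thresholds = np.sort(fix_vals)[::-1]
    # True-positive and false-positive rates at each threshold.
    tpr = [(fix_vals >= t).sum() / n_fix for t in thresholds]
    fpr = [((s >= t).sum() - (fix_vals >= t).sum()) / (n_pix - n_fix)
           for t in thresholds]
    # Anchor the curve at (0, 0) and (1, 1), then use the trapezoid rule.
    tpr = np.concatenate(([0.0], tpr, [1.0]))
    fpr = np.concatenate(([0.0], fpr, [1.0]))
    return np.trapz(tpr, fpr)

# Toy usage: a random "prediction" scored against 50 random fixations.
rng = np.random.default_rng(0)
sal = rng.random((1080, 1920))
fix = np.column_stack([rng.integers(0, 1080, 50), rng.integers(0, 1920, 50)])
print(f"NSS: {nss(sal, fix):.3f}, AUC: {auc_judd(sal, fix):.3f}")
```

A random map scores near NSS 0 and AUC 0.5, which is why these metrics are useful baselines when ranking the 2-D and 3-D VAMs in a benchmark of this sort.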
Author_xml – sequence: 1
  givenname: Amin
  surname: Banitalebi-Dehkordi
  fullname: Banitalebi-Dehkordi, Amin
  email: dehkordi@ece.ubc.ca
  organization: University of British Columbia, Electrical and Computer Engineering Department, Vancouver, BC V6T 1Z4, Canada
– sequence: 2
  givenname: Eleni
  surname: Nasiopoulos
  fullname: Nasiopoulos, Eleni
  organization: University of British Columbia, Department of Psychology, Vancouver, BC V6T 1Z4, Canada
– sequence: 3
  givenname: Mahsa T
  surname: Pourazad
  fullname: Pourazad, Mahsa T
  organization: TELUS Communications Inc., Vancouver, BC V6B 8N9, Canada
– sequence: 4
  givenname: Panos
  surname: Nasiopoulos
  fullname: Nasiopoulos, Panos
  organization: University of British Columbia, Institute for Computing, Information, and Cognitive Systems, Vancouver, BC V6T 1Z4, Canada
ContentType Journal Article
Copyright 2016 SPIE and IS&T
DOI 10.1117/1.JEI.25.1.013008
Discipline Applied Sciences
Visual Arts
Engineering
EISSN 1560-229X
EndPage 013008
GrantInformation_xml – fundername: Natural Sciences and Engineering Research Council of Canada
  grantid: STPGP 447339-13
ISSN 1017-9909
IsPeerReviewed true
IsScholarly true
Issue 1
Keywords visual attention modeling
eye tracking
saliency prediction
stereoscopic video
three-dimensional video
Language English
PageCount 1
PublicationCentury 2000
PublicationDate 2016-01-01
PublicationDateYYYYMMDD 2016-01-01
PublicationDecade 2010
PublicationTitle Journal of Electronic Imaging
PublicationTitleAlternate J. Electron. Imaging
PublicationYear 2016
Publisher Society of Photo-Optical Instrumentation Engineers
Publisher_xml – name: Society of Photo-Optical Instrumentation Engineers
SecondaryResourceType review_article
StartPage 013008
Title Benchmark three-dimensional eye-tracking dataset for visual saliency prediction on stereoscopic three-dimensional video
URI https://doi.org/10.1117/1.JEI.25.1.013008
Volume 25