Assessing Visual Quality of Omnidirectional Videos

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 12, pp. 3516–3530
Authors: Mai Xu, Chen Li, Zhenzhong Chen, Zulin Wang, Zhenyu Guan
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 1 December 2019
ISSN: 1051-8215
EISSN: 1558-2205
DOI: 10.1109/TCSVT.2018.2886277

Abstract: In contrast with traditional videos, omnidirectional videos enable spherical viewing direction with support for head-mounted displays, providing an interactive and immersive experience. Unfortunately, to the best of our knowledge, there are only a few visual quality assessment (VQA) methods, either subjective or objective, for omnidirectional video coding. This paper proposes both subjective and objective methods for assessing the quality loss in encoding an omnidirectional video. Specifically, we first present a new database, which includes the viewing direction data from several subjects watching omnidirectional video sequences. Then, from our database, we find a high consistency in viewing directions across different subjects. The viewing directions are normally distributed in the center of the front regions, but they sometimes fall into other regions, related to the video content. Given this finding, we present a subjective VQA method for measuring the difference mean opinion score (DMOS) of the whole and regional omnidirectional video, in terms of overall DMOS and vectorized DMOS, respectively. Moreover, we propose two objective VQA methods for the encoded omnidirectional video, in light of the human perception characteristics of the omnidirectional video. One method weights the distortion of pixels with regard to their distances to the center of front regions, which considers human preference in a panorama. The other method predicts viewing directions according to the video content, and then the predicted viewing directions are leveraged to allocate weights to the distortion of each pixel in our objective VQA method. Finally, our experimental results verify that both the subjective and objective methods proposed in this paper advance the state-of-the-art VQA for omnidirectional videos.
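The first objective method described in the abstract weights each pixel's distortion by its distance to the center of the front region. As a rough illustration only (this is not the paper's actual metric: the Gaussian falloff, the `sigma_deg` value, and the function name are all assumptions for the sketch), a front-weighted PSNR over an equirectangular frame might be computed as:

```python
import numpy as np

def front_weighted_psnr(ref, dist, sigma_deg=40.0, peak=255.0):
    """Toy weighted PSNR for equirectangular frames.

    Pixels are weighted by a Gaussian falloff of their great-circle
    angular distance to the front-center direction (lon=0, lat=0).
    The Gaussian form and sigma are illustrative assumptions; the
    paper derives its weights from observed viewing-direction data.
    """
    h, w = ref.shape[:2]
    # Per-pixel longitude in [-180, 180) and latitude in (-90, 90),
    # sampled at pixel centers of the equirectangular projection.
    lon = (np.arange(w) + 0.5) / w * 360.0 - 180.0
    lat = 90.0 - (np.arange(h) + 0.5) / h * 180.0
    lon_g, lat_g = np.meshgrid(np.radians(lon), np.radians(lat))
    # Great-circle angle (degrees) to the front direction (0, 0).
    ang = np.degrees(np.arccos(np.clip(np.cos(lat_g) * np.cos(lon_g),
                                       -1.0, 1.0)))
    wgt = np.exp(-0.5 * (ang / sigma_deg) ** 2)
    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    if err.ndim == 3:          # average squared error over color channels
        err = err.mean(axis=2)
    wmse = np.sum(wgt * err) / np.sum(wgt)
    return 10.0 * np.log10(peak ** 2 / max(wmse, 1e-12))
```

Under this sketch, an identical-magnitude coding error near the front center lowers the score more than the same error near a pole or the back of the panorama, which is the qualitative behavior the abstract describes.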
Author details:
1. Mai Xu (ORCID 0000-0002-0277-3301; maixu@buaa.edu.cn), School of Electronic and Information Engineering, Beihang University, Beijing, China
2. Chen Li (ORCID 0000-0002-9085-2922; jnlichen123@buaa.edu.cn), School of Electronic and Information Engineering, Beihang University, Beijing, China
3. Zhenzhong Chen (ORCID 0000-0002-7882-1066; zzchen@whu.edu.cn), School of Computer Science, Wuhan University, Wuhan, China
4. Zulin Wang (wzulin@buaa.edu.cn), School of Electronic and Information Engineering, Beihang University, Beijing, China
5. Zhenyu Guan (ORCID 0000-0002-3959-338X; guanzhenyu@buaa.edu.cn), School of Electronic and Information Engineering, Beihang University, Beijing, China
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2019
Funding: National Natural Science Foundation of China (grants 61876013, 61573037, 61471022); Fok Ying Tung Education Foundation (grant 151061)
Subjects: Distortion; Helmet mounted displays; Measurement; Methods; Omnidirectional video coding; Pixels; Quality assessment; Two dimensional displays; Video coding; Videos; Viewing; viewing direction; visual quality assessment (VQA); Visualization
Online access:
https://ieeexplore.ieee.org/document/8572733
https://www.proquest.com/docview/2325191051