Assessing Visual Quality of Omnidirectional Videos
| Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 29, No. 12, pp. 3516-3530 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2019 |
Summary: In contrast with traditional videos, omnidirectional videos enable spherical viewing directions with support for head-mounted displays, providing an interactive and immersive experience. Unfortunately, to the best of our knowledge, there are only a few visual quality assessment (VQA) methods, either subjective or objective, for omnidirectional video coding. This paper proposes both subjective and objective methods for assessing the quality loss incurred in encoding an omnidirectional video. Specifically, we first present a new database that includes viewing-direction data from several subjects watching omnidirectional video sequences. From this database, we find a high consistency in viewing directions across subjects: the viewing directions are normally distributed around the center of the front regions, but they sometimes fall into other regions, depending on the video content. Given this finding, we present a subjective VQA method that measures the difference mean opinion score (DMOS) of the whole omnidirectional video and of its regions, in terms of overall DMOS and vectorized DMOS, respectively. Moreover, we propose two objective VQA methods for encoded omnidirectional video, in light of the human perception characteristics of omnidirectional video. One method weights the distortion of each pixel by its distance to the center of the front regions, reflecting human viewing preference in a panorama. The other method predicts viewing directions from the video content and then uses the predicted viewing directions to allocate weights to the distortion of each pixel. Finally, our experimental results verify that both the subjective and objective methods proposed in this paper advance the state of the art in VQA for omnidirectional videos.
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2018.2886277
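
The first objective method described in the summary weights each pixel's distortion by its distance to the center of the front regions. The sketch below illustrates that general idea for an equirectangular frame, assuming a Gaussian fall-off with great-circle (angular) distance and a hypothetical spread parameter `sigma_deg`; the exact weighting function, parameters, and metric used in the paper are not given in this record.

```python
import numpy as np


def front_center_weights(height, width, sigma_deg=40.0):
    """Illustrative weight map for an equirectangular frame.

    Weights decay with the angular distance of each pixel from the
    center of the front region (longitude 0, latitude 0). The Gaussian
    shape and sigma_deg are assumptions, not values from the paper.
    """
    # Longitude in [-pi, pi) and latitude in [-pi/2, pi/2] at pixel centers.
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    lon_grid, lat_grid = np.meshgrid(lon, lat)

    # Great-circle distance to the front-center direction (lat=0, lon=0).
    ang = np.arccos(np.clip(np.cos(lat_grid) * np.cos(lon_grid), -1.0, 1.0))

    sigma = np.deg2rad(sigma_deg)
    return np.exp(-0.5 * (ang / sigma) ** 2)


def weighted_psnr(ref, dist, weights, peak=255.0):
    """Weighted PSNR between reference and distorted equirectangular luma frames."""
    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    wmse = np.sum(weights * err) / np.sum(weights)
    return 10.0 * np.log10(peak ** 2 / wmse)


if __name__ == "__main__":
    # Synthetic example: a random frame and a noisy "encoded" copy.
    h, w = 960, 1920
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(h, w)).astype(np.uint8)
    dist = np.clip(ref + rng.normal(0.0, 5.0, size=(h, w)), 0, 255).astype(np.uint8)

    w_map = front_center_weights(h, w)
    print(f"weighted PSNR: {weighted_psnr(ref, dist, w_map):.2f} dB")
```

In this sketch the weight map is a fixed front-center prior; the second objective method in the summary would instead derive the weight map from viewing directions predicted from the video content, with the weighted-distortion computation itself unchanged.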