Blind Omnidirectional Image Quality Assessment With Viewport Oriented Graph Convolutional Networks

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 31, No. 5, pp. 1724-1737
Main Authors: Xu, Jiahua; Zhou, Wei; Chen, Zhibo
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.05.2021

Summary: Quality assessment of omnidirectional images has become increasingly urgent due to the rapid growth of virtual reality applications. Unlike traditional 2D images and videos, omnidirectional contents provide consumers with freely changeable viewports and a larger field of view covering the 360°×180° spherical surface, which makes the objective quality assessment of omnidirectional images more challenging. In this paper, motivated by the characteristics of the human visual system (HVS) and the viewing process of omnidirectional contents, we propose a novel Viewport oriented Graph Convolution Network (VGCN) for blind omnidirectional image quality assessment (IQA). Generally, observers give a subjective rating of a 360-degree image after viewing and aggregating information from different viewports while browsing the spherical scenery. Therefore, to model the mutual dependency of viewports in the omnidirectional image, we build a spatial viewport graph. Specifically, the graph nodes are first defined as viewports selected for their higher probability of being seen, inspired by the observation that the HVS is more sensitive to structural information. Then, these nodes are connected by spatial relations to capture interactions among them. Finally, reasoning on the proposed graph is performed via graph convolutional networks. Moreover, we simultaneously obtain global quality from the entire omnidirectional image without viewport sampling to boost performance in accordance with the viewing experience. Experimental results demonstrate that our proposed model outperforms state-of-the-art full-reference and no-reference IQA metrics on two public omnidirectional IQA databases.
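The graph-reasoning step described in the summary (viewport nodes connected by spatial relations, propagated through graph convolutions) can be sketched with a standard symmetrically normalized GCN layer. This is a minimal illustrative sketch, not the authors' VGCN architecture: the adjacency matrix, feature dimensions, weights, and the final pooling below are all hypothetical placeholders.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One generic graph-convolution layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W),
    where A is the viewport adjacency and H holds per-viewport features."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Hypothetical setup: 4 sampled viewports with 8-dim feature descriptors each.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))               # node features (viewport descriptors)
# Toy spatial adjacency: 1 where two viewports overlap on the sphere (assumed).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
W = rng.standard_normal((8, 8)) * 0.1         # layer weights (random stand-in)
H1 = gcn_layer(H, A, W)                       # features after one round of reasoning
score = float(H1.mean())                      # toy pooling into a scalar quality proxy
```

In the paper's setting, the random features and weights would be replaced by learned viewport representations and trained layer parameters, and the pooled output would feed a quality-regression head alongside the global-quality branch.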
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2020.3015186