Spherical DNNs and Their Applications in 360 Images and Videos

Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44, No. 10, pp. 7235-7252
Main Authors: Xu, Yanyu; Zhang, Ziheng; Gao, Shenghua
Format: Journal Article
Language: English
Published: United States, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2022
Summary: Spherical images and videos, as typical non-Euclidean data, are usually stored as 2D panoramas obtained through an equirectangular projection, which is neither equal-area nor conformal. The distortion caused by this projection limits the performance of vanilla Deep Neural Networks (DNNs) designed for traditional Euclidean data. In this paper, we design a novel Spherical Deep Neural Network (DNN) to handle the distortion caused by the equirectangular projection. Specifically, we customize a set of components, including a spherical convolution, a spherical pooling, a spherical ConvLSTM cell, and a spherical MSE loss, as replacements for their counterparts in vanilla DNNs for spherical data. The core idea is to make the conventional operations, which behave identically across feature patches in vanilla DNNs, vary across patches so that they adjust to the distortion induced by the varying sampling rate among different feature patches. We demonstrate the effectiveness of our Spherical DNNs for saliency detection and gaze estimation in 360 videos. For saliency detection, we take the temporal coherence of an observer's viewing process into consideration and propose a Spherical U-Net together with a Spherical ConvLSTM to predict the saliency map for each frame sequentially. For gaze prediction, we propose a Spherical Encoder Module to extract spatial panoramic features, which we combine with the gaze trajectory feature extracted by an LSTM to predict future gaze. To facilitate the study of saliency detection in 360 videos, we further construct a large-scale 360 video saliency detection dataset consisting of 104 360 videos viewed by 20+ human subjects. Comprehensive experiments validate the effectiveness of our proposed Spherical DNNs for 360 handwritten digit classification, sport classification, saliency detection, and gaze tracking in 360 videos.
We also visualize the regions contributing to the classification decisions of our Spherical DNNs via the Grad-CAM technique, and the results show that our Spherical DNNs consistently leverage reasonable and important regions for decision making, regardless of the large distortions. All code and the dataset are available at https://github.com/svip-lab/SphericalDNNs.
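The core idea above, that an operation's footprint must vary with position to compensate for the equirectangular sampling rate, can be illustrated with a minimal sketch. The helper below is hypothetical (not the paper's actual spherical convolution; see the authors' repository for that): it only shows how the horizontal extent of a kernel would have to grow toward the poles, where one unit of spherical angle spans 1/cos(latitude) times as many pixels as at the equator.

```python
import math

def kernel_width_at_latitude(base_width: int, lat_rad: float) -> int:
    """Horizontal kernel extent (in pixels) needed to cover a fixed
    spherical angle at a given latitude in an equirectangular image.

    Near the poles one degree of longitude spans many more pixels than
    at the equator, so the sampling grid widens by 1/cos(latitude).
    Illustrative sketch only, not the paper's actual operator.
    """
    stretch = 1.0 / max(math.cos(lat_rad), 1e-6)  # avoid blow-up at the poles
    return int(round(base_width * stretch))

# At the equator a 3-pixel-wide kernel covers the target angle;
# at 60 degrees latitude the same spherical extent spans twice as many pixels.
print(kernel_width_at_latitude(3, 0.0))
print(kernel_width_at_latitude(3, math.pi / 3))
```

A vanilla convolution applies the same fixed-size kernel everywhere, which is exactly the identical behavior the paper's spherical components replace with position-dependent sampling.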
ISSN: 0162-8828; 1939-3539; 2160-9292
DOI:10.1109/TPAMI.2021.3100259