Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes

Bibliographic Details
Published in: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12087-12096
Main Authors: Zhong, Yiran; Ji, Pan; Wang, Jianyuan; Dai, Yuchao; Li, Hongdong
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2019
Summary: Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-net-based methods rely on image brightness consistency and local smoothness constraints to train the networks. Their performance degrades in regions with repetitive textures or occlusions. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method which incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem encountered in dynamic scenes where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmarking datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep-learning methods.
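
The geometric idea behind the epipolar constraint on flow can be illustrated with a small, self-contained sketch. This is not the authors' released code: it simply assumes a fundamental matrix F relating frame 1 to frame 2 is already available (e.g., estimated from correspondences) and evaluates a standard Sampson epipolar residual at flow-displaced pixels, the kind of per-pixel geometric penalty that such a method can add to photometric and smoothness losses.

# Minimal sketch (hypothetical helper, not the paper's implementation):
# a pixel x in frame 1 mapped by flow to x' = x + f should satisfy the
# epipolar constraint x'^T F x = 0; the Sampson distance normalizes the
# algebraic error and is returned per pixel.
import numpy as np

def epipolar_flow_residual(F, pts, flow):
    """Per-pixel Sampson epipolar residual.

    F    : (3, 3) fundamental matrix from frame 1 to frame 2
    pts  : (N, 2) pixel coordinates (u, v) in frame 1
    flow : (N, 2) flow vectors (du, dv) at those pixels
    """
    n = pts.shape[0]
    x1 = np.hstack([pts, np.ones((n, 1))])         # homogeneous points in frame 1
    x2 = np.hstack([pts + flow, np.ones((n, 1))])  # flow-displaced points in frame 2

    Fx1 = x1 @ F.T                                 # epipolar lines F x1 in frame 2
    Ftx2 = x2 @ F                                  # epipolar lines F^T x2 in frame 1
    algebraic = np.sum(x2 * Fx1, axis=1)           # x2^T F x1, zero if the constraint holds

    denom = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return algebraic**2 / np.maximum(denom, 1e-12)

Averaging this residual over the image yields a scalar geometric loss that can be combined with the usual unsupervised terms; handling dynamic scenes, where a single F does not explain all pixels, is exactly what motivates the paper's low-rank and union-of-subspaces constraints.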
ISSN: 2575-7075
DOI: 10.1109/CVPR.2019.01237