MonoDiffusion: Self-Supervised Monocular Depth Estimation Using Diffusion Model

Bibliographic Details
Published in: arXiv.org
Main Authors: Shao, Shuwei; Pei, Zhongcai; Chen, Weihai; Sun, Dingchi; Chen, Peter C Y; Li, Zhengguo
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 13.11.2023

Summary: Over the past few years, self-supervised monocular depth estimation, which does not depend on ground truth during the training phase, has received widespread attention. Most efforts focus on designing different types of network architectures and loss functions, or on handling edge cases such as occlusion and dynamic objects. In this work, we introduce a novel self-supervised depth estimation framework, dubbed MonoDiffusion, by formulating depth estimation as an iterative denoising process. Because depth ground truth is unavailable in the training phase, we develop a pseudo ground-truth diffusion process to assist the diffusion in MonoDiffusion. The pseudo ground-truth diffusion gradually adds noise to the depth map generated by a pre-trained teacher model. Moreover, the teacher model allows applying a distillation loss to guide the denoised depth. Further, we develop a masked visual condition mechanism to enhance the denoising ability of the model. Extensive experiments are conducted on the KITTI and Make3D datasets, and the proposed MonoDiffusion outperforms prior state-of-the-art competitors. The source code will be available at https://github.com/ShuweiShao/MonoDiffusion.
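The abstract describes the core idea only at a high level; the following is a minimal, hypothetical sketch (not the authors' released code) of how a pseudo ground-truth diffusion step might look: a DDPM-style forward process adds noise to a teacher-predicted depth map, and a student denoiser conditioned on the input image is trained to recover it under a distillation loss. All names (student, teacher, the noise schedule, lambda_distill) are illustrative assumptions, and the photometric self-supervision term used in the actual framework is omitted.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)                # assumed linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta_t)

def q_sample(depth_teacher, t, noise):
    """Pseudo ground-truth diffusion: add noise to the teacher's depth map at step t."""
    a_bar = alphas_cumprod.to(depth_teacher.device)[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * depth_teacher + (1.0 - a_bar).sqrt() * noise

def training_step(student, teacher, image, lambda_distill=1.0):
    # Teacher provides the pseudo ground truth; no real depth labels are used.
    with torch.no_grad():
        depth_teacher = teacher(image)

    b = image.shape[0]
    t = torch.randint(0, T, (b,), device=image.device)
    noise = torch.randn_like(depth_teacher)
    noisy_depth = q_sample(depth_teacher, t, noise)

    # Student denoises the noisy depth, conditioned on the input image.
    depth_pred = student(noisy_depth, t, image)

    # Distillation loss: keep the denoised depth close to the teacher's prediction.
    loss_distill = F.l1_loss(depth_pred, depth_teacher)

    # The full objective would also include the usual self-supervised
    # photometric reprojection loss; omitted here for brevity.
    return lambda_distill * loss_distill
```

At inference, the student would start from pure noise and iteratively denoise, conditioned on the image, to produce the final depth map; the masked visual condition mechanism mentioned in the abstract would modify how that image conditioning is formed, which this sketch does not attempt to reproduce.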
ISSN: 2331-8422