Domain-Transferred Synthetic Data Generation for Improving Monocular Depth Estimation
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 02.05.2024 |
| DOI | 10.48550/arxiv.2405.01113 |
Summary:

A major obstacle to the development of effective monocular depth estimation algorithms is the difficulty of obtaining high-quality depth data that corresponds to collected RGB images. Collecting this data is time-consuming and costly, and even data collected by modern sensors has limited range or resolution and is subject to inconsistencies and noise. To combat this, we propose a method of data generation in simulation using 3D synthetic environments and CycleGAN domain transfer. We compare this method of data generation to the popular NYU Depth V2 dataset by training a depth estimation model based on the DenseDepth structure using different training sets of real and simulated data. We evaluate the performance of the models on newly collected images and LiDAR depth data from a Husky robot to verify the generalizability of the approach, and show that GAN-transformed data can serve as an effective alternative to real-world data, particularly in depth estimation.
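The summary describes a pipeline in which synthetic RGB renders are passed through a CycleGAN sim-to-real generator and paired with the simulator's ground-truth depth maps to form a training set. The sketch below illustrates that pairing step only; `sim_to_real` is a hypothetical stand-in for a trained CycleGAN generator (the paper's actual network, data formats, and environments are not specified here):

```python
import numpy as np

def sim_to_real(rgb: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained CycleGAN generator G: sim -> real.
    A real implementation would run a learned network; here we only perturb
    the render slightly to mimic a domain-transferred image."""
    noise = np.random.default_rng(0).normal(0.0, 0.02, rgb.shape)
    return np.clip(rgb + noise, 0.0, 1.0)

def build_training_pairs(renders, depths):
    """Pair GAN-transformed synthetic RGB with simulator ground-truth depth.
    Depth labels come directly from the renderer, so they are dense and
    noise-free, unlike sensor-collected depth."""
    return [(sim_to_real(rgb), depth) for rgb, depth in zip(renders, depths)]

# Tiny example: two 4x4 synthetic "renders" with perfect depth maps.
renders = [np.full((4, 4, 3), 0.5), np.full((4, 4, 3), 0.8)]
depths = [np.ones((4, 4)), 2.0 * np.ones((4, 4))]
pairs = build_training_pairs(renders, depths)
```

The key design point the abstract relies on is that only the RGB images need domain transfer; the depth labels are untouched simulator ground truth, so the depth estimation model trains on realistic-looking inputs with exact supervision.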