InSpaceType: Dataset and Benchmark for Reconsidering Cross-Space Type Performance in Indoor Monocular Depth
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 24.08.2024 |
Subjects | |
Summary: Indoor monocular depth estimation supports home automation applications such as robot navigation and AR/VR perception of the surroundings. Most previous methods experiment primarily on the NYUv2 dataset and evaluate only overall performance. However, their robustness and generalization to diverse, unseen indoor space types (categories) remain unexplored, and practitioners often observe degraded performance when a released pretrained model is applied to custom data or less frequent space types. This paper studies the common but easily overlooked factor of space type and characterizes a model's performance variance across space types. We present the InSpaceType Dataset, a high-quality RGBD dataset for general indoor scenes, and benchmark 13 recent state-of-the-art methods on InSpaceType. Our examination shows that most of them suffer from a performance imbalance between head and tail space types, and the imbalance is even more severe for some top-performing methods. The work reveals and analyzes this underlying bias in detail for transparency and robustness. We extend the analysis to a total of 4 datasets and discuss best practices in synthetic-data curation for training indoor monocular depth models. Further, we conduct a dataset ablation to identify the key factors for generalization. This work marks the first in-depth investigation of performance variance across space types and, more importantly, releases useful tools, including datasets and code, for closely examining pretrained depth models. Data and code: https://depthcomputation.github.io/DepthPublic/
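
To illustrate the per-space-type evaluation the abstract describes, the sketch below groups standard monocular-depth metrics (absolute relative error and the δ < 1.25 accuracy, both widely used in indoor depth benchmarks) by space type so head/tail imbalance becomes visible. This is a minimal illustration, not the authors' released code; the function names and the `(space_type, pred, gt)` sample format are assumptions made for the example.

```python
# Illustrative sketch (not the paper's released code): per-space-type depth metrics.
import numpy as np
from collections import defaultdict

def abs_rel(pred, gt):
    """Mean absolute relative error over valid ground-truth pixels."""
    return float(np.mean(np.abs(pred - gt) / gt))

def delta1(pred, gt):
    """Fraction of pixels with max(pred/gt, gt/pred) < 1.25."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < 1.25))

def evaluate_by_space_type(samples):
    """samples: iterable of (space_type, pred_depth, gt_depth) arrays restricted
    to valid-depth pixels. Returns {space_type: {"AbsRel": ..., "delta1": ...}}."""
    per_type = defaultdict(list)
    for space_type, pred, gt in samples:
        per_type[space_type].append((abs_rel(pred, gt), delta1(pred, gt)))
    return {
        t: {"AbsRel": float(np.mean([m[0] for m in ms])),
            "delta1": float(np.mean([m[1] for m in ms]))}
        for t, ms in per_type.items()
    }
```

Comparing the resulting per-type numbers (e.g., kitchens or bathrooms versus frequent living-room scenes) against the overall average is the kind of head-versus-tail comparison the benchmark performs.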
DOI: 10.48550/arXiv.2408.13708