A Real World Dataset for Multi-view 3D Reconstruction

Bibliographic Details
Main Authors: Shrestha, Rakesh; Hu, Siqi; Gou, Minghao; Liu, Ziyuan; Tan, Ping
Format: Journal Article
Language: English
Published: 21.03.2022

Summary: We present a dataset of 998 3D models of everyday tabletop objects along with their 847,000 real-world RGB and depth images. Accurate annotations of camera poses and object poses for each image are performed in a semi-automated fashion to facilitate the use of the dataset for myriad 3D applications such as shape reconstruction, object pose estimation, and shape retrieval. We primarily focus on learned multi-view 3D reconstruction due to the lack of an appropriate real-world benchmark for the task and demonstrate that our dataset can fill that gap. The entire annotated dataset, along with the source code for the annotation tools and evaluation baselines, is available at http://www.ocrtoc.org/3d-reconstruction.html.
DOI: 10.48550/arxiv.2203.11397