RenderOcc: Vision-Centric 3D Occupancy Prediction with 2D Rendering Supervision

Bibliographic Details
Published in: arXiv.org
Main Authors: Pan, Mingjie; Liu, Jiaming; Zhang, Renrui; Huang, Peixiang; Li, Xiaoqi; Wang, Bing; Xie, Hongwei; Liu, Li; Zhang, Shanghang
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 04.03.2024

Summary: 3D occupancy prediction, which quantizes a 3D scene into grid cells with semantic labels, holds significant promise for robot perception and autonomous driving. Recent works mainly rely on complete occupancy labels in 3D voxel space for supervision. However, the expensive annotation process and sometimes ambiguous labels have severely constrained the usability and scalability of 3D occupancy models. To address this, we present RenderOcc, a novel paradigm for training 3D occupancy models using only 2D labels. Specifically, we extract a NeRF-style 3D volume representation from multi-view images and employ volume rendering techniques to generate 2D renderings, thus enabling direct 3D supervision from 2D semantic and depth labels. Additionally, we introduce an Auxiliary Ray method to tackle the issue of sparse viewpoints in autonomous driving scenarios, leveraging sequential frames to construct comprehensive 2D renderings for each object. To the best of our knowledge, RenderOcc is the first attempt to train multi-view 3D occupancy models using only 2D labels, reducing the dependence on costly 3D occupancy annotations. Extensive experiments demonstrate that RenderOcc achieves performance comparable to models fully supervised with 3D labels, underscoring the significance of this approach for real-world applications.
ISSN: 2331-8422
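
The core supervision signal described in the summary is NeRF-style volume rendering: density and per-class semantics predicted in the voxel volume are composited along camera rays into 2D depth and semantic maps that can be compared against 2D labels. The sketch below illustrates that compositing step for a single ray in PyTorch; it is a minimal illustration of the general technique, not the authors' implementation, and the tensor shapes, class count, and loss weighting are illustrative assumptions.

```python
# Minimal sketch of volume-rendering supervision for occupancy (assumed shapes/losses).
import torch
import torch.nn.functional as F

def render_ray(density, sem_logits, t_vals):
    """Composite per-sample density and semantics along one ray.

    density:    (N,)   non-negative volume density at each sample
    sem_logits: (N, C) per-class logits at each sample
    t_vals:     (N,)   increasing ray depths of the samples
    """
    # Distances between adjacent samples; pad the last interval.
    deltas = torch.cat([t_vals[1:] - t_vals[:-1], t_vals.new_tensor([1e10])])
    alpha = 1.0 - torch.exp(-density * deltas)                 # per-sample opacity
    trans = torch.cumprod(
        torch.cat([alpha.new_ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                          # accumulated transmittance
    weights = alpha * trans                                    # contribution of each sample
    depth = (weights * t_vals).sum()                           # rendered depth for this pixel
    sem = (weights[:, None] * sem_logits.softmax(dim=-1)).sum(dim=0)  # rendered class probs
    return depth, sem, weights

# Toy usage: supervise one pixel with a 2D depth value and a 2D semantic label.
t_vals = torch.linspace(0.5, 40.0, 64)
density = torch.rand(64, requires_grad=True)                   # stand-in for predicted density
sem_logits = torch.randn(64, 17, requires_grad=True)           # e.g. 17 occupancy classes (assumed)
depth_pred, sem_pred, _ = render_ray(density, sem_logits, t_vals)
loss = F.l1_loss(depth_pred, torch.tensor(12.0)) + \
       F.nll_loss(sem_pred.clamp_min(1e-6).log()[None], torch.tensor([3]))
loss.backward()
```

Because the 2D depth and semantic losses back-propagate through the compositing weights into the per-sample density and semantics, the 3D occupancy field can be optimized without any 3D annotations, which is the premise of the paper.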