Improving 3D Occupancy Prediction through Class-balancing Loss and Multi-scale Representation
Main Authors | |
Format | Journal Article |
Language | English |
Published | 25.05.2024 |
Summary: | 3D environment recognition is essential for autonomous driving systems, as
autonomous vehicles require a comprehensive understanding of surrounding
scenes. Recently, the predominant way to formulate this real-world problem has
been 3D occupancy prediction, which predicts the occupancy states and semantic
labels of all voxels in 3D space and thereby enhances perception capability.
Bird's-Eye-View (BEV)-based perception has achieved state-of-the-art (SOTA)
performance on this task. Nonetheless, this architecture fails to represent
BEV features at multiple scales. In this paper, inspired by the success of UNet
in semantic segmentation tasks, we introduce a novel UNet-like Multi-scale
Occupancy Head module to address this issue. Furthermore, we propose a
class-balancing loss to compensate for rare classes in the dataset.
Experimental results on the nuScenes 3D occupancy challenge dataset show the
superiority of our proposed approach over baseline and SOTA methods. |
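The summary describes the class-balancing loss only at a high level. A minimal sketch of one common realization, inverse-frequency weighted cross-entropy over voxel labels, is shown below; the function names and the exact weighting scheme are assumptions for illustration, not the paper's published formulation:

```python
import numpy as np

def class_balanced_weights(voxel_labels, num_classes, eps=1e-6):
    """Inverse-frequency class weights, normalized to mean 1.

    Rare classes (e.g. sparsely occurring semantic labels in the
    voxel grid) receive larger weights than frequent ones.
    """
    counts = np.bincount(voxel_labels.ravel(), minlength=num_classes).astype(float)
    freqs = counts / max(counts.sum(), 1.0)
    weights = 1.0 / (freqs + eps)          # rare class -> large weight
    return weights / weights.mean()        # normalize so mean weight is 1

def weighted_cross_entropy(logits, labels, weights):
    """Softmax cross-entropy where each voxel's loss is scaled by its class weight.

    logits: (N, C) raw scores per voxel, labels: (N,) class indices.
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(labels.size), labels]       # per-voxel NLL
    return float((weights[labels] * nll).mean())
```

With four voxels of class 0 and one of class 1, `class_balanced_weights` assigns class 1 four times the weight of class 0, so errors on the rare class contribute proportionally more to the loss.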
DOI: | 10.48550/arxiv.2405.16099 |