Robust 3D Object Detection Based on Point Feature Enhancement in Driving Scenes
Published in | 2024 IEEE Intelligent Vehicles Symposium (IV), pp. 2791-2798 |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 02.06.2024 |
Summary: | Object detection in complex scenes and long-distance detection of small objects remain challenging problems in autonomous driving perception. This article proposes a novel method that uses 2D image detection results to enhance point cloud semantic and positional features (point feature enhancement, PFE). First, the 2D image detection boxes are projected to generate 3D frustums, and non-object points such as ground and background are removed by filtering. Then, the class labels from the 2D detector provide semantic features for the point cloud, improving the recognition of small objects with sparse points. Moreover, by projecting the 3D point cloud onto the 2D image and defining a positional feature from the distance between each projected point and the center of its 2D detection box, the discriminative ability of the point cloud in complex scenes is further improved. Experimental results show that the proposed method significantly improves the performance of existing LiDAR detection models in complex scenes of the KITTI and NuScenes datasets, and achieves state-of-the-art detection accuracy on long-distance small objects. |
ISSN: | 2642-7214 |
DOI: | 10.1109/IV55156.2024.10588490 |
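The summary above describes three steps: frustum filtering from 2D boxes, attaching the 2D class as a semantic feature, and attaching a positional feature based on the distance from each projected point to the box center. The following is a minimal sketch of that idea, not the authors' implementation; the function names, array shapes, one-hot class encoding, and normalization by the box diagonal are all assumptions for illustration.

```python
import numpy as np

NUM_CLASSES = 3  # e.g. car, pedestrian, cyclist (assumed class set)

def project_to_image(points_lidar, T_cam_lidar, K):
    """Project N x 3 LiDAR points to pixel coordinates using an assumed
    4x4 LiDAR-to-camera extrinsic T_cam_lidar and 3x3 intrinsic K."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]      # points in camera frame
    in_front = pts_cam[:, 2] > 0                    # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                     # perspective division
    return uv, in_front

def enhance_points(points_lidar, box_2d, class_id, T_cam_lidar, K):
    """Return the points inside the 2D box's frustum, augmented with a one-hot
    class (semantic) feature and a distance-to-box-center (positional) feature."""
    u_min, v_min, u_max, v_max = box_2d
    uv, in_front = project_to_image(points_lidar, T_cam_lidar, K)

    # Frustum filtering: keep points whose projections land inside the 2D box.
    inside = (in_front
              & (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max)
              & (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    pts, uv = points_lidar[inside], uv[inside]

    # Semantic feature: one-hot class label taken from the 2D detector.
    semantic = np.zeros((pts.shape[0], NUM_CLASSES))
    semantic[:, class_id] = 1.0

    # Positional feature: distance from each projected point to the box center,
    # normalized by the box diagonal (an assumed normalization choice).
    center = np.array([(u_min + u_max) / 2.0, (v_min + v_max) / 2.0])
    diag = np.hypot(u_max - u_min, v_max - v_min)
    positional = np.linalg.norm(uv - center, axis=1, keepdims=True) / diag

    return np.hstack([pts, semantic, positional])   # N x (3 + NUM_CLASSES + 1)
```

In a full pipeline, such per-point semantic and positional features would be concatenated with the raw LiDAR inputs before being passed to an existing 3D detector; the specific feature encoding and normalization used in the paper may differ from this sketch.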