Influence of Camera-LiDAR Configuration on 3D Object Detection for Autonomous Driving

Bibliographic Details
Published in: arXiv.org
Main Authors: Li, Ye; Hu, Hanjiang; Liu, Zuxin; Xu, Xiaohao; Huang, Xiaonan; Zhao, Ding
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 02.03.2024

Summary: Cameras and LiDARs are both important sensors for autonomous driving, playing critical roles in 3D object detection. Camera-LiDAR fusion has become a prevalent solution for robust and accurate driving perception. In contrast to the vast majority of existing work, which focuses on improving 3D object detection performance through cross-modal schemes, deep learning algorithms, and training tricks, we devote attention to the impact of sensor configurations on the performance of learning-based methods. To achieve this, we propose a unified information-theoretic surrogate metric for camera and LiDAR evaluation based on the proposed sensor perception model. We also design an accelerated, high-quality framework for data acquisition, model training, and performance evaluation that works with the CARLA simulator. To show the correlation between detection performance and our surrogate metric, we conduct experiments using several camera-LiDAR placements and parameters inspired by self-driving companies and research institutions. Extensive experimental results of representative algorithms on the nuScenes dataset validate the effectiveness of our surrogate metric, demonstrating that sensor configurations significantly affect point-cloud-image fusion-based detection models and can account for up to a 30% discrepancy in average precision.
ISSN: 2331-8422
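
Note: The summary mentions a CARLA-based framework for data acquisition under different camera-LiDAR configurations. The sketch below is only a rough illustration of how such a configuration can be expressed with the standard CARLA Python API; it is not the authors' released framework, and the sensor names, mounting transforms, and attribute values are hypothetical, not values taken from the paper.

# Minimal sketch (assumed setup): spawn one RGB camera and one LiDAR on an ego
# vehicle in CARLA with explicit, tunable placement and parameter settings.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle at the first available spawn point.
vehicle_bp = blueprint_library.filter('vehicle.tesla.model3')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Camera configuration: resolution and field of view are tunable parameters.
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute('image_size_x', '1600')
camera_bp.set_attribute('image_size_y', '900')
camera_bp.set_attribute('fov', '90')
camera_tf = carla.Transform(carla.Location(x=1.5, z=1.6))  # hypothetical windshield mount
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)

# LiDAR configuration: channels, range, and rotation frequency shape the point cloud.
lidar_bp = blueprint_library.find('sensor.lidar.ray_cast')
lidar_bp.set_attribute('channels', '32')
lidar_bp.set_attribute('range', '100.0')
lidar_bp.set_attribute('rotation_frequency', '20')
lidar_bp.set_attribute('points_per_second', '600000')
lidar_tf = carla.Transform(carla.Location(x=0.0, z=2.0))  # hypothetical roof mount
lidar = world.spawn_actor(lidar_bp, lidar_tf, attach_to=vehicle)

# Callbacks that would hand frames and sweeps to a data-collection pipeline.
camera.listen(lambda image: image.save_to_disk('_out/cam/%06d.png' % image.frame))
lidar.listen(lambda scan: scan.save_to_disk('_out/lidar/%06d.ply' % scan.frame))

In a setup like this, the blueprint attributes (resolution, FOV, channel count, range) and the mounting transforms are the kinds of camera-LiDAR configuration parameters whose influence on detection accuracy the paper studies.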