A LiDAR Multi-Object Detection Algorithm for Autonomous Driving

Bibliographic Details
Published in: Applied Sciences, Vol. 13, No. 23, p. 12747
Main Authors: Wang, Shuqi; Chen, Meng
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.12.2023

Summary: Three-dimensional object detection is the core of an autonomous driving perception system: it detects and analyzes targets around the vehicle to obtain their sizes, shapes, and categories, providing reliable input for driving decisions. To improve the detection and localization accuracy of multiple object types, such as surrounding vehicles and pedestrians, in autonomous driving scenarios, a three-dimensional object detection algorithm based on the channel attention mechanism, ECA Modules-PointPillars, is proposed, building on the PointPillars fast object detection network. First, the improved algorithm uses point cloud pillarization features to convert the three-dimensional point cloud into a two-dimensional pseudo-image. Then, the 2D backbone network used for feature extraction is combined with Efficient Channel Attention (ECA) modules to enhance the positional feature information in the pseudo-image and suppress irrelevant feature information such as background noise. Finally, the single-shot multibox detector (SSD) algorithm completes the 3D object detection task. The experimental results show that the improved algorithm raises mAP by 3.84% in BEV mode and 4.04% in 3D mode compared to PointPillars, by 4.64% and 5.89% compared to F-PointNet, by 11.78% and 14.19% compared to VoxelNet, and by 9.47% and 6.55% compared to SECOND, demonstrating the effectiveness and reliability of the improved algorithm in autonomous driving scenarios.
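The ECA step described in the summary can be sketched as follows: a per-channel global average pooling "squeeze" over the pseudo-image, a 1-D convolution across channels with an adaptively sized kernel, and a sigmoid gate that reweights each channel. This is a minimal NumPy illustration, not the authors' implementation; the adaptive kernel-size formula follows the original ECA-Net paper, and the uniform averaging kernel stands in for the learned convolution weights.

```python
import numpy as np

def eca_weights(x, gamma=2, b=1):
    """Compute ECA-style per-channel attention weights for a (C, H, W) map.

    The kernel size k = |log2(C)/gamma + b/gamma|, rounded to an odd
    number, follows ECA-Net; the uniform kernel below is an illustrative
    stand-in for the learned 1-D convolution.
    """
    C = x.shape[0]
    # Adaptive kernel size from the channel count, forced odd
    t = int(abs(np.log2(C) / gamma + b / gamma))
    k = t if t % 2 else t + 1
    # Squeeze: global average pooling per channel -> shape (C,)
    pooled = x.mean(axis=(1, 2))
    # Excite: 1-D convolution across the channel dimension
    kernel = np.full(k, 1.0 / k)
    mixed = np.convolve(pooled, kernel, mode="same")
    # Sigmoid gate -> per-channel weights in (0, 1)
    return 1.0 / (1.0 + np.exp(-mixed))

def apply_eca(x):
    """Reweight the channels of a (C, H, W) pseudo-image by ECA weights."""
    w = eca_weights(x)           # shape (C,)
    return x * w[:, None, None]  # broadcast over spatial dimensions
```

Informative channels receive weights near 1 and pass through largely unchanged, while channels dominated by background noise are attenuated, which is the "enhancement/weakening" effect the summary attributes to the ECA modules.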
ISSN:2076-3417
DOI:10.3390/app132312747