Multi-feature enhancement based on sparse networks for single-stage 3D object detection

Bibliographic Details
Published in: Alexandria Engineering Journal, Vol. 111, pp. 123–135
Main Authors: Ke, Zunwang; Lin, Chenyu; Zhang, Tao; Jia, Tingting; Du, Minghua; Wang, Gang; Zhang, Yugui
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.01.2025
Summary: In the field of autonomous driving, the accuracy and real-time requirements for 3D object detection continue to rise, and they bear directly on the commercialization and market adoption of autonomous vehicles. Although pillar-based encoding is efficient enough for onboard systems, it falls short in accuracy and in suppressing false positives. In this paper, we examine how to address the high false-positive rate and low accuracy of existing methods. First, a MAP coding module is introduced to improve on previous point cloud feature encoding modules, allowing fine-grained features to be extracted efficiently from point cloud data. Second, we introduce a novel sparse dual attention (SDA) mechanism that filters out irrelevant details during feature extraction, improving the focus and efficiency of information extraction. Finally, to address the information loss inherent in purely local feature extraction, a local and global fusion module (CTGC) is introduced. We demonstrate the efficiency and accuracy of our method through rigorous experiments on diverse datasets; analysis of the results shows that it yields accurate and robust detections. Code will be available at https://github.com/lcy199905/MyOpenPCDet.git.
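The abstract only sketches the SDA module at a high level; the following is a minimal illustrative dual-attention gate in PyTorch, assuming the pillar encoder produces a dense BEV feature map of shape (B, C, H, W). It is a sketch of the general channel-plus-spatial attention idea used to suppress irrelevant activations, not the authors' actual SDA implementation (which is sparse; see their repository for the real code). The class name DualAttentionGate and all hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn


class DualAttentionGate(nn.Module):
    """Illustrative channel + spatial attention over a pillar BEV feature map.

    NOTE: This is NOT the paper's SDA module. It is a generic dense
    dual-attention sketch showing how uninformative activations in a
    (B, C, H, W) pillar feature map can be down-weighted before the
    detection head.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-excite each channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels, score each BEV cell.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)            # reweight channels, (B, C, H, W)
        avg_map = x.mean(dim=1, keepdim=True)  # per-cell mean, (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)  # per-cell max,  (B, 1, H, W)
        gate = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * gate                        # suppress low-evidence cells


if __name__ == "__main__":
    bev = torch.randn(2, 64, 200, 176)  # a typical pillar BEV map shape
    out = DualAttentionGate(64)(bev)
    print(out.shape)                    # torch.Size([2, 64, 200, 176])
```

In a sparse variant such as the paper presumably uses, the same two gating signals would be computed only over occupied pillars rather than the full dense grid, which is what keeps the attention cost compatible with onboard real-time budgets.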
ISSN: 1110-0168
DOI: 10.1016/j.aej.2024.10.061