Boost 3-D Object Detection via Point Clouds Segmentation and Fused 3-D GIoU-L₁ Loss
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 33, No. 2, pp. 762-773 |
---|---|
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | United States: IEEE, 01.02.2022 |
Summary: | 3-D object detection is crucial for many real-world applications and has attracted considerable research attention. Beyond 2-D object detection, 3-D object detection usually needs to extract appearance, depth, position, and orientation information from light detection and ranging (LiDAR) and camera sensors. However, due to the additional degrees of freedom and vertices, existing detection methods that directly extend from 2-D to 3-D still face several challenges, such as an explosive increase in the number of anchors and inefficient or hard-to-optimize objectives. To this end, we present a fast segmentation method for 3-D point clouds that reduces the number of anchors, which can largely decrease the computing cost. Moreover, taking advantage of the 3-D generalized Intersection over Union (GIoU) and L₁ losses, we propose a fused loss to facilitate the optimization of 3-D object detection. A series of experiments shows that the proposed method effectively alleviates the abovementioned issues. |
---|---|
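The fused loss described in the abstract combines a GIoU term with an L₁ term on the box parameters. As a minimal sketch of the fusion idea only, the following illustrates it on axis-aligned 2-D boxes; the paper's loss operates on rotated 3-D boxes, and the function names and the `weight` blending parameter here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): fused GIoU + L1 loss on
# axis-aligned 2-D boxes given as (x1, y1, x2, y2). The paper applies the
# same fusion to rotated 3-D boxes.

def giou(box_a, box_b):
    """Generalized IoU of two axis-aligned boxes; ranges over (-1, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C; GIoU penalizes empty space inside C.
    c_area = ((max(ax2, bx2) - min(ax1, bx1)) *
              (max(ay2, by2) - min(ay1, by1)))
    return iou - (c_area - union) / c_area

def fused_giou_l1_loss(pred, target, weight=1.0):
    """Fused regression loss: (1 - GIoU) plus a weighted mean-L1 term."""
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    return (1.0 - giou(pred, target)) + weight * l1
```

The GIoU term keeps a useful gradient even when predicted and ground-truth boxes do not overlap (where plain IoU is flat at zero), while the L₁ term stabilizes regression of the individual box parameters.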
ISSN: | 2162-237X 2162-2388 |
DOI: | 10.1109/TNNLS.2020.3028964 |