3D-MiniNet: Learning a 2D Representation From Point Clouds for Fast and Efficient 3D LIDAR Semantic Segmentation
Published in: IEEE Robotics and Automation Letters, Vol. 5, No. 4, pp. 5432–5439
Main Authors: , , ,
Format: Journal Article
Language: English
Published: Piscataway: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2020
Summary: LIDAR semantic segmentation is an essential task that provides 3D semantic information about the environment to robots. Fast and efficient semantic segmentation methods are needed to meet the strong computational and temporal restrictions of many real-world robotic applications. This work presents 3D-MiniNet, a novel approach for LIDAR semantic segmentation that combines 3D and 2D learning layers. It first learns a 2D representation from the raw points through a novel projection that extracts local and global information from the 3D data. This representation is fed to an efficient 2D Fully Convolutional Neural Network (FCNN) that produces a 2D semantic segmentation. These 2D semantic labels are re-projected back to the 3D space and enhanced through a post-processing module. The main novelty of the approach lies in the projection learning module. A detailed ablation study shows how each component contributes to the final performance of 3D-MiniNet. The approach is validated on well-known public benchmarks (SemanticKITTI and KITTI), where 3D-MiniNet achieves state-of-the-art results while being faster and more parameter-efficient than previous methods.
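The summary describes a three-stage pipeline: project the point cloud to a 2D representation, segment it with an efficient 2D FCNN, and re-project the 2D labels back onto the 3D points. The paper's projection module is learned; the sketch below only illustrates the conventional spherical range-image projection and label re-projection that frame such pipelines. The image resolution, field-of-view values, and the stand-in segmentation function are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the project -> segment -> re-project pipeline framing.
# Assumed parameters; not the learned projection module from the paper.
import numpy as np

H, W = 64, 512                                          # assumed range-image size
FOV_UP, FOV_DOWN = np.radians(3.0), np.radians(-25.0)   # assumed vertical FOV

def spherical_projection(points):
    """Map Nx3 LIDAR points to (row, col) pixels of an H x W range image."""
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])        # azimuth in [-pi, pi]
    pitch = np.arcsin(points[:, 2] / depth)             # elevation angle
    col = 0.5 * (1.0 - yaw / np.pi) * W                 # azimuth -> column
    row = (1.0 - (pitch - FOV_DOWN) / (FOV_UP - FOV_DOWN)) * H
    col = np.clip(np.floor(col), 0, W - 1).astype(np.int64)
    row = np.clip(np.floor(row), 0, H - 1).astype(np.int64)
    return row, col, depth

def segment_points(points, run_2d_fcnn):
    """Project -> 2D semantic segmentation -> re-project labels to 3D."""
    row, col, depth = spherical_projection(points)
    range_image = np.zeros((H, W), dtype=np.float32)
    order = np.argsort(depth)[::-1]                     # write far points first
    range_image[row[order], col[order]] = depth[order]  # so near ones win
    label_image = run_2d_fcnn(range_image)              # (H, W) class map
    return label_image[row, col]                        # one label per point

# Usage with a stand-in for the 2D FCNN (hypothetical, for illustration):
points = np.random.randn(1000, 3) * 5.0 + np.array([20.0, 0.0, 0.0])
dummy_fcnn = lambda img: (img > 0).astype(np.int64)     # "occupied" vs "empty"
print(segment_points(points, dummy_fcnn).shape)         # (1000,)
```

In a real system, the dummy network would be replaced by a trained 2D segmentation model, and the re-projected labels would then pass through a post-processing step such as the enhancement module mentioned in the summary.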
ISSN: 2377-3766
DOI: 10.1109/LRA.2020.3007440