Simultaneous Viewpoint- and Condition-Invariant Loop Closure Detection Based on LiDAR Descriptor for Outdoor Large-Scale Environments

Bibliographic Details
Published in: IEEE Transactions on Industrial Electronics (1982), Vol. 70, no. 2, pp. 2117–2127
Main Authors: Kong, Dong; Li, Xu; Cen, Yanqing; Xu, Qimin; Wang, Aimin
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2023

Summary: Loop closure detection is a crucial issue in simultaneous localization and mapping for autonomous driving and robotics. In outdoor large-scale and complex environments, existing LiDAR-based methods still suffer from viewpoint changes, condition changes, and perceptual aliasing. To address these drawbacks, this article develops a novel LiDAR-based multimodule cascaded Siamese convolutional neural network, named MMCS-Net, which simulates the human-eye mechanism to extract more discriminative and generic feature descriptors. MMCS-Net is composed of three complementary modules: a Siamese fully convolutional module with cascaded attention (CA_SFC), a rotation-invariant and topological feature enhancement (RT_E) module, and a feature uniqueness enhancement and aggregation compression (UE_AC) module. In particular, the graph structure employed in RT_E can explicitly encode the local topological correlations of point clouds in terms of intensity and geometric cues in parallel. Extensive comparative experiments on the KITTI, NCLT, LGSVL, and real-vehicle datasets demonstrate that the proposed method outperforms state-of-the-art methods and shows high robustness while meeting the real-time requirements of resource-constrained robots.
ISSN: 0278-0046 (print); 1557-9948 (electronic)
DOI: 10.1109/TIE.2022.3163511
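
As a rough illustration of the Siamese-descriptor idea described in the summary, the sketch below shows one shared CNN encoder mapping each LiDAR scan (projected to a range image) to a global descriptor, with descriptor similarity flagging loop-closure candidates. This is a minimal PyTorch sketch under stated assumptions, not the authors' MMCS-Net: the 64×900 input resolution, 256-D descriptor, and layer choices are hypothetical, and the CA_SFC, RT_E, and UE_AC modules are not reproduced here.

```python
# Minimal Siamese-descriptor sketch for LiDAR loop closure detection.
# NOTE: illustrative only -- not the paper's MMCS-Net. Input size, descriptor
# dimension, and layer choices are assumptions made for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DescriptorEncoder(nn.Module):
    """Shared CNN that maps a LiDAR range image to a global descriptor."""

    def __init__(self, desc_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 128, 1, 1)
        )
        self.head = nn.Linear(128, desc_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.backbone(x).flatten(1)           # (B, 128)
        return F.normalize(self.head(f), dim=1)   # L2-normalized descriptor


class SiameseLoopDetector(nn.Module):
    """Applies the same (weight-shared) encoder to both scans and scores them."""

    def __init__(self):
        super().__init__()
        self.encoder = DescriptorEncoder()

    def forward(self, scan_a: torch.Tensor, scan_b: torch.Tensor) -> torch.Tensor:
        da, db = self.encoder(scan_a), self.encoder(scan_b)
        return F.cosine_similarity(da, db)        # high score -> loop candidate


if __name__ == "__main__":
    model = SiameseLoopDetector()
    # Two batches of fake 64x900 range images projected from LiDAR scans.
    a = torch.randn(2, 1, 64, 900)
    b = torch.randn(2, 1, 64, 900)
    print(model(a, b))                            # per-pair similarity in [-1, 1]
```

In practice, a Siamese encoder like this would be trained with a contrastive or triplet loss on labeled positive/negative scan pairs, and candidate loop closures would be retrieved by nearest-neighbor search over the stored descriptors rather than exhaustive pairwise comparison.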