BVMatch: Lidar-Based Place Recognition Using Bird's-Eye View Images

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, Vol. 6, No. 3, pp. 6076-6083
Main Authors: Luo, Lun; Cao, Si-Yuan; Han, Bin; Shen, Hui-Liang; Li, Junwei
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2021
Summary: Recognizing places using Lidar in large-scale environments is challenging due to the sparse nature of point cloud data. In this letter, we present BVMatch, a Lidar-based frame-to-frame place recognition framework that is capable of estimating 2D relative poses. Based on the assumption that the ground area can be approximated as a plane, we uniformly discretize the ground area into grids and project 3D Lidar scans to bird's-eye view (BV) images. We further use a bank of Log-Gabor filters to build a maximum index map (MIM) that encodes the orientation information of the structures in the images. We analyze the orientation characteristics of the MIM theoretically and introduce a novel descriptor called bird's-eye view feature transform (BVFT). The proposed BVFT is insensitive to rotation and intensity variations of BV images. Leveraging the BVFT descriptors, we unify the Lidar place recognition and pose estimation tasks into the BVMatch framework. The experiments conducted on three large-scale datasets show that BVMatch outperforms the state-of-the-art methods in terms of both recall rate of place recognition and pose estimation accuracy.
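
The summary describes two preprocessing steps: discretizing the ground plane into grids to form a BV image, and filtering that image with a bank of Log-Gabor filters to build the maximum index map (MIM). The sketch below is not the authors' code; it is a minimal illustration of those two steps under stated assumptions. The grid resolution, image size, use of point density as pixel intensity, and all filter parameters are assumptions, since the abstract does not specify them.

# Minimal sketch (assumed parameters, not the authors' implementation):
# project a Lidar scan to a BV density image, then build a MIM as the
# per-pixel index of the strongest Log-Gabor orientation response.
import numpy as np

def lidar_to_bv_image(points, grid_res=0.4, side=128):
    """Accumulate points into a (side x side) BV density image.

    points: (N, 3) array of x, y, z coordinates in the sensor frame.
    grid_res: cell size in meters (assumed value).
    """
    half = side * grid_res / 2.0
    img = np.zeros((side, side), dtype=np.float32)
    # Keep points inside the square footprint around the sensor.
    mask = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
    xy = points[mask, :2]
    cols = ((xy[:, 0] + half) / grid_res).astype(int)
    rows = ((xy[:, 1] + half) / grid_res).astype(int)
    np.add.at(img, (rows, cols), 1.0)      # point count per grid cell
    return np.minimum(img, 10.0) / 10.0    # clip and normalize density (assumed)

def log_gabor_bank(shape, n_orient=6, f0=0.1, sigma_f=0.55, sigma_theta=0.4):
    """Bank of Log-Gabor filters in the frequency domain (assumed parameters)."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    radius[0, 0] = 1.0                     # avoid log(0) at the DC term
    theta = np.arctan2(fy, fx)
    radial = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_f)**2))
    radial[0, 0] = 0.0                     # zero response at DC
    bank = []
    for k in range(n_orient):
        theta0 = k * np.pi / n_orient
        # Angle difference wrapped to [-pi, pi] so orientations near 0/pi match.
        dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
        bank.append(radial * np.exp(-dtheta**2 / (2 * sigma_theta**2)))
    return bank

def maximum_index_map(bv_img):
    """MIM: per pixel, the index of the orientation with the strongest response."""
    F = np.fft.fft2(bv_img)
    responses = [np.abs(np.fft.ifft2(F * g)) for g in log_gabor_bank(bv_img.shape)]
    return np.argmax(np.stack(responses, axis=0), axis=0).astype(np.uint8)

# Usage with a random point cloud standing in for a real scan:
scan = np.random.uniform(-25, 25, size=(20000, 3))
mim = maximum_index_map(lidar_to_bv_image(scan))

The MIM values depend only on which orientation channel dominates at each pixel, which is why a descriptor built on it can be made insensitive to intensity variations; the rotation analysis and the BVFT descriptor itself are developed in the paper and are not reproduced here.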
ISSN: 2377-3766
DOI: 10.1109/LRA.2021.3091386