A Comprehensive SLAM Dataset for Indoor Exhibition Environments: Data Collection, Processing, and Comparative Evaluation


Bibliographic Details
Published in: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1-6
Main Authors: Yuan, Songyu; Zhang, Jing; Pan, Chen; Zhang, Chunlong; Wan, Minhong; Zheng, Tao; Gu, Jason
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2023

Summary: The field of Simultaneous Localization and Mapping (SLAM) has witnessed significant advancements in recent years. However, because real-world data are difficult to collect and ground truth is difficult to produce, real indoor datasets remain scarce, especially for scenes with low texture, repetitive texture, or extreme lighting conditions, which slows progress in indoor SLAM. In this paper, we present a dataset captured in a real indoor exhibition hall with diverse types of sensor data, which serves the different schools of SLAM that rely on different sensors. We provide ground-truth 6DoF poses captured by a motion-capture system, RGB-D data captured by a RealSense D455 camera, laser data captured by an Ouster LiDAR, and a high-resolution color point cloud captured by a laser scanner. The dataset offers three distinct sources of ground truth from real-world scenarios, combining motion capture, ICP-based laser localization, and visual localization to provide comprehensive and accurate reference data. It is designed for evaluating positioning methods that employ various types of sensors, moving beyond a single evaluation metric. In addition, we report results of several state-of-the-art SLAM methods on our dataset, illustrating its utility and providing a reference for other researchers. LiDAR-based SLAM converges to the ground truth and achieves good loop closure. An ensemble-based visual SLAM achieves an odometry error of 4% and a loop-closure position error of 40 cm. Additionally, visual localization against the high-resolution point cloud demonstrates an absolute position error of 21 cm. We highlight the potential of our indoor dataset for LiDAR SLAM, RGB-D SLAM, visual localization, and combined methods. The dataset is available at https://github.com/IMBAfarmer09/ExhibitionDataset
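The abstract quotes error figures (odometry error, absolute position error) without specifying the computation. As a hedged illustration only, and not the authors' actual evaluation pipeline, a minimal absolute-position-error computation against the dataset's ground-truth 6DoF poses might look like the following sketch (the function name and toy trajectories are ours; a full evaluation would also align rotation and handle timestamp association):

```python
import numpy as np

def absolute_position_error(gt_positions, est_positions):
    """Mean Euclidean distance between time-aligned ground-truth and
    estimated positions, after removing any constant rigid offset by
    aligning trajectory centroids (translation only, for simplicity)."""
    gt = np.asarray(gt_positions, dtype=float)
    est = np.asarray(est_positions, dtype=float)
    # Center both trajectories so a constant translation offset cancels.
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)
    # Per-pose position error, then average over the trajectory.
    errors = np.linalg.norm(gt_c - est_c, axis=1)
    return errors.mean()

# Toy example: the "estimated" trajectory is the ground truth shifted by a
# constant offset, which centroid alignment removes entirely.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.5, 0.5, 0.0])
print(absolute_position_error(gt, est))  # prints 0.0
```

In practice, evaluations of this kind typically use a full similarity (Umeyama) alignment and report both absolute trajectory error and relative/odometry error, which is consistent with the separate percentage and centimeter figures quoted above.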
DOI: 10.1109/ROBIO58561.2023.10354619