2.5D SLAM Algorithm with Novel Data Fusion Method Between 2D-Lidar and Camera


Bibliographic Details
Published in: 2023 23rd International Conference on Control, Automation and Systems (ICCAS), pp. 1904 - 1907
Main Authors: Oh, Sang Hyeon; Kwak, Dong Hwan; Lim, Hyun Tek
Format: Conference Proceeding
Language: English
Published: ICROS, 17.10.2023

Summary: The SLAM algorithm has been extensively researched and has gained significant relevance in daily life, particularly with advances in robot technology and autonomous driving. Currently, SLAM can be broadly divided into 2D SLAM and 3D SLAM. 2D SLAM achieves accurate mapping and localization in the plane with low computational requirements; however, it has the limitation of not considering 3D information. In contrast, 3D SLAM can generate maps and perform localization in complex indoor spaces with obstacles by incorporating 3D data on objects, but it may face challenges in real-time implementation on embedded systems due to high computational demands. In this paper, we propose a 2.5D SLAM using the Life-Long Feature (LLF) algorithm. First, 2.5D SLAM projects 3D camera feature points that are not perceived by the 2D-Lidar onto the 2D-Lidar plane. The transformation matrix that projects camera features onto the 2D-Lidar plane is calculated by optimizing a least-squares problem over the same points recognized by both the camera and the 2D-Lidar. Additionally, the LLF algorithm enables updating and maintaining camera feature points based on the robot's current position, even when the camera is not perceiving the objects. The difference in Field of View (FOV) between the camera and the LiDAR causes such features to be recognized as dynamic objects and their feature points to be removed. The LLF algorithm solves this problem even when an object is within range of the 2D-LiDAR but not visible in the camera's FOV. In experiments, we confirmed that our proposed 2.5D SLAM with the LLF algorithm shows better performance in detecting 3D objects and mapping than previous 2D SLAM.
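The least-squares alignment mentioned in the summary can be sketched as follows. This is only a minimal illustration, not the authors' exact formulation: it assumes the camera features have already been dropped onto the lidar plane as 2D points, that the residual alignment is a rigid transform (rotation plus translation), and it uses the standard SVD-based (Kabsch/Procrustes) closed-form solution of the least-squares problem over paired correspondences. The function name `estimate_plane_transform` is hypothetical.

```python
import numpy as np

def estimate_plane_transform(cam_pts, lidar_pts):
    """Least-squares rigid transform (R, t) mapping camera feature points
    onto the matching 2D-lidar points, via the SVD (Kabsch) solution.

    cam_pts, lidar_pts: (N, 2) arrays of paired correspondences.
    Returns R (2x2 rotation) and t (2-vector) with lidar ~ R @ cam + t.
    """
    cam_pts = np.asarray(cam_pts, dtype=float)
    lidar_pts = np.asarray(lidar_pts, dtype=float)

    # Center both point sets so the rotation can be solved independently.
    cam_c = cam_pts.mean(axis=0)
    lid_c = lidar_pts.mean(axis=0)

    # Cross-covariance of the centered correspondences.
    H = (cam_pts - cam_c).T @ (lidar_pts - lid_c)

    # SVD gives the rotation minimizing the sum of squared residuals.
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T

    # Translation follows from the centroids.
    t = lid_c - R @ cam_c
    return R, t
```

Given enough non-collinear correspondences seen by both sensors, the recovered (R, t) can then be applied to every projected camera feature to fuse it into the 2D-lidar frame.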
ISSN:2642-3901
DOI:10.23919/ICCAS59377.2023.10316979