RGBD Navigation: A 2D navigation framework for visual SLAM with pose compensation

Bibliographic Details
Published in: 2023 IEEE International Conference on Real-time Computing and Robotics (RCAR), pp. 644-649
Main Authors: Zhang, Teng; Wang, Pengfei; Zha, Fusheng; Guo, Wei; Li, Mantian
Format: Conference Proceeding
Language: English
Published: IEEE, 17.07.2023
DOI: 10.1109/RCAR58764.2023.10249297

Summary: The breakthrough in SLAM (Simultaneous Localization and Mapping) technology has greatly driven the development of robot navigation. LiDAR-based navigation, built on mature LiDAR SLAM and the ROS navigation stack, is now well established. Visual navigation, however, still faces three difficulties: open-source vSLAM (visual Simultaneous Localization and Mapping) and VO (Visual Odometry) systems cannot establish dense maps suitable for navigation; visual information adapts poorly to the ROS navigation stack; and sensor characteristics cause map-positioning problems. To address these three issues, this work presents a framework called RGBD Navigation, which uses RGBD sensors to achieve vision-based navigation for robots. The framework builds a dense point cloud map from the pose information provided by vSLAM/VO together with the depth and RGB images, and converts it into a 2D occupancy grid map. It then converts the depth image into two-dimensional laserscan data. Finally, it establishes a "map-to-odom" positioning node based on the pose provided by SLAM/VO to localize the robot on the map. The framework thereby solves the three main problems of adapting visual sensors to the ROS navigation stack and realizes a vision-based 2D navigation system.
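The depth-image-to-laserscan step mentioned in the summary can be sketched as follows. This is not the paper's implementation (it resembles the approach of the ROS `depthimage_to_laserscan` package): the function name, the row band, and the intrinsics `fx`, `cx` are illustrative assumptions.

```python
import numpy as np

def depth_to_scan(depth, fx, cx, row_band=(230, 250)):
    """Convert a depth image (meters) into a planar laser scan.

    For each image column, the nearest valid depth within a horizontal
    band of rows near the optical axis is projected into the camera's
    x-z plane, yielding a range and bearing as a 2D LiDAR would report.
    """
    band = depth[row_band[0]:row_band[1], :]     # rows near the optical axis
    band = np.where(band > 0, band, np.inf)      # mask invalid (zero) pixels
    z = band.min(axis=0)                         # nearest obstacle per column
    u = np.arange(depth.shape[1])
    x = (u - cx) * z / fx                        # lateral offset in meters
    ranges = np.hypot(x, z)                      # Euclidean range to obstacle
    angles = np.arctan2(x, z)                    # bearing from the optical axis
    return angles, ranges
```

In a ROS setting, the resulting `angles`/`ranges` arrays would be packed into a `sensor_msgs/LaserScan` message so that existing 2D costmap and localization plugins can consume them unchanged.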
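The "map-to-odom" node in the summary follows the standard ROS tf convention: the localizer publishes the correction transform that reconciles the drifting odometry frame with the map frame. A minimal 2D sketch of that computation, assuming homogeneous SE(2) transforms (function names are illustrative, not from the paper):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid-body transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def map_to_odom(T_map_base, T_odom_base):
    """Correction published by the localization node.

    Chosen so that (map->odom) @ (odom->base) == map->base,
    i.e. composing the correction with raw odometry recovers
    the SLAM/VO pose of the robot in the map frame.
    """
    return T_map_base @ np.linalg.inv(T_odom_base)
```

Publishing map->odom rather than map->base directly lets the smooth, high-rate odometry keep driving the controller between (possibly jumpy) SLAM/VO pose updates.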