RWT-SLAM: Robust Visual SLAM for Weakly Textured Environments

Bibliographic Details
Published in: 2024 IEEE Intelligent Vehicles Symposium (IV), pp. 913–919
Main Authors: Peng, Qihao; Zhao, Xijun; Dang, Ruina; Xiang, Zhiyu
Format: Conference Proceeding
Language: English
Published: IEEE, 02.06.2024

Summary: As a fundamental task for intelligent robots, visual SLAM has made significant progress in recent years. However, robust SLAM in weakly textured environments remains a challenging task. In this paper, we present a novel visual Robust SLAM for Weak-Textured environments (RWT-SLAM) to address this problem. Unlike existing methods that use detector-based deep networks for interest point detection, we propose extracting distinctive features from a detector-free network, namely LoFTR, to avoid the difficulty of manually annotating feature points in weakly textured images. We generate multi-level feature vectors from LoFTR to form dense descriptors for each pixel in the input image. A keypoint localization component is then proposed to measure the saliency of the descriptors and select the distinctive pixels as keypoints. We integrate these new keypoints into the popular ORB-SLAM framework and compare it with state-of-the-art methods. Extensive experiments are carried out on the popular TUM RGB-D and OpenLORIS-Scene benchmarks, as well as our own dataset. The results demonstrate the superior performance of our method in weakly textured environments.
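The keypoint localization step described in the abstract (measure the saliency of per-pixel descriptors, then keep the most distinctive pixels) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the saliency measure here (descriptor energy contrasted against a local mean) and the `select_keypoints` function, its parameters, and the greedy non-maximum suppression are all assumptions for the sake of a runnable example.

```python
import numpy as np

def select_keypoints(desc_map, num_keypoints=500, nms_radius=4):
    """Pick distinctive pixels from a dense (H, W, D) descriptor map.

    desc_map would come from a detector-free network such as LoFTR.
    The saliency criterion below is a hypothetical proxy: pixels whose
    descriptor energy stands out against the local neighbourhood mean.
    """
    H, W, _ = desc_map.shape
    # Per-pixel descriptor L2 norm as a crude "energy" signal.
    energy = np.linalg.norm(desc_map, axis=2)
    # Local mean via a box filter of size (2r+1) x (2r+1).
    k = 2 * nms_radius + 1
    pad = np.pad(energy, nms_radius, mode="edge")
    local_mean = np.zeros_like(energy)
    for dy in range(k):
        for dx in range(k):
            local_mean += pad[dy:dy + H, dx:dx + W]
    local_mean /= k * k
    saliency = energy - local_mean  # positive where a pixel stands out
    # Greedy non-maximum suppression: take the best pixel, suppress its
    # neighbourhood, repeat until enough keypoints are collected.
    keypoints = []
    s = saliency.copy()
    for _ in range(num_keypoints):
        idx = np.argmax(s)
        y, x = divmod(idx, W)
        if s[y, x] == -np.inf:  # everything suppressed; stop early
            break
        keypoints.append((x, y))
        y0, y1 = max(0, y - nms_radius), min(H, y + nms_radius + 1)
        x0, x1 = max(0, x - nms_radius), min(W, x + nms_radius + 1)
        s[y0:y1, x0:x1] = -np.inf
    return np.array(keypoints)
```

The selected pixels, paired with their dense descriptors, could then be fed to an ORB-SLAM-style front end in place of ORB keypoints, which is the integration the abstract describes.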
ISSN: 2642-7214
DOI: 10.1109/IV55156.2024.10588822