High Precision ORB-SLAM Dense Reconstruction Based on Depth Visual Odometer in Dynamic Environments
Published in: 2023 9th International Conference on Virtual Reality (ICVR), pp. 41-48
Format: Conference Proceeding
Language: English
Published: IEEE, 12.05.2023
Summary: Most SLAM systems assume a static environment, yet most real scenes contain dynamic objects that cause mismatches during camera pose estimation and degrade localization accuracy and system robustness. To improve the quality and accuracy of SLAM dense reconstruction in dynamic environments, and to increase the real-time performance of downstream tasks such as navigation and human-computer interaction, this paper proposes a real-time SLAM dense reconstruction method based on object detection in dynamic environments. First, YOLOv5 is used for frame-by-frame detection and semantic marking of images, so that image feature points lying under dynamic semantic masks can be accurately identified. Second, a keyframe filtering algorithm based on dynamic semantic marking is proposed to ensure the accuracy of dense map reconstruction: frames containing dynamic objects are eliminated, which removes the interference of point-cloud redundancy and moving objects in dynamic environments. Finally, scene depth information is taken as an important cue for characterizing the geometric structure of the scene. By introducing a depth factor into the traditional feature extraction algorithm, a new feature extraction and feature descriptor computation method that jointly uses depth information is explored, yielding a high-precision visual SLAM system based on a depth-joint visual odometer. Experiments on the TUM RGB-D public dataset show that the trajectory accuracy of the constructed dense point cloud maps on some dynamic scene sequences is improved by 16.1% over DS-SLAM and 82.3% over ORB-SLAM2.
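As an illustration of the first two steps (dynamic-mask feature filtering and keyframe selection) and of the idea of a depth factor, the following is a minimal sketch, assuming a YOLOv5 model loaded via torch.hub and OpenCV ORB features; the class list, thresholds, and helper names (detect_dynamic_boxes, filter_static_keypoints, accept_as_keyframe, depth_weighted_response) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (hypothetical names and thresholds), not the paper's code:
# drop ORB feature points under YOLOv5 detections of movable classes, reject
# keyframes dominated by dynamic content, and re-weight responses by a depth factor.
import cv2
import numpy as np

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed movable categories

def detect_dynamic_boxes(model, frame, conf_thresh=0.5):
    """Run a torch.hub YOLOv5 model and keep boxes of movable classes."""
    results = model(frame)                                 # YOLOv5 inference
    boxes = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():      # [x1, y1, x2, y2, conf, cls]
        if results.names[int(cls)] in DYNAMIC_CLASSES and conf > conf_thresh:
            boxes.append(tuple(map(int, xyxy)))
    return boxes

def filter_static_keypoints(gray, boxes):
    """Extract ORB features and discard those lying inside any dynamic box."""
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return [], None
    keep_kp, keep_desc = [], []
    for kp, desc in zip(keypoints, descriptors):
        x, y = kp.pt
        if not any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in boxes):
            keep_kp.append(kp)
            keep_desc.append(desc)
    return keep_kp, np.asarray(keep_desc)

def accept_as_keyframe(frame_shape, boxes, max_dynamic_ratio=0.2):
    """Reject candidate keyframes whose dynamic regions cover too much of the image."""
    h, w = frame_shape[:2]
    dynamic_area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    return dynamic_area / float(h * w) < max_dynamic_ratio

def depth_weighted_response(keypoints, depth, scale=1.0):
    """Toy 'depth factor': scale keypoint responses by inverse depth so that
    nearer, geometrically informative points are preferred during selection."""
    for kp in keypoints:
        z = depth[int(kp.pt[1]), int(kp.pt[0])]
        if z > 0:
            kp.response *= scale / float(z)
    return keypoints
```

In the full system described by the abstract, the depth factor enters the feature extraction and descriptor computation itself rather than being applied as a post-hoc re-weighting; the sketch only conveys the idea of conditioning feature selection on depth.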
ISSN: 2331-9569
DOI: 10.1109/ICVR57957.2023.10169708