Robust RGB-D SLAM for Dynamic Environments Based on YOLOv4

Bibliographic Details
Published in: 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), pp. 1-6
Main Authors: Rong, Hanxiao; Ramirez-Serrano, Alex; Guan, Lianwu; Cong, Xiaodan
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2020

Summary: Most Simultaneous Localization and Mapping (SLAM) algorithms are limited to static environments. However, the world is not static: dynamic objects are typically present, and they can cause general SLAM systems to fail. In this paper, a dynamic object removal method combining semantic detection with depth image segmentation is proposed and integrated into the real-time ORB-SLAM2 system to achieve robust RGB-D SLAM in dynamic environments. In the proposed method, potential dynamic regions are captured via YOLOv4 detection and K-means image segmentation. Unlike approaches that discard all features in such regions outright, the potential dynamic regions are re-examined via dynamic outlier rejection, which improves the reliability of dynamic object removal. Experiments on the TUM RGB-D dataset demonstrate that the proposed method is more robust and accurate in dynamic environments than the original ORB-SLAM2 without dynamic object removal.
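The pipeline described in the summary can be illustrated with a minimal sketch: given a detector bounding box (as YOLOv4 would output) and the aligned depth image, cluster the depth values inside the box with K-means, treat the nearest cluster as the dynamic object, and discard feature points that fall inside the resulting mask. The function names, the choice of k=2, and the pure-NumPy 1-D K-means are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def dynamic_mask_from_depth(depth, box, k=2, iters=20):
    """Cluster depth values inside a detection box with 1-D K-means and
    return a boolean mask over the whole image marking the nearest
    cluster (assumed to be the detected dynamic object)."""
    x0, y0, x1, y1 = box
    roi = depth[y0:y1, x0:x1].astype(float)
    vals = roi.ravel()
    # Initialize cluster centers spread across the depth range.
    centers = np.linspace(vals.min(), vals.max(), k)
    for _ in range(iters):
        # Assign each pixel to its closest center, then update centers.
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = vals[labels == j].mean()
    fg = int(np.argmin(centers))  # nearest cluster = object in front
    mask = np.zeros(depth.shape, dtype=bool)
    mask[y0:y1, x0:x1] = (labels == fg).reshape(roi.shape)
    return mask

def filter_keypoints(keypoints, mask):
    """Keep only keypoints (u, v) whose pixel is outside the dynamic mask,
    so they remain available for pose estimation."""
    return [(u, v) for (u, v) in keypoints if not mask[v, u]]
```

In ORB-SLAM2 terms, `filter_keypoints` would run between ORB extraction and tracking, so that features on the segmented dynamic object never enter the pose optimization; the re-examination step the paper describes (dynamic outlier rejection) would then decide whether a masked region is actually moving before features are permanently discarded.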
ISSN:2577-2465
DOI:10.1109/VTC2020-Fall49728.2020.9348738