Outdoor Monocular Visual Odometry Enhancement Using Depth Map and Semantic Segmentation


Bibliographic Details
Published in: 2020 20th International Conference on Control, Automation and Systems (ICCAS), pp. 1040 - 1045
Main Authors: Kim, Jee-Seong; Kim, Chul-Hong; Shin, Yong-Min; Cho, Il-Soo; Cho, Dong-Il Dan
Format: Conference Proceeding
Language: English
Published: Institute of Control, Robotics, and Systems - ICROS, 13.10.2020

Summary: An outdoor environment is challenging for the localization of a mobile robot. For robust visual odometry, accurate feature matching and triangulation are essential. Features extracted from building windows and car surfaces lead to wrong triangulation results because these surfaces are reflective. Landmarks at short distances degrade feature-matching performance, and landmarks at long distances cause triangulation errors. Inaccurate feature matching and triangulation errors lead to errors in the estimated robot pose. In this paper, an outdoor monocular visual odometry method using a pre-trained depth estimation network and a pre-trained semantic segmentation network is proposed. The semantic segmentation network predicts a semantic label for every pixel, and the depth estimation network predicts the depth of every pixel. By applying semantic constraints to feature matching and a depth constraint to triangulation, the accuracy of both procedures is enhanced. Additionally, pose graph optimization is performed over all estimated robot poses and landmark positions. The performance of the proposed method is evaluated in dataset-based experiments, which show that the proposed algorithm is more accurate than a visual odometry algorithm using Oriented FAST and Rotated BRIEF (ORB) features.
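The constraint stage described in the summary can be sketched in a few lines. The class names (`window`, `car`), the depth thresholds, and the `filter_matches` helper below are illustrative assumptions; the record does not give the paper's actual label set, thresholds, or implementation.

```python
import numpy as np

# Illustrative label names and thresholds (assumptions, not from the paper)
REFLECTIVE = {"window", "car"}     # semantic classes prone to reflections
MIN_DEPTH, MAX_DEPTH = 2.0, 40.0   # usable depth range for triangulation (metres)

def filter_matches(matches, labels_a, labels_b, depth_a):
    """Keep a putative match (pixel in image A, pixel in image B) only if
    both features carry the same semantic label, that label is not a
    reflective class, and the predicted depth at the first view's pixel
    lies inside the usable triangulation range."""
    kept = []
    for (ya, xa), (yb, xb) in matches:
        la, lb = labels_a[ya, xa], labels_b[yb, xb]
        if la != lb or la in REFLECTIVE:
            continue  # semantic constraint rejects the match
        if not (MIN_DEPTH <= depth_a[ya, xa] <= MAX_DEPTH):
            continue  # depth constraint rejects the match
        kept.append(((ya, xa), (yb, xb)))
    return kept

# Tiny demo: 2x2 label maps; a window pixel and an over-distant pixel are rejected.
labels = np.array([["road", "window"], ["building", "road"]], dtype=object)
depth = np.array([[10.0, 5.0], [100.0, 3.0]])
putative = [((0, 0), (0, 0)), ((0, 1), (0, 1)), ((1, 0), (1, 0)), ((1, 1), (1, 1))]
print(filter_matches(putative, labels, labels, depth))
```

In a full pipeline of the kind the abstract describes, a filter like this would sit between descriptor matching (e.g. on ORB features) and triangulation, with pose graph optimization running afterwards over the surviving poses and landmarks.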
ISSN:2642-3901
DOI:10.23919/ICCAS50221.2020.9268347