Robust RGB-D Visual Odometry Based on the Line Intersection Structure Feature in Low-Textured Scenes

Bibliographic Details
Published in: 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), pp. 390-394
Main Authors: Li, Xianlong; Zhang, Chongyang
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2018
Summary: Most methods of estimating camera motion for visual odometry are based on point features. Point-based algorithms often fail in low-textured scenes because it is difficult to extract a large number of stable point features; line segments, by contrast, are usually abundant. However, the instability of line-segment endpoints and the loss of line-segment connectivity make matching line segments difficult. This paper proposes a new odometry algorithm based on the line intersection structure feature (LISF). A LISF is formed from an adjacent pair of line segments in the 2D image that are coplanar in 3D, and consists of the two line segments together with their junction point. These LISFs are then described for matching with a proposed combined descriptor consisting of a structure feature and a gradient feature. We also implement an RGB-D odometry system that uses LISFs, adopting RANSAC-based motion estimation followed by g2o-based motion refinement. In experiments on data sets of weakly textured scenes, the proposed method achieves high continuity and accuracy.
DOI:10.1109/CCIS.2018.8691213
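To illustrate the kind of feature the abstract describes, the following is a minimal sketch of grouping adjacent 2D line segments with their junction point. It is not the authors' implementation: the segment representation, the `max_gap` adjacency threshold, and the function names are assumptions, and the paper's 3D coplanarity check (which requires depth data) is omitted here.

```python
import math

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4);
    returns None for (near-)parallel lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

def form_lisfs(segments, max_gap=20.0):
    """Pair segments whose extended intersection lies within max_gap pixels
    of an endpoint of each; a candidate LISF is (segment_a, segment_b, junction)."""
    lisfs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            a, b = segments[i], segments[j]
            pt = line_intersection(*a, *b)
            if pt is None:
                continue
            # Distance from the junction to the nearest endpoint of each segment.
            da = min(math.dist(pt, a[0]), math.dist(pt, a[1]))
            db = min(math.dist(pt, b[0]), math.dist(pt, b[1]))
            if da <= max_gap and db <= max_gap:
                lisfs.append((a, b, pt))
    return lisfs

# Example: two nearly joined segments form a junction at (10, 0); a distant
# parallel segment is not paired with either.
segs = [((0.0, 0.0), (10.0, 0.0)),
        ((10.0, 2.0), (10.0, 12.0)),
        ((100.0, 100.0), (110.0, 100.0))]
print(form_lisfs(segs))  # one LISF with junction (10.0, 0.0)
```

In a full pipeline the segments would come from a detector such as LSD, and each surviving pair would additionally be checked for 3D coplanarity against the depth image before being described and matched.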