A Motion Planning Strategy for the Active Vision-Based Mapping of Ground-Level Structures

Bibliographic Details
Published in: IEEE Transactions on Automation Science and Engineering, Vol. 15, no. 1, pp. 356-368
Main Authors: Srinivasan Ramanagopal, Manikandasriram; Nguyen, Andre Phu-Van; Le Ny, Jerome
Format: Journal Article
Language: English
Published: IEEE, 01.01.2018
Summary: This paper presents a strategy to guide a mobile ground robot equipped with a camera or depth sensor in order to autonomously map the visible part of a bounded 3-D structure. We describe motion planning algorithms that determine appropriate successive viewpoints and attempt to automatically fill holes in the point cloud produced by the sensing and perception layer. The emphasis is on accurately reconstructing a 3-D model of a structure of moderate size rather than on mapping large open environments, with applications, for example, in architecture, construction, and inspection. The proposed algorithms require no initialization in the form of a mesh model or a bounding box, and the generated paths are well adapted to situations where the vision sensor is used simultaneously for mapping and for localizing the robot, in the absence of an additional absolute positioning system. We analyze the coverage properties of our policy and compare its performance with the classic frontier-based exploration algorithm. We illustrate its efficacy for different structure sizes, levels of localization accuracy, and depth-sensor ranges, and validate our design in a real-world experiment.

Note to Practitioners: The objective of this paper is to automate the process of building a 3-D model of a structure of interest that is as complete as possible, using a mobile camera or depth sensor, in the absence of any prior information about the structure. Given that increasingly robust solutions to the visual simultaneous localization and mapping problem are now readily available, the key challenge we address here is to develop motion planning policies that control the trajectory of the sensor in a way that improves mapping performance. We target in particular scenarios where no external absolute positioning system is available, such as mapping certain indoor environments where GPS signals are blocked. In this case, it is often important to revisit previously seen locations relatively quickly, in order to avoid excessive drift in the dead-reckoning localization system. Our system works by first determining the boundaries of the structure before attempting to fill the holes in the constructed model. Its performance is illustrated through simulations and a real-world experiment performed with a depth sensor carried by a mobile manipulator.
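The summary compares the proposed policy against classic frontier-based exploration. As background on that baseline, here is a minimal sketch of frontier detection on a 2-D occupancy grid; the grid encoding and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Assumed occupancy-grid encoding (illustrative, not from the paper):
FREE, UNKNOWN, OCCUPIED = 0, -1, 1

def find_frontiers(grid):
    """Return (row, col) indices of frontier cells: FREE cells that
    border at least one UNKNOWN cell under 4-connectivity. In
    frontier-based exploration, the robot repeatedly drives toward
    such cells to push back the boundary of the unmapped region."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

For example, on a 3x3 grid whose right column is unknown and whose center is occupied, the frontier cells are the free cells touching that unknown column. The paper's point of departure is that such frontier-chasing paths are poorly suited to dense reconstruction of a single structure, motivating its viewpoint-selection and hole-filling strategy instead.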
ISSN: 1545-5955, 1558-3783
DOI: 10.1109/TASE.2017.2762088