Fitting primitive shapes in point clouds: a practical approach to improve autonomous underwater grasp specification of unknown objects

Bibliographic Details
Published in: Journal of Experimental & Theoretical Artificial Intelligence, Vol. 28, No. 1-2, pp. 369-384
Main Authors: Fornas, D., Sales, J., Peñalver, A., Pérez, J., Fernández, J.J., Marín, R., Sanz, P.J.
Format: Journal Article
Language: English
Published: Abingdon: Taylor & Francis, 03.03.2016

Summary: This article presents research on autonomous underwater robot manipulation. Ongoing research in underwater robotics aims to increase the autonomy of intervention operations that require physical interaction, bringing social benefits to fields such as archaeology and biology that cannot afford costly underwater operations with remotely operated vehicles. Autonomous grasping is still a very challenging skill, especially in underwater environments, with highly unstructured scenarios, limited sensor availability and adverse conditions that affect the robot's perception and control systems. To tackle these issues, we propose vision and segmentation techniques that improve the specification of grasping operations on primitive-shaped underwater objects. Several sources of stereo information are used to gather 3D data and build a model of the object. A RANSAC segmentation algorithm estimates the model parameters, from which a set of feasible grasps is computed. This approach is validated in both simulated and real underwater scenarios.
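
For context, the RANSAC primitive fitting the summary refers to can be illustrated with a short sketch. The following Python/NumPy function is a minimal, self-contained RANSAC plane fit, a typical first step for removing the supporting surface before fitting an object primitive such as a cylinder. The function name, thresholds and iteration count are illustrative assumptions, not the authors' implementation.

import numpy as np

def ransac_plane(points, n_iters=500, inlier_thresh=0.01, seed=None):
    # points: (N, 3) array of 3D coordinates.
    # Returns (normal, d, inlier_mask) for the plane n.x + d = 0
    # supported by the largest inlier set found.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        # Score the candidate: count points within the distance threshold.
        inliers = np.abs(points @ normal + d) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    if best_model is None:
        raise ValueError("no non-degenerate sample found")
    return best_model[0], best_model[1], best_inliers

In a pipeline like the one the article describes, the inliers of the dominant plane (e.g., the seabed or a table surface) would be discarded and the same sampling-and-scoring loop repeated with a cylinder or other primitive model on the remaining points; the fitted parameters then constrain where feasible grasps can be placed.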
ISSN: 0952-813X, 1362-3079
DOI: 10.1080/0952813X.2015.1024495