Learning RGB-D descriptors of garment parts for informed robot grasping

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 35, pp. 246-258
Main Authors: Ramisa, Arnau; Alenyà, Guillem; Moreno-Noguer, Francesc; Torras, Carme
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.10.2014

Summary: Robotic handling of textile objects in household environments is an emerging application that has recently received considerable attention thanks to the development of domestic robots. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this work we propose a vision-based method, built on the Bag of Visual Words approach, that combines appearance and 3D information to detect parts suitable for grasping in clothes, even when they are highly wrinkled. We also contribute a new, annotated, garment part dataset that can be used for benchmarking classification, part detection, and segmentation algorithms. The dataset is used to evaluate our approach and several state-of-the-art 3D descriptors for the task of garment part detection. Results indicate that appearance is a reliable source of information, but that augmenting it with 3D information can help the method perform better with new clothing items.
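The Bag of Visual Words approach named in the summary can be sketched roughly as follows: cluster local descriptors into a visual vocabulary, represent each image as a histogram of visual-word occurrences, and train a classifier on those histograms. This is a minimal illustration on synthetic descriptors, not the paper's implementation; the descriptor generator, vocabulary size, and linear-SVM classifier are assumptions for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic "local descriptors": stand-ins for the appearance (e.g. SIFT)
# and 3D descriptors extracted from image patches in a real pipeline.
def make_image_descriptors(center, n=50, dim=8):
    return rng.normal(loc=center, scale=0.5, size=(n, dim))

# Two classes of "garment parts" with different descriptor statistics.
train_images = [make_image_descriptors(c) for c in [0.0] * 10 + [2.0] * 10]
train_labels = [0] * 10 + [1] * 10

# 1) Build a visual vocabulary by clustering all training descriptors.
vocab_size = 16
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(train_images))

# 2) Represent each image as a normalized histogram of visual words.
def bovw_histogram(descriptors):
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X_train = np.array([bovw_histogram(d) for d in train_images])

# 3) Train a linear classifier on the histograms.
clf = LinearSVC()
clf.fit(X_train, train_labels)

# Classify the descriptors of a previously unseen "image".
pred = clf.predict([bovw_histogram(make_image_descriptors(2.0))])[0]
```

In the paper's setting, step 2 would be computed over detected image regions so that each candidate region can be scored as a graspable part or not; combining appearance and 3D cues amounts to concatenating (or separately quantizing) the two descriptor families before the histogram step.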
ISSN: 0952-1976
eISSN: 1873-6769
DOI: 10.1016/j.engappai.2014.06.025