ViSE: Vision-Based 3D Online Shape Estimation of Continuously Deformable Robots


Bibliographic Details
Main Authors: Zheng, Hehui; Pinzello, Sebastian; Cangan, Barnabas Gavin; Buchner, Thomas; Katzschmann, Robert K.
Format: Journal Article
Language: English
Published: 09.11.2022

More Information
Summary: The precise control of soft and continuum robots requires knowledge of their shape. In contrast to classical rigid robots, the shape of these robots has infinitely many degrees of freedom. To partially reconstruct the shape, proprioceptive techniques use built-in sensors, which yield inaccurate results and increase fabrication complexity. Exteroceptive methods have so far relied on placing reflective markers on all tracked components and triangulating their positions with multiple motion-capture cameras. Such tracking systems are expensive and infeasible for deformable robots interacting with the environment because of marker occlusion and damage. Here, we present a regression approach for 3D shape estimation using a convolutional neural network. The proposed approach takes advantage of data-driven supervised learning and is capable of real-time, marker-less shape estimation during inference. Two images of the robotic system are taken simultaneously at 25 Hz from two different perspectives and fed to the network, which returns the parameterized shape for each image pair. The proposed approach outperforms marker-less state-of-the-art methods by up to 4.4% in estimation accuracy while being more robust and requiring no prior knowledge of the shape. The approach is easy to implement because it requires only two color cameras (no depth sensing) and no explicit calibration of the extrinsic parameters. Evaluations on two types of soft robotic arms and a soft robotic fish demonstrate the method's accuracy and versatility on highly deformable systems in real time. The robust performance of the approach under different scene modifications (camera alignment and brightness) suggests that it generalizes to a wider range of experimental setups, which will benefit downstream tasks such as robotic grasping and manipulation.
DOI: 10.48550/arxiv.2211.05222
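
The summary describes a two-view CNN that regresses a parameterized shape from a pair of synchronized RGB frames. The record does not include the authors' architecture, so the following is only a minimal PyTorch sketch of that idea under stated assumptions: a shared ResNet-18 backbone (hypothetical choice) encodes each view, the two embeddings are concatenated, and a small head regresses `n_params` shape parameters. The backbone, head width, and parameter count are illustrative, not the published design.

```python
# Minimal sketch of a two-view shape-regression CNN, loosely following the
# pipeline described in the abstract. Architecture details (ResNet-18
# backbone, head width, number of shape parameters) are assumptions for
# illustration, not the authors' published design.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoViewShapeRegressor(nn.Module):
    def __init__(self, n_params: int = 12):
        super().__init__()
        # Shared backbone: both camera views pass through the same encoder.
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features  # 512 for ResNet-18
        backbone.fc = nn.Identity()         # drop the classification head
        self.encoder = backbone
        # Fuse the two view embeddings and regress the shape parameters.
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
        # view_a, view_b: (B, 3, H, W) RGB frames captured simultaneously
        # from two different perspectives.
        fused = torch.cat([self.encoder(view_a), self.encoder(view_b)], dim=1)
        return self.head(fused)  # (B, n_params) parameterized shape


# Usage: one synchronized image pair -> one shape-parameter vector.
model = TwoViewShapeRegressor(n_params=12)
a = torch.randn(1, 3, 224, 224)
b = torch.randn(1, 3, 224, 224)
shape_params = model(a, b)
print(shape_params.shape)  # torch.Size([1, 12])
```

Because the encoder weights are shared between views and the head consumes only concatenated image features, no depth input or extrinsic camera calibration appears anywhere in the forward pass, consistent with the two-color-camera setup the summary describes.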