Towards Safe Control of Continuum Manipulator Using Shielded Multiagent Reinforcement Learning
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 15.06.2021 |
Subjects | |
DOI | 10.48550/arxiv.2106.07892 |
Summary: Continuum robotic manipulators are increasingly adopted in minimally invasive surgery. However, their nonlinear behavior is challenging to model accurately, especially when subject to external interaction, potentially leading to poor control performance. In this letter, we investigate the feasibility of adopting model-free multiagent reinforcement learning (RL), namely a multiagent deep Q-network (MADQN), to control a 2-degree-of-freedom (DoF) cable-driven continuum surgical manipulator. The control of the robot is formulated as a one-DoF, one-agent problem in the MADQN framework to improve learning efficiency. Combined with a shielding scheme that enables dynamic variation of the action-set boundary, MADQN leads to efficient and, importantly, safer control of the robot. Shielded MADQN enabled the robot to perform point and trajectory tracking with submillimeter root-mean-square errors under external loads, soft obstacles, and rigid collisions, which are common interaction scenarios encountered by surgical manipulators. The controller was further shown to be effective on a miniature continuum robot with high structural nonlinearity, achieving trajectory tracking with submillimeter accuracy under an external payload.
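The abstract's shielding idea, restricting each per-DoF agent's action set to a dynamically updated safe range, can be illustrated with a minimal sketch. The discrete cable-displacement increments, safety bounds, and helper names below are illustrative assumptions for a generic shield-masked greedy selection step, not the authors' implementation.

```python
# Minimal sketch: shield-masked greedy action selection for one per-DoF DQN agent.
# The increment grid, bounds, and Q-value source are illustrative assumptions.
import numpy as np

# Hypothetical discrete action set: cable-displacement increments in mm.
ACTION_INCREMENTS = np.array([-0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.4])

def shielded_greedy_action(q_values, cable_pos, lower_bound, upper_bound):
    """Return the highest-Q action whose resulting cable position stays
    inside the dynamically updated safe interval [lower_bound, upper_bound]."""
    next_pos = cable_pos + ACTION_INCREMENTS
    safe_mask = (next_pos >= lower_bound) & (next_pos <= upper_bound)
    if not safe_mask.any():
        # If the shield rejects every increment, fall back to holding position.
        return int(np.argmin(np.abs(ACTION_INCREMENTS)))
    masked_q = np.where(safe_mask, q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: Q-values would normally come from the agent's Q-network.
q = np.array([0.2, 0.5, 0.9, 0.1, 0.7, 1.2, 0.3])
a = shielded_greedy_action(q, cable_pos=4.8, lower_bound=0.0, upper_bound=5.0)
print("selected increment (mm):", ACTION_INCREMENTS[a])
```

In this toy example the largest positive increment is masked out because it would push the cable past the assumed upper bound, so the agent picks the best-valued action that remains within the safe set.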