Deep learning-based pose prediction for visual servoing of robotic manipulators using image similarity
Published in: Neurocomputing (Amsterdam), Vol. 491, pp. 343–352
Format: Journal Article
Language: English
Published: Elsevier B.V., 28.06.2022
Summary: The accuracy of pose prediction is crucial in learning-based visual servoing. Motivated by the fact that the more similar two observed images are, the closer the corresponding camera poses, we propose a joint training strategy with a two-part loss function. One part is the least absolute deviation (L1) loss, defined by the error between the predicted pose and the pose label. The other is the mean similarity image measurement (MSIM) loss, which is related to the images' brightness, contrast, and structural similarity and is determined by the differences between the input image and the image corresponding to the predicted pose. Meanwhile, a data generator based on spherical projection is created to generate training data uniformly for a CNN model, and position-based visual servoing (PBVS) is designed for a robotic manipulator after pose prediction. Numerical simulation and real experiments are conducted in a virtual environment and on a UR3 manipulator. The results show that the proposed method achieves more accurate pose prediction, is robust to occlusion disturbance, and realizes PBVS using monocular images.
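The two-part loss described in the summary can be sketched as follows. This is an illustrative interpretation, not the authors' implementation: the function names `l1_pose_loss`, `msim_loss`, and `joint_loss` and the weighting factor `alpha` are assumed, and the MSIM term is approximated here by a single global SSIM-style measure of brightness, contrast, and structure, whereas the paper's MSIM is presumably averaged over local image windows.

```python
import numpy as np

def l1_pose_loss(pred_pose, pose_label):
    """Least absolute deviation (L1) loss between predicted and labeled poses."""
    return np.mean(np.abs(np.asarray(pred_pose, float) - np.asarray(pose_label, float)))

def msim_loss(img_a, img_b, c1=1e-4, c2=9e-4):
    """SSIM-style dissimilarity between two grayscale images in [0, 1].

    Computed globally here for brevity (illustrative only); combines
    brightness (means), contrast (variances), and structure (covariance).
    """
    a = np.asarray(img_a, float)
    b = np.asarray(img_b, float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )
    return 1.0 - ssim  # high similarity -> low loss

def joint_loss(pred_pose, pose_label, input_img, rendered_img, alpha=1.0):
    """Two-part loss: L1 pose error plus a weighted image-similarity term.

    `rendered_img` stands for the image corresponding to the predicted pose.
    """
    return l1_pose_loss(pred_pose, pose_label) + alpha * msim_loss(input_img, rendered_img)
```

With an exact pose prediction and identical observed/rendered images, both terms vanish; the image term additionally pulls predictions toward poses whose rendered views match the observation, which is the stated motivation for the joint training strategy.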
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2022.03.045