Deep Convolutional Generative Adversarial Network for Inverse Kinematics of Self-Assembly Robotic Arm based on the Depth Sensor
Published in | IEEE Sensors Journal Vol. 23; no. 1; p. 1 |
---|---|
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2023 |
Subjects | |
Summary: | In this study, we propose a new Deep Convolutional Generative Adversarial Kinematics Network (DCGAKN) to establish the inverse kinematics of a self-assembly robotic arm. In our design, the robot system uses a depth sensor to detect objects with the You Only Look Once v4 (YOLOv4) algorithm, and the proposed DCGAKN trains an inverse-kinematics model through adversarial evolution between a generator and a discriminator, controlling the self-assembly robotic arm beyond the limited solution space of the training data and adapting to dynamic environments. The contributions of the proposed method are as follows. (1) The generator network is trained with few-shot training data to control the self-assembly robotic arm with high positional accuracy. (2) The generator is evaluated by the discriminator not only on training data but also through adaptive evolution. (3) The self-assembly robotic arm resembles a humanoid arm rather than a traditional robotic-arm structure, and the self-assembly model builds inverse kinematics without computing an inverse-kinematics matrix. (4) Objects are detected using depth information based on YOLOv4. (5) Through generator evolution, the activity range of the robotic arm is not limited to the range of the training data. Compared with CNN and DNN baselines, the proposed DCGAKN achieves an accuracy rate of 87% and a distance error of 1.26 cm. The source code of this work is at: https://github.com/YiZengHsieh/DCGAKN. |
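For context on what the DCGAKN generator learns to replace, the following is a minimal sketch of the classical closed-form inverse kinematics that contribution (3) avoids, shown for a hypothetical 2-link planar arm (the link lengths and the elbow-down convention are assumptions for illustration, not the paper's self-assembly arm):

```python
import math

# Hypothetical 2-link planar arm; link lengths are illustrative assumptions.
L1, L2 = 10.0, 8.0

def forward(theta1, theta2):
    """Forward kinematics: joint angles (rad) -> end-effector (x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y):
    """Closed-form inverse kinematics (elbow-down branch)."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    c2 = max(-1.0, min(1.0, c2))  # clamp against numerical drift
    theta2 = math.acos(c2)
    k1 = L1 + L2 * math.cos(theta2)
    k2 = L2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Round trip: solve IK for a reachable target, then verify via FK.
tx, ty = 12.0, 6.0
t1, t2 = inverse(tx, ty)
px, py = forward(t1, t2)
err = math.hypot(px - tx, py - ty)
```

A closed-form solution like this exists only for simple, fixed kinematic chains; the paper's point is that a generator trained adversarially can map target positions to joint commands for a reconfigurable self-assembly arm where no such fixed inverse-kinematics matrix is available.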
---|---|
ISSN: | 1530-437X 1558-1748 |
DOI: | 10.1109/JSEN.2022.3222332 |