Realization and analysis of giant-swing motion using Q-Learning
Published in | 2010 IEEE/SICE International Symposium on System Integration, pp. 365-372 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.12.2010 |
Subjects | |
Summary: Many research papers have reported studies on sports robots that realize giant-swing motion. However, almost all of these robots were controlled using trajectory-planning methods, and few have realized giant-swing motion by learning. In this study, we therefore attempted to construct a humanoid robot that realizes giant-swing motion by Q-learning, a reinforcement learning technique. A significant aspect of our study is that few robotic models were constructed beforehand; the robot learns giant-swing motion only through interaction with the environment during simulations. Our implementation faced several problems, such as imperfect perception of the velocity state and of the robot's posture caused by using only the arm angle. Nevertheless, our real robot realized giant-swing motion by averaging the Q values and by using, as rewards in the simulated learning data, the absolute value of the foot angle and the angular velocity of the arm angle; the sampling time was 250 ms. Furthermore, we investigated the feasibility of generalizing the learning to realize selective motion in the forward and backward rotational directions, and found that such generalization is feasible as long as it does not interfere with the robot's motions.
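The summary describes tabular Q-learning with rewards built from the absolute foot angle and the arm angular velocity, a 250 ms sampling time, and averaging of Q values learned in simulation before transfer to the real robot. The sketch below illustrates what such an update loop can look like; it is not taken from the paper, and the state/action discretization, learning parameters, reward weights, and the `simulate_step` placeholder are all assumptions made for illustration.

```python
# Minimal tabular Q-learning sketch for a swing robot (illustrative only).
# The 250 ms sampling time and the reward terms (absolute foot angle, arm
# angular velocity) come from the summary; every other name, constant, and
# the simulate_step placeholder are assumptions made for this sketch.
import numpy as np

N_STATES = 36           # arm angle discretized into 10-degree bins (assumption)
N_ACTIONS = 3           # e.g. bend backward, hold, bend forward (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
DT = 0.25               # sampling time of 250 ms reported in the summary

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))


def reward(foot_angle, arm_angular_velocity):
    """Combine the two reward quantities named in the summary (weights assumed)."""
    return abs(foot_angle) + 0.1 * abs(arm_angular_velocity)


def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))


def q_update(s, a, r, s_next):
    """One-step Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s_next]) - Q[s, a])


def simulate_step(state, action):
    """Placeholder for the physics simulation advancing the robot by DT seconds;
    returns (next_state, foot_angle, arm_angular_velocity)."""
    next_state = (state + action - 1) % N_STATES
    foot_angle = (next_state - N_STATES // 2) * np.pi / 18.0
    return next_state, foot_angle, 1.0


def average_q_tables(tables):
    """Average Q-tables from several simulation runs, loosely following the
    summary's 'averaging the Q value' before transfer to the real robot."""
    return np.mean(np.stack(tables), axis=0)


# Toy learning loop against the placeholder simulator.
state = 0
for _ in range(1000):
    action = choose_action(state)
    next_state, foot_angle, arm_vel = simulate_step(state, action)
    q_update(state, action, reward(foot_angle, arm_vel), next_state)
    state = next_state
```

In an actual implementation, the robot dynamics (arm and foot joints, contact with the bar) would replace `simulate_step`, integrated over each 250 ms control step.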
ISBN: 1424493161, 9781424493166
DOI: 10.1109/SII.2010.5708353