Comparison of Reinforcement Learning algorithms applied to the Cart Pole problem
Published in: arXiv.org
Main Authors:
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 03.10.2018
Summary: Designing optimal controllers continues to be challenging as systems are becoming complex and are inherently nonlinear. The principal advantage of reinforcement learning (RL) is its ability to learn from the interaction with the environment and provide optimal control strategy. In this paper, RL is explored in the context of control of the benchmark cartpole dynamical system with no prior knowledge of the dynamics. RL algorithms such as temporal-difference, policy gradient actor-critic, and value function approximation are compared in this context with the standard LQR solution. Further, we propose a novel approach to integrate RL and swing-up controllers.
ISSN: 2331-8422
DOI: 10.48550/arxiv.1810.01940
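
For readers unfamiliar with the temporal-difference approach mentioned in the summary, the sketch below shows a minimal tabular Q-learning agent balancing the cart pole. It is an illustrative example only, not the paper's implementation: it assumes the Gymnasium library and its CartPole-v1 environment, and the state discretization and hyperparameters are arbitrary choices made here for demonstration.

```python
# Minimal illustrative temporal-difference (Q-learning) sketch for cart-pole
# balancing. Not the paper's code; assumes Gymnasium's CartPole-v1 environment.
import numpy as np
import gymnasium as gym

BINS = (6, 6, 12, 12)                        # bins per observation dimension (assumption)
LOW = np.array([-2.4, -3.0, -0.21, -3.0])    # clipping bounds for discretization (assumption)
HIGH = -LOW

def discretize(obs):
    """Map the continuous 4-D observation to a tuple of bin indices."""
    ratios = (np.clip(obs, LOW, HIGH) - LOW) / (HIGH - LOW)
    return tuple((ratios * (np.array(BINS) - 1)).astype(int))

def train(episodes=2000, alpha=0.1, gamma=0.99,
          eps=1.0, eps_min=0.05, eps_decay=0.995):
    env = gym.make("CartPole-v1")
    q = np.zeros(BINS + (env.action_space.n,))   # tabular action-value function
    for _ in range(episodes):
        obs, _ = env.reset()
        s = discretize(obs)
        done = False
        while not done:
            # epsilon-greedy action selection
            a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(q[s]))
            obs, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            s2 = discretize(obs)
            # one-step temporal-difference (Q-learning) update
            target = r + gamma * np.max(q[s2]) * (not terminated)
            q[s + (a,)] += alpha * (target - q[s + (a,)])
            s = s2
        eps = max(eps_min, eps * eps_decay)   # decay exploration rate
    env.close()
    return q

if __name__ == "__main__":
    q_table = train()
```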