Comparison of Reinforcement Learning algorithms applied to the Cart Pole problem

Bibliographic Details
Published in: arXiv.org
Main Authors: Nagendra, Savinay; Podila, Nikhil; Ugarakhod, Rashmi; Koshy, George
Format: Paper / Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 03.10.2018

More Information
Summary: Designing optimal controllers continues to be challenging as systems become more complex and are inherently nonlinear. The principal advantage of reinforcement learning (RL) is its ability to learn from interaction with the environment and provide an optimal control strategy. In this paper, RL is explored in the context of controlling the benchmark cart-pole dynamical system with no prior knowledge of the dynamics. RL algorithms such as temporal-difference learning, policy-gradient actor-critic, and value function approximation are compared in this context with the standard LQR solution. Further, we propose a novel approach to integrate RL and swing-up controllers.
ISSN: 2331-8422
DOI: 10.48550/arxiv.1810.01940
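
Illustrative sketch: the summary compares temporal-difference methods against LQR on the cart-pole benchmark. The snippet below is a minimal, hedged example of one such temporal-difference method (tabular Q-learning on a discretized state space) using the Gymnasium CartPole-v1 environment; the bin counts, clipping bounds, and hyperparameters are assumptions chosen for brevity and are not the authors' settings or implementation.

# Minimal sketch: tabular Q-learning (a temporal-difference method) on CartPole-v1.
# Assumed discretization bounds and hyperparameters; not the paper's implementation.
import numpy as np
import gymnasium as gym

N_BINS = (6, 6, 12, 12)                      # bins per state dimension (assumed)
LOW = np.array([-2.4, -3.0, -0.21, -3.0])    # assumed clipping bounds for observations
HIGH = np.array([2.4, 3.0, 0.21, 3.0])

def discretize(obs):
    # Map a continuous observation to a tuple of bin indices.
    ratios = (np.clip(obs, LOW, HIGH) - LOW) / (HIGH - LOW)
    return tuple((ratios * (np.array(N_BINS) - 1)).astype(int))

env = gym.make("CartPole-v1")
q = np.zeros(N_BINS + (env.action_space.n,))  # tabular action-value function
alpha, gamma, eps = 0.1, 0.99, 0.1            # step size, discount, exploration rate

for episode in range(2000):
    obs, _ = env.reset()
    s = discretize(obs)
    done = False
    while not done:
        # epsilon-greedy action selection
        a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(q[s]))
        obs, r, terminated, truncated, _ = env.step(a)
        s_next = discretize(obs)
        # temporal-difference (Q-learning) update toward the bootstrapped target
        q[s + (a,)] += alpha * (r + gamma * np.max(q[s_next]) - q[s + (a,)])
        s = s_next
        done = terminated or truncated
env.close()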