Online Model-Free Reinforcement Learning for Output Feedback Tracking Control of a Class of Discrete-Time Systems With Input Saturation

Bibliographic Details
Published in: IEEE Access, Vol. 10, pp. 104966–104979
Main Authors: Al-Mahasneh, Ahmad Jobran; Anavatti, Sreenatha G.; Garratt, Matthew A.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022

Summary: In this paper, a new model-free Model-Actor (MA) reinforcement learning controller is developed for output feedback control of a class of discrete-time systems with input saturation constraints. The proposed controller is composed of two neural networks: a model network and an actor network. The model network predicts the output of the plant when a given control action is applied, while the actor network estimates the optimal control action required to drive the output to the desired trajectory. The main advantages of the proposed controller over previously proposed controllers are its ability to control systems without explicit knowledge of their dynamics and its ability to start learning from scratch, without any offline training. It also handles the input saturation constraints explicitly in the controller design. Comparison results with a previously published reinforcement learning output feedback controller and other controllers confirm the superiority of the proposed controller.
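The abstract only outlines the model/actor structure, so the sketch below shows one plausible reading of it in Python: an online loop in which a model network learns one-step output prediction from observed input-output data, and an actor network is updated by passing the tracking error back through the model, with a bounded tanh output scaling enforcing the saturation limit. The plant equation, network sizes, learning rates, and the limit U_MAX are illustrative assumptions, not values from the paper, and the update rules are a generic stochastic-gradient scheme rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
U_MAX = 1.0                 # assumed input-saturation limit (illustrative)
ETA_M, ETA_A = 0.05, 0.02   # learning rates chosen for this sketch

def init_net(n_in, n_h):
    # One hidden layer, tanh activations, scalar linear output.
    return {"W1": 0.2 * rng.standard_normal((n_h, n_in)), "b1": np.zeros(n_h),
            "W2": 0.2 * rng.standard_normal(n_h), "b2": 0.0}

def forward(net, x):
    h = np.tanh(net["W1"] @ x + net["b1"])
    return float(net["W2"] @ h + net["b2"]), h

def sgd_step(net, x, h, g, eta):
    # g = dL/d(output); one stochastic-gradient update of all weights.
    dh = g * net["W2"] * (1.0 - h ** 2)
    net["W2"] -= eta * g * h
    net["b2"] -= eta * g
    net["W1"] -= eta * np.outer(dh, x)
    net["b1"] -= eta * dh

def doutput_dinput(net, x):
    # Gradient of the scalar network output w.r.t. its input vector.
    h = np.tanh(net["W1"] @ x + net["b1"])
    return net["W1"].T @ (net["W2"] * (1.0 - h ** 2))

def plant(y, u):
    # Stand-in unknown plant; the controller never sees this equation.
    return 0.8 * np.sin(y) + 1.2 * u

model = init_net(2, 12)  # predicts y(k+1) from [y(k), u(k)]
actor = init_net(2, 12)  # maps [y(k), r(k+1)] to a pre-saturation action

y = 0.0
for k in range(2000):
    r_next = np.sin(0.01 * k)                  # reference trajectory
    # Actor proposes an action; tanh scaling enforces |u| <= U_MAX.
    a_raw, h_a = forward(actor, np.array([y, r_next]))
    u = U_MAX * np.tanh(a_raw)

    y_next = plant(y, u)                       # apply action to the plant

    # Model update: reduce the one-step output-prediction error.
    xm = np.array([y, u])
    y_hat, h_m = forward(model, xm)
    sgd_step(model, xm, h_m, y_hat - y_next, ETA_M)

    # Actor update: push the tracking error back through the model.
    dL_du = (y_hat - r_next) * doutput_dinput(model, xm)[1]
    g_a = dL_du * U_MAX * (1.0 - np.tanh(a_raw) ** 2)
    sgd_step(actor, np.array([y, r_next]), h_a, g_a, ETA_A)

    y = y_next

print(f"final tracking error: {abs(y - np.sin(0.01 * 1999)):.4f}")
```

Because the actor's output passes through a bounded tanh before reaching the plant, the saturation constraint is satisfied by construction rather than by clipping, which keeps the gradient used in the actor update well defined; this is one common way to handle input constraints explicitly in the controller design, consistent with what the abstract describes.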
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3210136