Actor-Critic Reinforcement Learning Control of Non-Strict Feedback Nonaffine Dynamic Systems

Bibliographic Details
Published in: IEEE Access, Vol. 7, pp. 65569–65578
Main Author: Bu, Xiangwei
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019

Summary: Most existing work on actor-critic reinforcement learning control (ARLC) focuses on continuous affine systems or discrete nonaffine systems. In this paper, I propose a new ARLC method for continuous nonaffine dynamic systems subject to unknown dynamics and external disturbances. A new input-to-state stable system is developed to establish an augmented dynamic system, from which a strict-feedback affine model convenient for control design is further obtained via a model transformation approach. The Nussbaum function is combined with fuzzy approximation to devise an actor network, whose tracking performance is further enhanced by strengthening signals generated by a fuzzy critic network. The stability of the closed-loop control system is guaranteed via Lyapunov synthesis. Finally, comparative simulation results are presented to verify the design.
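
To give a rough sense of the architecture the abstract describes, the Python sketch below combines Gaussian (fuzzy-style) basis functions for the actor and critic networks with a Nussbaum gain. A Nussbaum-type function N(z) satisfies lim sup (1/s) * integral from 0 to s of N(z) dz = +infinity and lim inf = -infinity as s grows, and N(z) = z^2 * cos(z) is a common choice. Everything else here (the plant, gains, and adaptation laws) is an illustrative assumption for demonstration, not the paper's exact design.

    import numpy as np

    # Minimal actor-critic sketch with fuzzy-style Gaussian basis functions
    # and a Nussbaum gain for unknown control direction. Illustrative only:
    # the plant, gains, and adaptation laws are assumptions, not the
    # paper's exact design.

    def basis(e, centers, width=0.5):
        # Normalized Gaussian membership vector (fuzzy-approximation style).
        phi = np.exp(-(e - centers) ** 2 / (2.0 * width ** 2))
        return phi / (phi.sum() + 1e-8)

    def nussbaum(zeta):
        # A standard Nussbaum-type function: N(z) = z^2 cos(z).
        return zeta ** 2 * np.cos(zeta)

    def plant(x, u, t):
        # Stand-in nonaffine plant with a bounded disturbance (assumption).
        return -x + u + 0.2 * np.tanh(u) + 0.1 * np.sin(2.0 * t)

    dt = 1e-3
    centers = np.linspace(-2.0, 2.0, 9)
    Wa = np.zeros_like(centers)  # actor network weights
    Wc = np.zeros_like(centers)  # critic network weights
    zeta = 0.0                   # Nussbaum argument
    x = 0.5                      # plant state

    for k in range(20000):
        t = k * dt
        yd = np.sin(t)                   # reference trajectory
        e = x - yd                       # tracking error
        phi = basis(e, centers)
        r = Wc @ phi                     # critic "strengthening" signal
        u = nussbaum(zeta) * (Wa @ phi)  # actor output through Nussbaum gain
        # Gradient-style adaptation laws with sigma-modification (illustrative).
        Wa += dt * (-5.0 * e * phi - 0.01 * Wa)
        Wc += dt * (-1.0 * e * r * phi - 0.01 * Wc)
        zeta += dt * e * (Wa @ phi)
        x += dt * plant(x, u, t)

    print("final tracking error:", x - np.sin(20000 * dt))

The Nussbaum gain lets the controller cope with a control direction that is unknown a priori: the argument zeta drifts until the sign and magnitude of N(zeta) stabilize the loop, which is the standard role this construction plays in adaptive designs of this kind.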
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2917141