Integral reinforcement learning-based guaranteed cost control for unknown nonlinear systems subject to input constraints and uncertainties
Published in | Applied mathematics and computation Vol. 408; p. 126336 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | Elsevier Inc, 01.11.2021 |
Summary: | •Firstly, a novel FTC strategy involving an event-triggered mechanism and the ISMC method is proposed for nonlinear systems subject to actuator failures. The proposed control method stabilizes the nonlinear systems while lowering the frequency of network communication of the controlled systems, thereby avoiding the waste of system resources.
•Secondly, inspired by Vrabie and Lewis (2009) [35] and Liu et al. (2020) [36], the proposed control method derives an event-based H∞ control policy by applying an RL algorithm without requiring a stable initial control law, which is necessary in the implementation of the event-triggered RL-based control approaches developed in Adib and Braun (2019) [25], Liu et al. (2019) [29], Wang et al. (2019) [33] and Vrabie et al. (2009) [34].
•Thirdly, since input constraints often arise in practical nonlinear control systems, and in contrast to Liu et al. (2019) [29], Liu et al. (2021) [37], Han et al. (2020) [38] and Liu et al. (2020) [39], the event-triggered H∞ control strategy is developed for nonlinear system dynamics subject to actuator failures.
This paper investigates the guaranteed cost control (GCC) problem for nonlinear systems subject to input constraints and disturbances by utilizing a reinforcement learning (RL) algorithm. Firstly, by establishing a modified Hamilton–Jacobi–Isaacs (HJI) equation, which is difficult to solve, a model-based policy iteration (PI) GCC algorithm is designed for input-constrained nonlinear systems with disturbances. Moreover, without requiring any knowledge of the system dynamics, by designing an auxiliary system with a control law and an auxiliary disturbance policy, an online model-free GCC approach is developed by utilizing the integral reinforcement learning (IRL) algorithm. To implement the proposed control algorithm, actor and disturbance neural networks (NNs) are constructed to approximate the optimal control input and the worst-case disturbance policy, while a critic NN is utilized to approximate the optimal value function. Further, a synchronization weight update law is developed to minimize the NN approximation residual errors. The asymptotic stability of the controlled systems is analyzed by applying Lyapunov's method. Finally, the effectiveness and feasibility of the proposed control method are verified by two nonlinear simulation examples. |
---|---|
ISSN: | 0096-3003 1873-5649 |
DOI: | 10.1016/j.amc.2021.126336 |
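The actor–critic IRL scheme summarized in the abstract can be sketched in a few lines. Everything concrete below is a hypothetical stand-in: the scalar dynamics `f`, `g`, `k`, the polynomial critic basis, the gains, and the simple normalized-gradient weight update are illustration choices only (the paper uses NN approximators with a synchronization weight update law). The sketch shows the core idea: the critic is fit to an *integral* Bellman residual over trajectory data, so no model of the drift dynamics is needed, and the constrained actor and worst-case disturbance policies are read off the critic gradient.

```python
import numpy as np

# Sketch of integral reinforcement learning (IRL) for a scalar system
#   dx/dt = f(x) + g(x)*u + k(x)*d
# with running cost r(x,u,d) = Q*x^2 + u^2 - gamma^2*d^2.
# All dynamics, basis functions, and gains are hypothetical illustrations.

def phi(x):
    """Polynomial critic basis: V(x) ~ w . phi(x)."""
    return np.array([x**2, x**4])

def dphi(x):
    """Gradient of the basis with respect to x."""
    return np.array([2 * x, 4 * x**3])

def irl_episode(w, x0, f, g, k, Q=1.0, gamma=5.0, lam=1.0,
                T=0.05, dt=0.001, steps=20, lr=0.5):
    """One IRL pass: roll the system forward, accumulate the running cost
    over windows of length T, and take normalized-gradient steps on the
    critic weights w. Actor and disturbance policies are derived from the
    critic gradient, mirroring the actor/disturbance-NN construction."""
    x = x0
    for _ in range(steps):
        x_start, cost_int = x, 0.0
        for _ in range(int(T / dt)):
            grad_v = dphi(x) @ w
            # Constrained actor: tanh saturation keeps |u| <= lam.
            u = -lam * np.tanh(g(x) * grad_v / (2.0 * lam))
            # Worst-case disturbance policy from the same critic.
            d = k(x) * grad_v / (2.0 * gamma**2)
            cost_int += (Q * x**2 + u**2 - gamma**2 * d**2) * dt
            x = x + (f(x) + g(x) * u + k(x) * d) * dt
        # Integral Bellman residual: V(x_start) - V(x_end) should match the
        # accumulated cost; note no model of f appears in this update.
        feat = phi(x_start) - phi(x)
        e = w @ feat - cost_int
        w = w - lr * e * feat / (1.0 + feat @ feat)
    return w, x

# Hypothetical stable dynamics for illustration.
f = lambda x: -x
g = lambda x: 1.0
k = lambda x: 0.2

w = np.zeros(2)
for _ in range(30):
    w, x_final = irl_episode(w, x0=1.0, f=f, g=g, k=k)
```

After the episodes, `w` holds a positive-definite critic fit along the observed trajectories, and the saturated policy `u` it induces adds damping; in the paper this role is played by the critic NN together with the synchronized actor and disturbance NN updates.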