Integral reinforcement learning‐based optimal tracking control for uncertain nonlinear systems under input constraint and specified performance constraints

Bibliographic Details
Published in: International Journal of Robust and Nonlinear Control, Vol. 34, No. 13, pp. 8802–8824
Main Authors: Chang, Ru; Liu, Zhi‐Meng; Li, Xiao‐Bin; Sun, Chang‐Yin
Format: Journal Article
Language: English
Published: Bognor Regis: Wiley Subscription Services, Inc., 10.09.2024

Summary: This article addresses the optimal tracking control problem with prescribed performance for uncertain nonlinear systems subject to an input constraint and unknown disturbances. First, a fixed‐time monotonic convergence function is introduced to constrain the tracking error, and a nonlinear mapping technique is employed to transform the constrained error into an unconstrained variable, so that the fixed‐time output tracking problem reduces to ensuring boundedness of the transformed variable. With the aid of a nonquadratic cost function, the input constraint is encoded into the optimization problem. To handle the unknown disturbances, an auxiliary system and an auxiliary disturbance policy are constructed, and the optimal control problem is formulated as a two‐player zero‐sum game. Moreover, a Hamilton–Jacobi–Isaacs (HJI) equation associated with this nonquadratic zero‐sum game is established to characterize the optimal control and the worst‐case disturbance policy. Subsequently, to avoid requiring knowledge of the system dynamics, three neural network approximators (actor, critic, and disturbance), tuned online and simultaneously to approximate the solution of the HJI equation, are constructed based on the integral reinforcement learning algorithm. Theoretical analysis shows that the reconstructed error system states and the weight estimation errors are semi‐globally uniformly ultimately bounded. Finally, a simulation study verifies the effectiveness of the proposed control strategy.
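For orientation, the following is a minimal sketch of the standard forms these ingredients typically take in the prescribed‐performance and constrained‐input optimal control literature; the performance function \rho(t), the logarithmic mapping, the input bound \lambda, the weight R, and the reinforcement interval T are illustrative assumptions, not expressions taken from the article.

% Prescribed performance: the tracking error e(t) is required to stay inside a shrinking envelope
%   -\rho(t) < e(t) < \rho(t),
% where \rho(t) > 0 decreases monotonically and settles at its terminal value within a fixed time.
\[
  s(t) = \ln\frac{\rho(t) + e(t)}{\rho(t) - e(t)},
  \qquad
  e(t) = \rho(t)\,\tanh\!\bigl(s(t)/2\bigr),
\]
% so boundedness of the unconstrained variable s(t) keeps e(t) inside the envelope.
% A nonquadratic integrand (Abu-Khalaf--Lewis type) is one common way to encode the bound |u| <= \lambda:
\[
  U(u) = 2\lambda \int_{0}^{u} \tanh^{-1}\!\bigl(v/\lambda\bigr)^{\top} R\,\mathrm{d}v .
\]
% Integral reinforcement learning evaluates the zero-sum-game value over an interval of length T,
% which removes the drift dynamics from the Bellman condition:
\[
  V\bigl(x(t)\bigr) = \int_{t}^{t+T} \Bigl( Q(x) + U(u) - \gamma^{2} d^{\top} d \Bigr)\,\mathrm{d}\tau
  + V\bigl(x(t+T)\bigr).
\]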
ISSN: 1049-8923; 1099-1239
DOI: 10.1002/rnc.7415