A Neural-Reinforcement-Learning-Based Guaranteed Cost Control for Perturbed Tracking Systems
Published in: IEEE Transactions on Artificial Intelligence, Vol. 5, No. 6, pp. 3205-3217
Main Authors:
Format: Journal Article
Language: English
Published: IEEE, 01.06.2024
Summary: Artificial intelligence (AI)-based learning control plays a critical role in the evolution of intelligent control, particularly for complex network systems. Traditional intelligent control methods assume that the agent can learn from safe data in its tasks. However, many application scenarios involve perturbations caused by noise and/or malicious attacks, which render the received data unreliable and may cause the learning process to fail. In this article, we focus on developing an intelligent guaranteed cost control method for nonlinear tracking systems subject to unknown matched and mismatched perturbations. By developing appropriate cost functions for the nominal plants, we transform the robust tracking control problem into a stabilization design for both kinds of perturbations. Explicit proofs are provided to show the equivalence of the transformation in these two situations, respectively. Then, a neural-reinforcement-learning-based algorithm with guaranteed cost control is developed to learn the cost functions and optimal control laws adaptively. The designed method also guarantees the boundedness of a given cost function. Three simulation studies are provided to demonstrate the effectiveness of the proposed method and validate the theoretical analysis.
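The abstract does not reproduce the paper's formulation, but the idea of recasting robust tracking as optimal stabilization can be sketched with a generic guaranteed-cost value function; the symbols below are illustrative placeholders and are not taken from the article:

$$ V\big(e(t)\big) = \int_{t}^{\infty} \Big( \rho\big(e(\tau)\big) + e^{\top}(\tau)\, Q\, e(\tau) + u^{\top}(\tau)\, R\, u(\tau) \Big)\, d\tau $$

Here $e$ denotes the tracking error of the nominal plant, $u$ the feedback control, $Q$ and $R$ are positive definite weight matrices, and $\rho(e)$ is a positive semidefinite term chosen to upper-bound the effect of the perturbation. Under such an assumption, the control law minimizing this nominal cost also stabilizes the perturbed tracking error, and the incurred cost stays bounded by $V(e(0))$, which is the sense in which the cost is "guaranteed"; a neural critic is then typically used to approximate $V$ and derive the control law adaptively.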
ISSN: 2691-4581
DOI: 10.1109/TAI.2023.3346334