Model-Free H∞ Prescribed Performance Control of Adaptive Cruise Control Systems via Policy Learning

Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, pp. 1-11
Main Authors: Zhao, Jun; Jia, Bingyi; Zhao, Ziliang
Format: Journal Article
Language: English
Published: IEEE, 2024

Summary: Model-free control does not require precise knowledge of the system dynamics; instead, it meets performance requirements by directly designing a control law. This makes it particularly suitable for adaptive cruise control (ACC) systems in cyber-physical system (CPS) environments, as it can effectively handle dynamic uncertainty and external disturbances. This paper therefore develops a novel online adaptive H∞ control scheme for ACC systems. The main contribution lies in achieving model-free learning via a homotopy strategy, which removes the need for prior model knowledge and for an initial stabilizing control policy, a significant challenge in existing policy learning and ACC studies. This resolves a long-standing issue and broadens the applicability of the results. To this end, a continuous-time ACC system with unknown dynamics is first constructed. Then, building on a designed offline policy learning algorithm, a novel online policy algorithm driven by system input-output data is introduced to solve the Riccati equation without any model information. Finally, experimental results demonstrate that the proposed control method significantly improves system performance; in particular, computational speed improves by about 45% compared with a classical reinforcement learning (RL) algorithm.
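The policy learning loop the summary describes alternates policy evaluation (solving a Lyapunov equation for the current gain) with policy improvement, converging to the Riccati solution. The paper's algorithm is model-free, H∞ (with a worst-case disturbance term), and homotopy-initialized; as a purely illustrative sketch of the underlying iteration, the following is a model-based, scalar, LQR special case (disturbance attenuation taken to infinity) using Kleinman-style policy iteration. All names and the numerical example are assumptions for illustration, not the authors' algorithm.

```python
def policy_iteration_scalar(a, b, q, r, k0, iters=20):
    """Kleinman-style policy iteration for the scalar continuous-time
    algebraic Riccati equation  2*a*p - (b*p)**2 / r + q = 0.

    k0 must stabilize the closed loop, i.e. a - b*k0 < 0; the paper's
    homotopy strategy exists precisely to avoid needing such a k0.
    """
    k = k0
    for _ in range(iters):
        # Policy evaluation: solve the scalar Lyapunov equation
        # 2*(a - b*k)*p + q + r*k**2 = 0 for the cost p of policy k.
        p = -(q + r * k * k) / (2.0 * (a - b * k))
        # Policy improvement: k <- b*p / r.
        k = b * p / r
    return p, k

# Illustrative system a=b=q=r=1; the CARE root is p* = 1 + sqrt(2).
p, k = policy_iteration_scalar(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

The iteration converges quadratically, which is why a few sweeps suffice; the model-free variant in the paper replaces the explicit Lyapunov solve with one estimated from input-output data.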
ISSN: 1524-9050
eISSN: 1558-0016
DOI: 10.1109/TITS.2024.3485103