Adaptive Reconfigurable Learning Algorithm for Robust Optimal Longitudinal Motion Control of Unmanned Aerial Vehicles
Published in: Algorithms, Vol. 18, No. 4, p. 180
Main Authors: , ,
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.04.2025
Summary: This study presents the formulation and verification of a novel online adaptive reconfigurable learning control algorithm (RLCA) for improved longitudinal motion control and disturbance compensation in Unmanned Aerial Vehicles (UAVs). The proposed algorithm is formulated to track the optimal trajectory yielded by a baseline Linear Quadratic Integral (LQI) controller, while also leveraging reconfigurable dissipative and anti-dissipative actions to enhance adaptability under varying system dynamics. The anti-dissipative actor delivers an aggressive control effort to compensate for large errors, whereas the dissipative actor minimizes control energy expenditure under low-error conditions to improve control economy. Both actors are augmented with state-error-driven hyperbolic scaling functions that autonomously reconfigure the associated learning gains to mitigate disturbances and uncertainties, improving tracking precision and disturbance rejection. By integrating the reconfigurable dissipative and anti-dissipative actions in its formulation, the proposed RLCA adaptively steers the control trajectory as the state conditions vary. The enhanced performance of the proposed RLCA in controlling the longitudinal motion of a small UAV model is validated via customized MATLAB simulations. The simulation results demonstrate the proposed control algorithm's efficacy in achieving rapid error convergence, disturbance rejection, and seamless adaptation to dynamic variations, compared with the baseline LQI controller.
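The abstract describes the core mechanism: a baseline LQI tracking command augmented by dissipative and anti-dissipative corrections whose learning gains are reconfigured by state-error-driven hyperbolic scaling functions. The following Python sketch illustrates that idea in a minimal form, assuming a tanh-based scaling law and a toy two-state longitudinal model; the gain values, scaling constants, and model are illustrative assumptions and do not reproduce the authors' formulation.

```python
import numpy as np

# Illustrative sketch only: gain values, scaling constants, and the state model
# are assumptions for demonstration, not the parameters used in the paper.

def hyperbolic_gain(error, k_min=0.1, k_max=2.0, beta=4.0):
    """State-error-driven hyperbolic scaling: small gain near zero error
    (dissipative, energy-saving action), large gain for large errors
    (anti-dissipative, aggressive correction)."""
    return k_min + (k_max - k_min) * np.tanh(beta * abs(error))

def rlca_control(x, x_ref, K_lqi, xi):
    """One control step: baseline LQI tracking command plus a
    reconfigurable learning correction driven by the tracking error."""
    e = x_ref - x                      # state tracking error
    u_base = K_lqi @ np.append(e, xi)  # baseline LQI action on [error; integral]
    k_adapt = hyperbolic_gain(e[0])    # learning gain reconfigured by pitch error
    u_learn = k_adapt * e[0]           # dissipative/anti-dissipative correction
    return u_base + u_learn

# Toy usage with a 2-state longitudinal model (e.g., pitch angle and pitch rate)
K_lqi = np.array([1.5, 0.4, 0.8])      # assumed LQI gains [theta, q, integral]
x, xi = np.array([0.0, 0.0]), 0.0
x_ref = np.array([0.1, 0.0])           # 0.1 rad pitch command
for _ in range(5):
    u = rlca_control(x, x_ref, K_lqi, xi)
    xi += (x_ref[0] - x[0]) * 0.01     # integral-of-error update (dt = 0.01 s)
    x += 0.01 * np.array([x[1], u])    # crude double-integrator propagation
```

The point of the sketch is the gain schedule: near zero error the correction term nearly vanishes (low control energy), while large errors push the gain toward its upper bound, mimicking the aggressive anti-dissipative action described in the abstract.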
ISSN: 1999-4893
DOI: 10.3390/a18040180