Operational Optimal Tracking Control for Industrial Multirate Systems Subject to Unknown Disturbances

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 54, no. 1, pp. 180-192
Main Authors: Zhang, Lingzhi; Xie, Lei; Dai, Wei; Lu, Shan; Su, Hongye
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2024
Summary: It is common for industrial processes to employ a hierarchical control structure involving a basic loop process and an operation loop process with two timescales. However, the control system suffers from a further multirate challenge: control and sampling rates may differ even within a single loop. Additionally, the complex underlying mechanism of the operation loop further complicates accurate modeling of its dynamics, especially in the presence of unknown external disturbances, making it difficult to obtain the desired control performance. To overcome these problems, this article develops a novel operational optimal tracking control method for a class of multirate systems subject to unknown disturbances. To this end, a lifting technique is integrated with a general model predictive controller for the basic loop process, aimed at handling the asynchronism phenomenon and achieving loop setpoint tracking control. Furthermore, a nonlinear disturbance observer is used to estimate the unknown external disturbance of the operation loop process. In this way, offset-free tracking control of the system, along with loop setpoint optimization, can be achieved using a policy iteration reinforcement learning algorithm. The convergence of the proposed method is analyzed, and tangible improvements are verified by simulations.
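The policy iteration scheme named in the summary can be illustrated with a minimal sketch in the simplest setting where it provably converges: a discrete-time linear-quadratic regulator, alternating policy evaluation (a Lyapunov solve for the current gain) with policy improvement. This is a generic textbook instance, not the paper's operational-optimization formulation; the system matrices below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Hypothetical open-loop-stable system (not from the paper), so K = 0
# is an admissible initial stabilizing policy.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state cost
R = np.array([[1.0]])    # input cost

K = np.zeros((1, 2))     # initial stabilizing feedback gain
for _ in range(50):
    Acl = A - B @ K
    # Policy evaluation: solve Acl' P Acl - P + (Q + K' R K) = 0
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # Policy improvement: K <- (R + B' P B)^{-1} B' P A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The iteration converges to the solution of the discrete algebraic
# Riccati equation, i.e., the optimal value function and gain.
P_are = solve_discrete_are(A, B, Q, R)
print(np.allclose(P, P_are, atol=1e-6))
```

Starting from any stabilizing gain, each improved policy remains stabilizing and the value matrices decrease monotonically to the Riccati solution, which is the convergence property the paper's analysis extends to its multirate, disturbance-affected setting.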
ISSN: 2168-2216, 2168-2232
DOI: 10.1109/TSMC.2023.3305245