Reinforcement Learning based Data-driven Optimal Control Strategy for Systems with Disturbance


Bibliographic Details
Published in: 2023 IEEE 12th Data Driven Control and Learning Systems Conference (DDCLS), pp. 567 - 572
Main Authors: Fan, Zhong-Xin; Li, Shihua; Liu, Rongjie
Format: Conference Proceeding
Language: English
Published: IEEE, 12.05.2023
Summary: This paper proposes a partially model-free optimal control strategy for a class of continuous-time systems in a data-driven way. Although a series of optimal control methods have achieved superior performance, the following challenges remain: (i) a controller designed for the nominal system has difficulty coping with sudden disturbances; (ii) feedback control depends heavily on the system dynamics and generally requires full state information. This paper presents a novel composite control method that combines output-feedback reinforcement learning with an input-output disturbance observer to address these two challenges. First, an output-feedback policy iteration (PI) algorithm is given to acquire the feedback gain iteratively. Simultaneously, the observer continuously provides estimates of the disturbance. Neither the system dynamics nor the state information needs to be known in advance, so the approach offers a higher degree of robustness and better prospects for practical implementation. Finally, an example demonstrates the effectiveness of the proposed controller.
ISSN: 2767-9861
DOI: 10.1109/DDCLS58216.2023.10167230
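The summary describes acquiring the feedback gain iteratively via policy iteration (PI). The paper's version is output-feedback and data-driven, but its backbone is the classical model-based Kleinman iteration for continuous-time LQR, which alternates policy evaluation (a Lyapunov equation) with policy improvement. The sketch below is that classical iteration, not the paper's algorithm; the example system, gains, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def policy_iteration_lqr(A, B, Q, R, K0, iters=20):
    """Kleinman-style policy iteration for continuous-time LQR (model-based sketch).

    Requires an initial stabilizing gain K0. Each step solves the Lyapunov
    equation for the cost matrix P of the current gain, then improves the gain.
    """
    K = K0
    for _ in range(iters):
        Ak = A - B @ K  # closed-loop matrix under the current policy
        # Policy evaluation: Ak^T P + P Ak + Q + K^T R K = 0
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K = R^{-1} B^T P
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Illustrative open-loop-stable system, so K0 = 0 is stabilizing
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = policy_iteration_lqr(A, B, Q, R, K0=np.zeros((1, 2)))

# Cross-check against the direct algebraic Riccati equation solution
P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P, P_are))  # True
```

The data-driven variant in the paper replaces the Lyapunov solve, which needs A and B, with equations built from measured input-output data, so the same fixed point is reached without a model.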