H∞ Static Output-Feedback Control Design for Discrete-Time Systems Using Reinforcement Learning


Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 2, pp. 396–406
Main Authors: Valadbeigi, Amir Parviz; Sedigh, Ali Khaki; Lewis, F. L.
Format: Journal Article
Language: English
Published: IEEE, United States, 01.02.2020

More Information
Summary: This paper provides necessary and sufficient conditions for the existence of the static output-feedback (OPFB) solution to the H∞ control problem for linear discrete-time systems. It is shown that the solution of the static OPFB H∞ control is a Nash equilibrium point. Furthermore, a Q-learning algorithm is developed to find the H∞ OPFB solution online using data measured along the system trajectories and without knowing the system matrices. This is achieved by solving a game algebraic Riccati equation online and using the measured data. A simulation example shows the effectiveness of the proposed method.
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2019.2901889