Global convergence for adaptive one-step-ahead optimal controllers based on input matching

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 26, no. 6, pp. 1269-1273
Main Authors: Goodwin, G.; Johnson, C.; Sin, K. S.
Format: Journal Article
Language: English
Published: IEEE, 1 December 1981
ISSN: 0018-9286
DOI: 10.1109/TAC.1981.1102813

More Information
Summary: This paper establishes global convergence for adaptive one-step-ahead optimal controllers applied to a class of linear discrete-time single-input single-output systems. The class of systems includes all stable systems whether they are minimum phase or not, all minimum phase systems whether they are stable or not, and some unstable nonminimum phase systems. The key substantive assumption is that the one-step-ahead optimal controller designed using the true system parameters leads to a stable closed-loop system. Subject to this natural restriction, it is shown that a simple adaptive control algorithm based on input matching is globally convergent in the sense that the system inputs and outputs remain bounded for all time and the input converges to the one-step-ahead optimal input. Both deterministic and stochastic cases are treated.
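The one-step-ahead idea summarized above can be sketched for a simple case: at each step the controller picks the input that would drive the predicted next output to the setpoint, using current parameter estimates, while a recursive estimator updates those estimates from the prediction error. The following Python sketch is illustrative only; the first-order plant y(t+1) = a·y(t) + b·u(t), the projection-type estimator, and all numerical values are assumptions for this sketch, not the paper's exact input-matching algorithm.

```python
import numpy as np

# Illustrative sketch of an adaptive one-step-ahead controller for a
# first-order plant y(t+1) = a*y(t) + b*u(t). The plant, the
# projection-type estimator, and all numbers are assumptions for
# illustration, not the paper's exact input-matching algorithm.

a_true, b_true = 0.5, 1.0          # true parameters, unknown to the controller
theta_hat = np.array([0.0, 0.5])   # parameter estimates [a_hat, b_hat]
y, y_star = 0.0, 1.0               # plant output and constant setpoint

for t in range(200):
    a_hat, b_hat = theta_hat
    u = (y_star - a_hat * y) / b_hat   # one-step-ahead input from current estimates
    phi = np.array([y, u])             # regressor at time t
    y = a_true * y + b_true * u        # plant advances to y(t+1)
    e = y - theta_hat @ phi            # one-step-ahead prediction error
    theta_hat = theta_hat + phi * e / (1.0 + phi @ phi)  # projection update

print(round(y, 3))  # output tracks the setpoint as the estimates settle
```

Because this assumed plant is stable and minimum phase, it sits inside the class covered by the paper's convergence result: the inputs and outputs stay bounded and the output converges to the setpoint even though a and b are never identified exactly.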