Online Data-Driven Inverse Reinforcement Learning for Deterministic Systems
| Published in | 2022 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 884-889 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 04.12.2022 |
| DOI | 10.1109/SSCI51031.2022.10022226 |
| Summary | In this paper, we present an online data-driven inverse reinforcement learning (IRL) method for estimating the cost function of continuous-time linear and nonlinear deterministic systems from state and input measurements. Our approach utilizes Bellman error, obtained from integral reinforcement learning, with error derived from the closed-form equation of an optimal controller as the performance metric to develop a recursive IRL technique. Our proposed scheme does not require the time derivative of states or the drift dynamics of a system. We describe a Lyapunov-based analysis to show the ultimate boundedness of the estimation errors. Simulation studies demonstrate the effectiveness of the proposed method. |
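The two ingredients named in the summary (an error from the optimal controller's closed form, plus an integral Bellman error that is linear in the unknown cost weights) can be sketched on a scalar linear-quadratic problem. This is a minimal batch least-squares illustration under assumed numbers, not the paper's recursive online algorithm: the plant `x_dot = a*x + b*u`, the values of `a`, `b`, `r`, `q_true`, and the window length are all hypothetical choices for the sketch. Step 1 recovers the value-function parameter `p` from the controller's closed form `u = -(b*p/r)*x`; step 2 plugs `p` into the integral Bellman equation, which is linear in the unknown state cost `q` and, as the summary notes, needs neither state derivatives nor the drift term `a`.

```python
import math

# All numbers below are illustrative assumptions, not taken from the paper.
# Scalar plant x_dot = a*x + b*u with cost = integral of (q*x^2 + r*u^2) dt;
# b and r are known to the learner, q is the unknown cost weight to recover.
a, b, r = 1.0, 1.0, 1.0
q_true = 2.0

# Scalar Riccati equation 2*a*p - (b**2/r)*p**2 + q = 0, stabilizing root.
# Used only to generate optimal demonstration data, not by the estimator.
p_true = (2*a + math.sqrt(4*a**2 + 4*(b**2/r)*q_true)) / (2*b**2/r)
k = b*p_true/r                       # optimal feedback u = -k*x

# Log (x, u) measurements from the optimal closed loop (Euler simulation).
dt, steps = 1e-3, 2000
xs = [1.0]
for _ in range(steps):
    xs.append(xs[-1] + dt*((a - b*k)*xs[-1]))
us = [-k*x for x in xs]

# Step 1: the optimal controller satisfies u = -(b*p/r)*x, so p follows by
# least squares on the measured (x, u) pairs -- no dynamics model needed.
p_hat = sum(-r*u*x for u, x in zip(us, xs)) / sum(b*x*x for x in xs)

# Step 2: integral Bellman equation over windows [t, t+T]:
#   p*(x(t)^2 - x(t+T)^2) = integral of (q*x^2 + r*u^2) dt,
# linear in q; it uses neither x_dot nor the drift term a.
win = 100                            # window length T = win*dt
num = den = 0.0
for s in range(0, steps - win, win):
    ix2 = sum(xs[j]**2 for j in range(s, s + win))*dt   # integral of x^2
    iu2 = sum(us[j]**2 for j in range(s, s + win))*dt   # integral of u^2
    y = p_hat*(xs[s]**2 - xs[s + win]**2) - r*iu2
    num += y*ix2
    den += ix2*ix2
q_hat = num/den
print(p_hat, q_hat)                  # q_hat near 2 up to discretization error
```

A recursive variant would replace the batch sums with running (e.g. recursive least-squares) updates over incoming windows, which is what makes a scheme like the one summarized above online; the Lyapunov-based analysis the summary mentions is what bounds the resulting estimation errors.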