Online Data-Driven Inverse Reinforcement Learning for Deterministic Systems

Bibliographic Details
Published in: 2022 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 884-889
Main Authors: Asl, Hamed Jabbari; Uchibe, Eiji
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2022
DOI: 10.1109/SSCI51031.2022.10022226

Summary: In this paper, we present an online data-driven inverse reinforcement learning (IRL) method for estimating the cost function of continuous-time linear and nonlinear deterministic systems from state and input measurements. Our approach uses the Bellman error obtained from integral reinforcement learning, together with the error derived from the closed-form equation of the optimal controller, as the performance metric for a recursive IRL technique. The proposed scheme requires neither the time derivative of the states nor the drift dynamics of the system. A Lyapunov-based analysis establishes the ultimate boundedness of the estimation errors, and simulation studies demonstrate the effectiveness of the method.
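To illustrate the two error signals the summary mentions, the following is a minimal scalar sketch (all system values and variable names here are illustrative assumptions, not the authors' code): for dx/dt = a x + b u with cost weight q on x^2 and known weight r on u^2, both the integral Bellman error, P(x_t^2 - x_{t+T}^2) - q ∫ x^2 dτ = ∫ r u^2 dτ, and the optimal-controller closed form, u = -(b/r) P x, are linear in the unknowns (P, q), so stacking both as regression rows and solving a least-squares problem recovers the cost weight from state and input data alone, with no drift term a and no state derivatives used in the fit.

```python
import numpy as np

# Illustrative scalar IRL sketch: recover the cost weight q of
# J = integral of (q*x^2 + r*u^2) dt for dx/dt = a*x + b*u, combining
#  (1) integral Bellman-error rows: P*(x_t^2 - x_{t+T}^2) - q*int(x^2) = int(r*u^2)
#  (2) controller closed-form rows: b*x*P = -r*u   (from u = -(b/r)*P*x)
# Both are linear in theta = [P, q]; a least-squares fit suffices.

a, b, r = -1.0, 1.0, 1.0        # b and r are assumed known; a is never used in the fit
q_true, P_true = 3.0, 1.0       # Riccati check: 2*a*P - (b**2/r)*P**2 + q = 0 holds
k = b * P_true / r              # demonstrations are generated by the optimal u = -k*x

dt, T_int, t_end = 1e-3, 0.1, 1.0
n_steps = int(t_end / dt)
x = np.empty(n_steps + 1)
x[0] = 2.0
for n in range(n_steps):        # Euler simulation of the expert closed loop
    u = -k * x[n]
    x[n + 1] = x[n] + dt * (a * x[n] + b * u)
u_traj = -k * x                 # recorded input measurements

def trapezoid(y, h):
    """Trapezoidal integral of sampled signal y with step h."""
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

rows, rhs = [], []
stride = int(T_int / dt)
for s in range(0, n_steps - stride, stride):
    e = s + stride
    ix2 = trapezoid(x[s:e + 1] ** 2, dt)              # int x^2 over [t, t+T]
    iu2 = trapezoid(r * u_traj[s:e + 1] ** 2, dt)     # int r*u^2 over [t, t+T]
    rows.append([x[s] ** 2 - x[e] ** 2, -ix2])        # Bellman-error row
    rhs.append(iu2)
    rows.append([b * x[s], 0.0])                      # controller-equation row
    rhs.append(-r * u_traj[s])

theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
P_hat, q_hat = theta
print(f"P estimate: {P_hat:.3f} (true 1.0), q estimate: {q_hat:.3f} (true 3.0)")
```

Note the role of the second row type: along a single optimal trajectory the Bellman-error rows alone are rank-deficient (P and q are not separately identifiable), and the controller-equation rows pin down P, after which the Bellman rows determine q. This mirrors why the paper combines both error signals rather than using the Bellman error alone.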