Meta-inverse Reinforcement Learning Method Based on Relative Entropy

Bibliographic Details
Published in: Ji suan ji ke xue (Computer Science), Vol. 48, No. 9, pp. 257-263
Main Authors: WU Shao-bo, FU Qi-ming, CHEN Jian-ping, WU Hong-jie, LU You
Format: Journal Article
Language: Chinese
Published: Editorial Office of Computer Science, 01.09.2021
Summary: Aiming at the problem that traditional inverse reinforcement learning algorithms are slow, imprecise, or even unsolvable when solving the reward function owing to insufficient expert demonstration samples and unknown state transition probabilities, a meta-inverse reinforcement learning method based on relative entropy is proposed. Using meta-learning, a learning prior for the target task is constructed by integrating a set of meta-training tasks that follow the same distribution as the target task. In the model-free reinforcement learning setting, a relative entropy probability model is used to model the reward function and is combined with this prior, so that the reward function of the target task can be solved quickly from a small number of target-task samples. The proposed algorithm and the RE IRL algorithm are applied to the classic Gridworld and Object World problems. Experiments show that the proposed algorithm can still recover the reward function well even when the target task lacks a sufficient number of expert demonstration samples.
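The abstract only outlines the method at a high level. The sketch below illustrates one plausible reading: a linear reward r(s,a) = theta . phi(s,a), the importance-weighted dual gradient of relative entropy IRL (Boularias et al., the RE IRL baseline the abstract names), and a meta prior formed by simply averaging the reward weights fitted on the meta-training tasks. The function names and the averaging rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def re_irl_gradient(theta, expert_feats, sample_feats):
    """Dual-gradient direction for relative entropy IRL.

    theta        : current reward weights, shape (d,)
    expert_feats : mean feature vector of the expert demonstrations, shape (d,)
    sample_feats : per-trajectory feature sums gathered under a baseline
                   policy (model-free: no transition model needed), shape (n, d)
    """
    logits = sample_feats @ theta
    logits -= logits.max()              # numerical stability
    w = np.exp(logits)
    w /= w.sum()                        # importance weights, prop. to exp(theta . phi(tau))
    model_feats = w @ sample_feats      # reweighted feature expectation
    return expert_feats - model_feats   # ascent direction on the dual objective

def meta_prior(task_thetas):
    """Hypothetical prior: average the reward weights fitted on the
    meta-training tasks (the abstract does not specify the aggregation)."""
    return np.mean(task_thetas, axis=0)

def solve_target_task(prior, expert_feats, sample_feats, lr=0.1, steps=200):
    """Warm-start from the meta prior, then run a few dual-ascent steps."""
    theta = prior.copy()
    for _ in range(steps):
        theta += lr * re_irl_gradient(theta, expert_feats, sample_feats)
    return theta
```

The intended effect, per the abstract, is that warm-starting theta from the prior lets the dual ascent reach a usable reward estimate from only a few target-task demonstrations, where a cold start would need far more.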
ISSN: 1002-137X
DOI: 10.11896/jsjkx.200700044