Using State Predictions for Value Regularization in Curiosity Driven Deep Reinforcement Learning

Bibliographic Details
Published in: 2018 IEEE 30th International Conference on Tools with Artificial Intelligence (ICTAI), pp. 25-29
Main Authors: Brunner, Gino; Fritsche, Manuel; Richter, Oliver; Wattenhofer, Roger
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2018
Summary: Learning in sparse reward settings remains a challenge in Reinforcement Learning and is often addressed with intrinsic rewards. One promising strategy is inspired by human curiosity and requires the agent to learn to predict the future. In this paper, a curiosity-driven agent is extended to use these predictions directly for training. To achieve this, the agent predicts the value function of the next state at every time step. The consistency of this prediction with the current value function is then measured and used as a regularization term in the algorithm's loss function. Experiments were conducted on grid-world environments as well as on a 3D navigation task, both with sparse rewards. In the grid-world environments, the extended agent learns significantly faster than the baselines.
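
As a rough sketch of the idea described in the summary (not taken from the paper itself; the squared-error form and the weight λ are assumptions made here for illustration), the regularized loss could take a form such as

    L_total = L_RL + λ * ( V(s_{t+1}) - V_hat(s_{t+1} | s_t) )^2

where V_hat(s_{t+1} | s_t) denotes the value of the next state as predicted from the current state, V(s_{t+1}) is the current value estimate of the state actually reached, L_RL is the underlying reinforcement learning loss, and λ controls the strength of the consistency regularization.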
ISSN: 2375-0197
DOI: 10.1109/ICTAI.2018.00015