Reinforcement learning in multi-dimensional state-action space using random rectangular coarse coding and Gibbs sampling


Bibliographic Details
Published in: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 88 - 95
Main Author: Kimura, H.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.10.2007
ISBN: 9781424409112; 142440911X
ISSN: 2153-0858
DOI: 10.1109/IROS.2007.4399401

Summary: This paper presents a coarse coding technique and an action selection scheme for reinforcement learning (RL) in multi-dimensional, continuous state-action spaces, following conventional and sound RL practice. RL in high-dimensional continuous domains involves two issues: one is a generalization problem for value-function approximation, and the other is a sampling problem for action selection over multi-dimensional continuous action spaces. The proposed method combines random rectangular coarse coding with an action selection scheme based on Gibbs sampling. The random rectangular coarse coding is very simple and well suited both to approximating Q-functions in high-dimensional spaces and to executing Gibbs sampling. Gibbs sampling makes it possible to select actions according to a Boltzmann distribution over a high-dimensional action space. The algorithm is demonstrated on a rod-in-maze problem and a redundant-arm reaching task, in comparison with conventional regular-grid approaches.
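The core idea in the summary — drawing an action from a Boltzmann distribution over a multi-dimensional action space by Gibbs sampling one dimension at a time — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the Q-function below is a hand-written stand-in (in the paper it would come from the random rectangular coarse coding), the grid resolution `K`, the temperature, and the sweep count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D discretized action space: K candidate values per dimension.
K = 11
A1 = np.linspace(-1.0, 1.0, K)
A2 = np.linspace(-1.0, 1.0, K)

def q_value(a1, a2):
    # Stand-in Q-function with a single peak at (0.4, -0.2);
    # in the paper this would be the coarse-coded approximator's output.
    return -((a1 - 0.4) ** 2 + (a2 + 0.2) ** 2)

def conditional_sample(logits):
    # Draw an index from the Boltzmann conditional (softmax of the logits),
    # subtracting the max for numerical stability.
    p = np.exp(logits - logits.max())
    return rng.choice(len(logits), p=p / p.sum())

def gibbs_action(temperature=0.1, sweeps=20):
    """Sample an action from pi(a) ∝ exp(Q(a)/T) by Gibbs sampling:
    resample one action dimension at a time from its conditional."""
    i, j = rng.integers(K), rng.integers(K)  # random initial action
    for _ in range(sweeps):
        # Resample dimension 1 with dimension 2 held fixed.
        i = conditional_sample(
            np.array([q_value(A1[k], A2[j]) for k in range(K)]) / temperature)
        # Resample dimension 2 with dimension 1 held fixed.
        j = conditional_sample(
            np.array([q_value(A1[i], A2[k]) for k in range(K)]) / temperature)
    return A1[i], A2[j]

a1, a2 = gibbs_action()
```

Each sweep costs O(K) evaluations per dimension rather than the O(K^d) needed to enumerate the full joint Boltzmann distribution over a d-dimensional action grid, which is the motivation for Gibbs sampling in high-dimensional action spaces.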