A Temporal Difference GNG-Based Approach for the State Space Quantization in Reinforcement Learning Environments


Bibliographic Details
Published in: 2013 IEEE 25th International Conference on Tools with Artificial Intelligence, pp. 561 - 568
Main Authors: Vieira, Davi C. L., Adeodato, Paulo J. L., Goncalves, Paulo M.
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.01.2013

Summary: The main issue when using reinforcement learning algorithms is how the estimate of the value function can be mapped onto states. In very few cases it is possible to use tables, but in the majority of cases the number of states can either be too large to keep in computer memory or computationally too expensive to visit exhaustively. State aggregation models such as self-organizing maps have been used to make this possible by generalizing the input space and mapping the value functions onto the aggregated states. This paper proposes a new algorithm, called TD-GNG, that uses the Growing Neural Gas (GNG) network to solve reinforcement learning problems by providing a way to map value functions onto states. In an experimental comparison against TD-AVQ and uniform discretization on three reinforcement learning problems, TD-GNG showed improvements in three aspects, namely: 1) reduction of the dimensionality of the problem, 2) increased generalization, and 3) reduced convergence time. Experiments have also shown that TD-GNG found a solution using less memory than TD-AVQ and uniform discretization, without losing quality in the policy obtained.
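The core idea in the abstract — quantizing a continuous state space with a set of prototype units and storing one value-function entry per unit — can be illustrated with a minimal sketch. This is not the paper's TD-GNG algorithm (which grows and adapts the GNG network online); it only shows the aggregation step, with hand-placed prototype nodes standing in for GNG units and a TD(0) update applied at the node level. All node positions, the learning rate `alpha`, and the discount `gamma` are assumed values for illustration.

```python
def nearest_node(nodes, state):
    """Return the index of the prototype node closest to a continuous state."""
    return min(range(len(nodes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], state)))

# Hand-placed prototypes over a 1-D state space (stand-ins for GNG units);
# each node carries its own value-function estimate.
nodes = [(x / 4.0,) for x in range(5)]   # prototypes at 0.0, 0.25, ..., 1.0
values = [0.0] * len(nodes)              # one value entry per aggregated state
alpha, gamma = 0.1, 0.9                  # assumed learning rate and discount

# One TD(0) backup: quantize both states to their nearest nodes, then
# update the value of the node representing the current state.
state, reward, next_state = (0.30,), 1.0, (0.55,)
s = nearest_node(nodes, state)           # continuous state -> discrete node
s_next = nearest_node(nodes, next_state)
values[s] += alpha * (reward + gamma * values[s_next] - values[s])
```

The quantizer is what keeps the value table small: however many continuous states the agent visits, only `len(nodes)` estimates are stored, which is the memory saving the abstract attributes to state aggregation.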
Bibliography:ObjectType-Article-2
SourceType-Scholarly Journals-1
ObjectType-Conference-1
ObjectType-Feature-3
SourceType-Conference Papers & Proceedings-2
ISSN: 1082-3409, 2375-0197
DOI: 10.1109/ICTAI.2013.89