Forgetting Beneficial Knowledge in Decomposition-Based Reinforcement Learning Using Evolutionary Computation

Bibliographic Details
Published in: Proceedings of the International Conference on Genetic and Evolutionary Methods (GEM), p. 1
Main Authors: Mondesire, Sean; Wiegand, R. Paul
Format: Conference Proceeding
Language: English
Published: Athens: The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), 01.01.2014

Summary: This work demonstrates that critical information can easily and prematurely be removed from a decomposition-based reinforcement learning system. One possible effect of forgetting this knowledge is the complete loss of the ability to solve a previously learned problem when the system is given a new problem to optimize. In artificial neural networks, this phenomenon is called catastrophic forgetting and has been shown to cripple performance. We study this phenomenon to understand its effects on problem performance and to investigate suspected consequences experienced by other decomposition-based approaches. Furthermore, using an abstract decomposition-based reinforcement learning paradigm with a simple evolutionary algorithm, we analyze the role stability-plasticity imbalance plays in the premature loss of critical knowledge.
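
The paper's own decomposition-based testbed is not reproduced in this record. As a loose, hypothetical illustration of the phenomenon the summary describes, the following Python sketch (all names such as TASK_A and evolve are invented for this example, not taken from the paper) shows a simple evolutionary algorithm learning one task and then, when selection pressure reflects only a new, conflicting task, losing its solution to the first:

import random

GENOME_LEN = 8
TASK_A = [1.0] * GENOME_LEN    # target pattern for the first problem
TASK_B = [-1.0] * GENOME_LEN   # conflicting target for the second problem

def fitness(genome, target):
    """Negative squared error to the target; higher is better."""
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(genome, target, generations=200, pop=20, sigma=0.1):
    """A (1+lambda)-style EA: mutate, keep the best improving offspring."""
    for _ in range(generations):
        offspring = [[g + random.gauss(0, sigma) for g in genome]
                     for _ in range(pop)]
        best = max(offspring, key=lambda c: fitness(c, target))
        if fitness(best, target) > fitness(genome, target):
            genome = best
    return genome

random.seed(0)
genome = [0.0] * GENOME_LEN
genome = evolve(genome, TASK_A)
print("after task A: fit(A) = %.3f" % fitness(genome, TASK_A))

genome = evolve(genome, TASK_B)  # retraining sees only the new problem
print("after task B: fit(A) = %.3f, fit(B) = %.3f"
      % (fitness(genome, TASK_A), fitness(genome, TASK_B)))

In this toy setup the learner is fully plastic: nothing in the fitness function or the population protects the task-A solution, so fitness on task A collapses as task B is optimized. That all-plasticity, no-stability configuration is a crude analogue of the stability-plasticity imbalance the paper analyzes, though the paper's actual decomposition-based paradigm is more involved than this single-genome sketch.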