Human and machine learning in non-Markovian decision making

Bibliographic Details
Published in: PLoS ONE, Vol. 10, No. 4, p. e0123105
Main Authors: Clarke, Aaron Michael; Friedrich, Johannes; Tartaglia, Elisa M; Marchesotti, Silvia; Senn, Walter; Herzog, Michael H
Format: Journal Article
Language: English
Published: United States: Public Library of Science (PLoS), 21.04.2015

Summary: Humans can learn under a wide variety of feedback conditions. Reinforcement learning (RL), where a series of rewarded decisions must be made, is a particularly important type of learning. Computational and behavioral studies of RL have focused mainly on Markovian decision processes, where the next state depends only on the current state and action. Little is known about non-Markovian decision making, where the next state depends on more than the current state and action. Learning is non-Markovian, for example, when there is no unique mapping between actions and feedback. We have produced a model based on spiking neurons that can handle these non-Markovian conditions by performing policy gradient descent [1]. Here, we examine the model's performance and compare it with human learning and a Bayes-optimal reference, which provides an upper bound on performance. We find that in all cases, our spiking-neuron population model describes human performance well.
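The summary mentions that the model learns by policy gradient descent. The paper's implementation uses spiking neurons, which is not reproduced here; as a simpler illustration of the underlying policy-gradient idea, the following is a minimal REINFORCE-style sketch on a two-armed bandit. All names, parameters, and the task itself are illustrative assumptions, not taken from the paper:

```python
import math
import random


def softmax(prefs):
    """Convert action preferences into a probability distribution."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]


def reinforce_bandit(reward_probs, episodes=5000, lr=0.1, seed=0):
    """REINFORCE with a running-average baseline on a stochastic bandit.

    `reward_probs[i]` is the chance that arm i pays reward 1.
    Hyperparameters (episodes, lr) are arbitrary illustrative choices.
    Returns the final action probabilities.
    """
    rng = random.Random(seed)
    prefs = [0.0] * len(reward_probs)  # softmax preferences (the policy parameters)
    baseline = 0.0                     # running average reward, reduces gradient variance
    for _ in range(episodes):
        probs = softmax(prefs)
        # Sample an action from the current policy, observe a stochastic reward.
        a = rng.choices(range(len(probs)), weights=probs)[0]
        r = 1.0 if rng.random() < reward_probs[a] else 0.0
        # Gradient of log pi(a) w.r.t. preference i is (1{i==a} - probs[i]);
        # scale it by the baseline-corrected reward and ascend.
        for i in range(len(prefs)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * (r - baseline) * grad
        baseline += 0.05 * (r - baseline)
    return softmax(prefs)


# With one arm clearly better (80% vs 20% payout), the learned policy
# should come to favour the first arm.
probs = reinforce_bandit([0.8, 0.2])
```

This sketch follows the reward signal alone, with no state transitions at all, so it makes no Markov assumption about how feedback arises; the paper's contribution is showing that a spiking-neuron implementation of this style of learning matches human behavior in non-Markovian tasks.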
Conceived and designed the experiments: MHH JF WS ET. Performed the experiments: ET SM AMC. Analyzed the data: AMC JF. Contributed reagents/materials/analysis tools: AMC JF ET SM WS MHH. Wrote the paper: AMC JF ET WS MHH.
Competing Interests: The authors have declared that no competing interests exist.
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0123105