Structure Learning in Human Sequential Decision-Making

Bibliographic Details
Published in: PLoS Computational Biology, Vol. 6, No. 12, p. e1001003
Main Authors: Acuña, Daniel E.; Schrater, Paul
Format: Journal Article
Language: English
Published: Public Library of Science (PLoS), United States, 02.12.2010

More Information
Summary: Studies of sequential decision-making in humans frequently find suboptimal performance relative to an ideal actor that has perfect knowledge of the model of how rewards and events are generated in the environment. Rather than being suboptimal, we argue that the learning problem humans face is more complex, in that it also involves learning the structure of reward generation in the environment. We formulate the problem of structure learning in sequential decision tasks using Bayesian reinforcement learning, and show that learning the generative model for rewards qualitatively changes the behavior of an optimal learning agent. To test whether people exhibit structure learning, we performed experiments involving a mixture of one-armed and two-armed bandit reward models, where structure learning produces many of the qualitative behaviors deemed suboptimal in previous studies. Our results demonstrate that humans can perform structure learning in a near-optimal manner.
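The paper's full Bayesian reinforcement-learning model is not reproduced in this record. As a hypothetical illustration of the structure-learning idea the summary describes — inferring which reward-generating model governs a pair of bandit arms — a minimal Beta-Bernoulli model comparison might look like the sketch below. The function names and the two candidate structures (independent arm rates vs. a single shared rate) are assumptions chosen for illustration, not the authors' formulation.

```python
import math

def beta_bernoulli_evidence(successes, failures, a=1.0, b=1.0):
    """Marginal likelihood of a Bernoulli reward history under a
    Beta(a, b) prior on the arm's reward probability."""
    return math.exp(
        math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
        + math.lgamma(a + successes) + math.lgamma(b + failures)
        - math.lgamma(a + b + successes + failures)
    )

def p_independent(counts, prior=0.5):
    """Posterior probability that two arms draw rewards from independent
    rates (a 'two-armed' structure) rather than one shared rate (a
    coupled structure). counts = ((s1, f1), (s2, f2)) are per-arm
    success/failure tallies; both structures are hypothetical stand-ins."""
    (s1, f1), (s2, f2) = counts
    ev_independent = (beta_bernoulli_evidence(s1, f1)
                      * beta_bernoulli_evidence(s2, f2))
    ev_coupled = beta_bernoulli_evidence(s1 + s2, f1 + f2)
    num = prior * ev_independent
    return num / (num + (1.0 - prior) * ev_coupled)

# Very different outcome histories -> evidence favors independent arms.
print(p_independent(((9, 1), (1, 9))))
# Identical histories -> the simpler coupled structure is favored.
print(p_independent(((5, 5), (5, 5))))
```

In this toy comparison the learner's belief about the generative structure shifts with the data, which is the sense in which structure learning can change what an optimal agent does with the same reward observations.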
Author Contributions: Conceived and designed the experiments: DEA PS. Performed the experiments: DEA. Analyzed the data: DEA PS. Wrote the paper: DEA PS.
ISSN: 1553-734X
EISSN: 1553-7358
DOI: 10.1371/journal.pcbi.1001003