Model-based analysis of learning latent structures in probabilistic reversal learning task

Bibliographic Details
Published in: Artificial Life and Robotics, Vol. 26, No. 3, pp. 275-282
Main Authors: Masumi, Akira; Sato, Takashi
Format: Journal Article
Language: English
Published: Tokyo: Springer Japan, 01.08.2021 (Springer Nature B.V.)
Summary: Flexibility in decision making is essential for adapting to dynamically changing scenarios. The probabilistic reversal learning task is one of the experimental paradigms used to characterize a subject's flexibility. Recent studies hypothesized that, in addition to a reward history, a subject may also utilize a "cognitive map" that represents the latent structures of the task. We conducted experiments on a probabilistic reversal learning task and performed model-based analysis using two types of reinforcement learning (RL) models, with and without state representations of the task. Based on statistical model selection, the RL model without state representations was selected to explain the average behavior across all subjects. However, the individual behavior of approximately 20% of the subjects was better explained by the RL model with state representations, which probabilistically estimates the current state. We infer that these results may reflect individual variation in the development of the orbitofrontal cortex.
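To make the task paradigm concrete, the following is a minimal illustrative sketch (not the authors' model or parameters) of a simple Q-learning agent without state representations performing a two-option probabilistic reversal learning task: the option with the higher reward probability switches partway through, and the agent adapts purely from its reward history. The function name, parameter values, and trial counts are all assumptions for demonstration.

```python
import math
import random

def simulate_reversal_task(n_trials=200, reversal_at=100,
                           reward_prob=0.8, alpha=0.3, beta=5.0, seed=0):
    """Illustrative Q-learning agent (no state representation) on a
    two-armed probabilistic reversal learning task. All parameters
    are hypothetical, not taken from the article."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                 # action values for the two options
    choices, rewards = [], []
    for t in range(n_trials):
        good = 0 if t < reversal_at else 1   # rewarded option reverses
        # softmax choice between the two options
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        a = 0 if rng.random() < p0 else 1
        # chosen option pays off with reward_prob if it is currently
        # the "good" option, otherwise with 1 - reward_prob
        p_r = reward_prob if a == good else 1.0 - reward_prob
        r = 1 if rng.random() < p_r else 0
        q[a] += alpha * (r - q[a])           # delta-rule value update
        choices.append(a)
        rewards.append(r)
    return choices, rewards

choices, rewards = simulate_reversal_task()
```

An agent with state representations would instead maintain a belief over which latent task state (pre- or post-reversal) is active and switch its policy when that belief flips, rather than relearning values trial by trial.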
ISSN: 1433-5298; 1614-7456
DOI: 10.1007/s10015-020-00674-8