Dopamine-dependent prediction errors underpin reward-seeking behaviour in humans

Bibliographic Details
Published in Nature Vol. 442; no. 7106; pp. 1042 - 1045
Main Authors Pessiglione, Mathias, Seymour, Ben, Flandin, Guillaume, Dolan, Raymond J., Frith, Chris D.
Format Journal Article
Language English
Published London: Nature Publishing Group UK, 31.08.2006
Summary: Dopamine by Choice. The brain messenger dopamine is traditionally known as the 'pleasure molecule', linked with our desire for food and sex, as well as with drug and gambling addictions. The precise function of dopamine in humans has remained elusive, and theories have relied almost exclusively on animal experiments. Using functional brain imaging (fMRI), Pessiglione et al. scanned healthy human volunteers as they gambled for money after taking drugs that interfere with dopamine signals. Volunteers with boosted dopamine became better gamblers than their dopamine-suppressed counterparts. When dopamine levels were either enhanced or reduced by drugs, the scans showed that both reward-related learning and the associated striatal activity were modulated, confirming the critical role of dopamine in integrating reward information for the generation of future decisions.

Theories of instrumental learning are centred on understanding how success and failure are used to improve future decisions [1]. These theories highlight a central role for reward prediction errors in updating the values associated with available actions [2]. In animals, substantial evidence indicates that the neurotransmitter dopamine might have a key function in this type of learning, through its ability to modulate cortico-striatal synaptic efficacy [3]. However, no direct evidence links dopamine, striatal activity and behavioural choice in humans. Here we show that, during instrumental learning, the magnitude of the reward prediction error expressed in the striatum is modulated by the administration of drugs enhancing (3,4-dihydroxy-L-phenylalanine; L-DOPA) or reducing (haloperidol) dopaminergic function. Accordingly, subjects treated with L-DOPA have a greater propensity to choose the most rewarding action relative to subjects treated with haloperidol. Furthermore, incorporating the magnitude of the prediction errors into a standard action-value learning algorithm accurately reproduced subjects' behavioural choices under the different drug conditions. We conclude that dopamine-dependent modulation of striatal activity can account for how the human brain uses reward prediction errors to improve future decisions.
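The "standard action-value learning algorithm" mentioned in the abstract is, in essence, a delta-rule update driven by reward prediction errors combined with a softmax choice rule. The sketch below illustrates that general scheme only; the parameter values (learning rate alpha, inverse temperature beta) and the drug_gain factor used to scale the prediction error are illustrative assumptions for exposition, not the model specification or fitted parameters reported in the paper.

```python
import numpy as np

def softmax(q_values, beta):
    """Turn action values into choice probabilities (beta = inverse temperature)."""
    exp_q = np.exp(beta * (q_values - np.max(q_values)))  # shift for numerical stability
    return exp_q / exp_q.sum()

def simulate_session(reward_probs, n_trials=120, alpha=0.3, beta=3.0,
                     drug_gain=1.0, seed=None):
    """Simulate one learning session with a prediction-error-driven value update.

    drug_gain scales the reward prediction error before it updates the action
    value, standing in for dopaminergic modulation (>1 roughly L-DOPA-like,
    <1 roughly haloperidol-like). All parameter values here are illustrative.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(len(reward_probs))             # action values, initialised at zero
    choices, rewards = [], []
    for _ in range(n_trials):
        p = softmax(q, beta)
        action = rng.choice(len(q), p=p)        # probabilistic (softmax) choice
        reward = float(rng.random() < reward_probs[action])  # stochastic outcome
        delta = reward - q[action]              # reward prediction error
        q[action] += alpha * drug_gain * delta  # scaled value update
        choices.append(action)
        rewards.append(reward)
    return np.array(choices), np.array(rewards)

# A gain above 1 should yield a stronger preference for the 80%-rewarded option
# than a gain below 1, mirroring the behavioural contrast described above.
for gain in (1.5, 0.5):
    choices, _ = simulate_session(reward_probs=[0.8, 0.2], drug_gain=gain, seed=0)
    print(f"drug_gain={gain}: P(best action) = {(choices == 0).mean():.2f}")
```

Expressing the drug effect as a multiplicative gain on the prediction error is one simple way to capture the idea that striatal prediction-error magnitude, and hence learning, is amplified or dampened by dopaminergic manipulation; other parameterisations are possible.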
ISSN: 0028-0836
EISSN: 1476-4687
DOI: 10.1038/nature05051