Discounting of reward sequences: a test of competing formal models of hyperbolic discounting


Bibliographic Details
Published in Frontiers in Psychology, Vol. 5, p. 178
Main Authors Zarr, Noah, Alexander, William H., Brown, Joshua W.
Format Journal Article
Language English
Published Switzerland: Frontiers Media S.A., 06.03.2014
Subjects
ISSN 1664-1078
EISSN 1664-1078
DOI 10.3389/fpsyg.2014.00178


Abstract Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting was elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Before that, models of learning (especially reinforcement learning) relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data.
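The two single-reward discount functions contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's fitted model; the parameter values (k, gamma) are placeholders.

```python
def hyperbolic_value(amount, delay, k=0.1):
    """Hyperbolic discounting (Mazur-style): V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def exponential_value(amount, delay, gamma=0.9):
    """Exponential discounting, as in standard reinforcement learning:
    V = A * gamma**D."""
    return amount * gamma ** delay

# Hyperbolic curves fall steeply at short delays but flatten at long
# delays, so a hyperbolic discounter retains more value for distant
# rewards than an exponential discounter with a comparable near-term drop.
```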
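The abstract's distinction between the models can also be sketched in code: the Parallel model values a reward sequence as the sum of the individually discounted rewards, and the μAgents idea approximates a hyperbolic curve with a weighted sum of exponential discounters. The specific weights and discount rates below are illustrative assumptions, not values from the paper.

```python
def parallel_sequence_value(rewards, k=0.1):
    """Parallel model: a sequence's value is the sum of each reward
    discounted hyperbolically by its own delay.
    `rewards` is a list of (amount, delay) pairs."""
    return sum(a / (1.0 + k * d) for a, d in rewards)

def summed_exponential_value(amount, delay, gammas, weights):
    """muAgents-style approximation: a weighted mixture of exponential
    discounters, each with its own rate, approximates a hyperbolic curve."""
    return amount * sum(w * g ** delay for g, w in zip(gammas, weights))
```

The HDTD model, by contrast, is recursive and introduces a non-linear interaction between rewards in a sequence, so its sequence value is not simply this sum.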
Author Alexander, William H.
Brown, Joshua W.
Zarr, Noah
AuthorAffiliation 1 Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
2 Department of Experimental Psychology, Ghent University, Ghent, Belgium
AuthorAffiliation_xml – name: 1 Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
– name: 2 Department of Experimental Psychology, Ghent University, Ghent, Belgium
Author_xml – sequence: 1
  givenname: Noah
  surname: Zarr
  fullname: Zarr, Noah
– sequence: 2
  givenname: William H.
  surname: Alexander
  fullname: Alexander, William H.
– sequence: 3
  givenname: Joshua W.
  surname: Brown
  fullname: Brown, Joshua W.
BackLink https://www.ncbi.nlm.nih.gov/pubmed/24639662 (View this record in MEDLINE/PubMed)
CitedBy_id crossref_primary_10_1186_s13660_016_1288_5
crossref_primary_10_1093_beheco_arx145
ContentType Journal Article
Copyright Copyright © 2014 Zarr, Alexander and Brown. 2014
Copyright_xml – notice: Copyright © 2014 Zarr, Alexander and Brown. 2014
DBID AAYXX
CITATION
NPM
7X8
5PM
DOA
DOI 10.3389/fpsyg.2014.00178
DatabaseName CrossRef
PubMed
MEDLINE - Academic
PubMed Central (Full Participant titles)
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
PubMed
MEDLINE - Academic
DatabaseTitleList
MEDLINE - Academic
PubMed

Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: NPM
  name: PubMed
  url: https://proxy.k.utb.cz/login?url=http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=PubMed
  sourceTypes: Index Database
DeliveryMethod fulltext_linktorsrc
Discipline Psychology
EISSN 1664-1078
ExternalDocumentID oai_doaj_org_article_6907623bdcb74b81bde77c94b0caafaf
PMC3944395
24639662
10_3389_fpsyg_2014_00178
Genre Journal Article
IEDL.DBID M48
ISSN 1664-1078
IngestDate Wed Aug 27 01:27:29 EDT 2025
Thu Aug 21 18:21:16 EDT 2025
Sun Aug 24 03:18:39 EDT 2025
Thu Apr 03 07:00:53 EDT 2025
Thu Apr 24 22:56:53 EDT 2025
Tue Jul 01 01:44:53 EDT 2025
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords behavioral research
recursive model
temporal difference learning
discounting
Parallel model
exponential discounting
hyperbolic discounting
model fitting
Language English
License This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
LinkModel DirectLink
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
Edited by: Philip Beaman, University of Reading, UK
This article was submitted to Cognitive Science, a section of the journal Frontiers in Psychology.
Reviewed by: Zheng Wang, Ohio State University, USA; Timothy Pleskac, Michigan State University, USA
OpenAccessLink http://journals.scholarsportal.info/openUrl.xqy?doi=10.3389/fpsyg.2014.00178
PMID 24639662
PQID 1508681932
PQPubID 23479
ParticipantIDs doaj_primary_oai_doaj_org_article_6907623bdcb74b81bde77c94b0caafaf
pubmedcentral_primary_oai_pubmedcentral_nih_gov_3944395
proquest_miscellaneous_1508681932
pubmed_primary_24639662
crossref_primary_10_3389_fpsyg_2014_00178
crossref_citationtrail_10_3389_fpsyg_2014_00178
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2014-03-06
PublicationDateYYYYMMDD 2014-03-06
PublicationDate_xml – month: 03
  year: 2014
  text: 2014-03-06
  day: 06
PublicationDecade 2010
PublicationPlace Switzerland
PublicationPlace_xml – name: Switzerland
PublicationTitle Frontiers in Psychology
PublicationTitleAlternate Front Psychol
PublicationYear 2014
Publisher Frontiers Media S.A
Publisher_xml – name: Frontiers Media S.A
SSID ssj0000402002
SourceID doaj
pubmedcentral
proquest
pubmed
crossref
SourceType Open Website
Open Access Repository
Aggregation Database
Index Database
Enrichment Source
StartPage 178
SubjectTerms Behavioral Research
discounting
Exponential discounting
hyperbolic discounting
model fitting
Psychology
temporal difference learning
Title Discounting of reward sequences: a test of competing formal models of hyperbolic discounting
URI https://www.ncbi.nlm.nih.gov/pubmed/24639662
https://www.proquest.com/docview/1508681932
https://pubmed.ncbi.nlm.nih.gov/PMC3944395
https://doaj.org/article/6907623bdcb74b81bde77c94b0caafaf
Volume 5
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider Scholars Portal