The emergence of economic rationality of GPT

Bibliographic Details
Published in: Proceedings of the National Academy of Sciences (PNAS), Vol. 120, No. 51, p. e2316205120
Main Authors: Chen, Yiting; Liu, Tracy Xiao; Shan, You; Zhong, Songfa
Format: Journal Article
Language: English
Published: National Academy of Sciences, United States, 19.12.2023

Summary: As large language models (LLMs) like GPT become increasingly prevalent, it is essential that we assess their capabilities beyond language processing. This paper examines the economic rationality of GPT by instructing it to make budgetary decisions in four domains: risk, time, social, and food preferences. We measure economic rationality by assessing the consistency of GPT's decisions with utility maximization in classic revealed preference theory. We find that GPT's decisions are largely rational in each domain and demonstrate higher rationality scores than those of human subjects in a parallel experiment and in the literature. Moreover, the estimated preference parameters of GPT differ slightly from those of human subjects and exhibit a lower degree of heterogeneity. We also find that the rationality scores are robust to the degree of randomness and to demographic settings such as age and gender, but are sensitive to contexts based on the language frames of the choice situations. These results suggest the potential of LLMs to make good decisions and the need to further understand their capabilities, limitations, and underlying mechanisms.
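Background note: in revealed preference theory, a sequence of budget choices is consistent with maximizing some well-behaved utility function exactly when it satisfies the Generalized Axiom of Revealed Preference (GARP), by Afriat's theorem; rationality scores of the kind reported above are typically based on Afriat's Critical Cost Efficiency Index, which measures how close choices come to passing this test. The Python sketch below illustrates the standard binary GARP check over observed prices and chosen bundles; it is an illustration of the textbook test, not the authors' code, and the function names and example data are hypothetical.

import numpy as np

def garp_consistent(prices: np.ndarray, choices: np.ndarray) -> bool:
    """Test whether budget choices satisfy GARP.

    prices, choices: (T, K) arrays -- T budget decisions over K goods,
    with prices[t] the price vector and choices[t] the chosen bundle.
    """
    spend = prices @ choices.T        # spend[i, j] = cost of bundle j at prices i
    own = np.diag(spend)              # own[i] = actual expenditure in decision i

    # x_i directly revealed preferred to x_j: x_j was affordable when x_i was chosen.
    direct = spend <= own[:, None]

    # Transitive closure (Warshall) gives the full revealed-preference relation.
    revealed = direct.copy()
    for k in range(len(choices)):
        revealed |= revealed[:, k][:, None] & revealed[k, :][None, :]

    # GARP violation: x_i revealed preferred to x_j, yet x_i is strictly
    # cheaper than x_j at prices p_j (p_j . x_i < p_j . x_j).
    strictly_cheaper = spend < own[:, None]    # [j, i] entry: p_j . x_i < p_j . x_j
    return not np.any(revealed & strictly_cheaper.T)

# Hypothetical example: two decisions over two goods.
p = np.array([[1.0, 2.0], [2.0, 1.0]])
x = np.array([[8.0, 1.0], [1.0, 8.0]])
print(garp_consistent(p, x))   # True: neither bundle was affordable when the other was chosen

A stricter or noisier data set would fail this test; the efficiency-index variant then asks how much every budget must be shrunk before the violations disappear, yielding a score between 0 and 1.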
Bibliography:
Y.C., T.X.L., Y.S., and S.Z. contributed equally to this work.
Edited by Jose Scheinkman, Columbia University, New York, NY; received September 22, 2023; accepted November 13, 2023
ISSN: 0027-8424
EISSN: 1091-6490
DOI: 10.1073/pnas.2316205120