Human Simulacra: Benchmarking the Personification of Large Language Models

Bibliographic Details
Published in: arXiv.org
Main Authors: Xie, Qiujie; Feng, Qiming; Zhang, Tianqi; Li, Qingqiu; Yang, Linyi; Zhang, Yuejie; Feng, Rui; He, Liang; Gao, Shang; Zhang, Yue
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 10.06.2024
Summary: Large language models (LLMs) are recognized as systems that closely mimic aspects of human intelligence. This capability has attracted attention from the social science community, which sees the potential of leveraging LLMs to replace human participants in experiments, thereby reducing research costs and complexity. In this paper, we introduce a framework for large language model personification, including a strategy for constructing virtual characters' life stories from the ground up, a Multi-Agent Cognitive Mechanism capable of simulating human cognitive processes, and a psychology-guided evaluation method that assesses human simulations from both self and observational perspectives. Experimental results demonstrate that our constructed simulacra can produce personified responses that align with their target characters. Our work is a preliminary exploration that shows great potential for practical applications. All code and datasets will be released, with the hope of inspiring further investigation.
ISSN: 2331-8422
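
To make the abstract's "Multi-Agent Cognitive Mechanism" concrete, the sketch below shows one plausible way such a pipeline could be organized: separate agents for memory recall, emotional appraisal, and response generation, all conditioned on a constructed character profile. This is a minimal, hypothetical illustration; the class names, the trait scheme, and the keyword-overlap recall heuristic are assumptions of this sketch, not the paper's implementation (the paper's agents would presumably invoke an LLM rather than the toy heuristics used here).

    # Hypothetical sketch of a multi-agent cognitive pipeline for character
    # personification. All names and heuristics are illustrative assumptions,
    # not the method described in the paper.
    from dataclasses import dataclass


    @dataclass
    class CharacterProfile:
        """A virtual character's life story, constructed from the ground up."""
        name: str
        biography: list[str]       # life-story facts about the character
        traits: dict[str, float]   # e.g. personality scores in [0, 1]


    class MemoryAgent:
        """Recalls life-story facts relevant to an incoming stimulus."""
        def recall(self, profile: CharacterProfile, stimulus: str) -> list[str]:
            words = set(stimulus.lower().split())
            return [fact for fact in profile.biography
                    if words & set(fact.lower().split())]


    class EmotionAgent:
        """Appraises the stimulus against the character's personality traits."""
        def appraise(self, profile: CharacterProfile, stimulus: str) -> str:
            # Toy appraisal rule: a highly neurotic character reacts
            # anxiously to direct questions.
            if profile.traits.get("neuroticism", 0.0) > 0.6 and "?" in stimulus:
                return "anxious"
            return "calm"


    class ResponseAgent:
        """Combines recalled memories and emotion into an in-character reply."""
        def respond(self, profile: CharacterProfile, memories: list[str],
                    emotion: str, stimulus: str) -> str:
            memory_note = memories[0] if memories else "no specific memory"
            return (f"[{profile.name}, feeling {emotion}] "
                    f"Recalling '{memory_note}', I would answer: ...")


    def simulate(profile: CharacterProfile, stimulus: str) -> str:
        """One pass through the pipeline: memory -> emotion -> response."""
        memories = MemoryAgent().recall(profile, stimulus)
        emotion = EmotionAgent().appraise(profile, stimulus)
        return ResponseAgent().respond(profile, memories, emotion, stimulus)


    if __name__ == "__main__":
        mary = CharacterProfile(
            name="Mary",
            biography=["grew up in a small coastal town",
                       "studied painting at art school"],
            traits={"neuroticism": 0.7, "openness": 0.9},
        )
        print(simulate(mary, "What did you study at school?"))

Under this reading, the abstract's "self" evaluation would correspond to probing the simulacrum directly (as in simulate above), while the "observational" evaluation would have a separate judge compare the generated responses against the target character's profile.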