Generative AI Meets Open-Ended Survey Responses: Research Participant Use of AI and Homogenization
Published in: Sociological Methods & Research, Vol. 54, No. 3, pp. 1197–1242
Main Authors:
Format: Journal Article
Language: English
Published: Los Angeles, CA: SAGE Publications, 01.08.2025
Summary: The growing popularity of generative artificial intelligence (AI) tools presents new challenges for data quality in online surveys and experiments. This study examines participants’ use of large language models to answer open-ended survey questions and describes empirical tendencies in human versus large language model (LLM)-generated text responses. In an original survey of research participants recruited from a popular online platform for sourcing social science research subjects, 34 percent reported using LLMs to help them answer open-ended survey questions. Simulations comparing human-written responses from three pre-ChatGPT studies with LLM-generated text reveal that LLM responses are more homogeneous and positive, particularly when they describe social groups in sensitive questions. These homogenization patterns may mask important underlying social variation in attitudes and beliefs among human subjects, raising concerns about data validity. Our findings shed light on the scope and potential consequences of participants’ LLM use in online research.
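To make the notion of response "homogenization" concrete, the sketch below shows one common way such a comparison could be operationalized: mean pairwise cosine similarity over TF-IDF vectors, computed separately for human-written and LLM-generated responses. This is an illustrative assumption, not the authors' actual procedure, and the example responses are hypothetical.

```python
# Illustrative sketch (not the paper's method): quantify how similar a set of
# open-ended responses is via mean pairwise cosine similarity of TF-IDF vectors.
# Higher values indicate more homogeneous (more mutually similar) responses.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def mean_pairwise_similarity(responses):
    """Average cosine similarity between all distinct pairs of texts."""
    vectors = TfidfVectorizer().fit_transform(responses)
    sims = cosine_similarity(vectors)
    # Keep only the upper triangle, excluding the diagonal of self-similarities.
    upper = np.triu_indices_from(sims, k=1)
    return sims[upper].mean()


# Hypothetical responses; a real analysis would use the collected survey text.
human_responses = [
    "I think it depends a lot on the situation and the people involved.",
    "Honestly, I'm not sure. My views have changed over the years.",
    "They work hard but face a lot of unfair stereotypes.",
]
llm_responses = [
    "This group is diverse, hardworking, and contributes positively to society.",
    "Members of this group are diverse and make valuable contributions to society.",
    "This is a diverse group of people who contribute positively to their communities.",
]

print("Human similarity:", round(mean_pairwise_similarity(human_responses), 3))
print("LLM similarity:  ", round(mean_pairwise_similarity(llm_responses), 3))
```

Under this kind of measure, a higher average similarity among LLM-generated answers than among human answers would be one signal of the homogenization pattern the study describes.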
ISSN: 0049-1241, 1552-8294
DOI: 10.1177/00491241251327130