Comparing ChatGPT with human perceptions of illusory faces

Bibliographic Details
Published in: Visual Cognition, Vol. 33, No. 2, pp. 119-130
Main Author: Kramer, Robin S. S.
Format: Journal Article
Language: English
Published: Routledge, 07.02.2025
ISSN: 1350-6285, 1464-0716
DOI: 10.1080/13506285.2025.2529875

Summary: People see faces in inanimate objects (termed "face pareidolia"), with these illusory faces perceived as portraying specific emotional expressions, ages, and genders. Further, people tend to classify these illusory faces as male rather than female. With recent studies quantifying ChatGPT's abilities in human face perception, I investigated whether the chatbot's judgements of illusory faces aligned with those given by human viewers. Across four experiments, I collected ChatGPT's judgements of gender, emotion, and age, as well as the ease of seeing a face, for a set of illusory faces for which human responses had previously been collected. My results demonstrated that ChatGPT detected faces in the illusory stimuli. In addition, ChatGPT's perceptions of gender, emotion, and age were all associated with the modal responses provided by human viewers, with the chatbot demonstrating a "male bias" in judgements of gender. However, ChatGPT's concept of "person" was not inherently "male", failing to support one potential explanation for this bias. Taken together, these experiments demonstrated ChatGPT's alignment with human perceptions of illusory faces and highlighted the potential to explore overgeneralisations in face processing for both humans and algorithms. Further, identifying biases in ChatGPT's responses may facilitate our understanding of human biases.