Evaluating the alignment of AI with human emotions
Published in | Advanced Design Research Vol. 2; no. 2; pp. 88–97 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 01.12.2024 |
Subjects | |
Summary: Generative AI systems are increasingly capable of expressing emotions through text, imagery, voice, and video. Effective emotional expression is particularly relevant for AI systems designed to provide care, support mental health, or promote wellbeing through emotional interactions. This research aims to enhance understanding of the alignment between AI-expressed emotions and human perception. How can we assess whether an AI system successfully conveys a specific emotion? To address this question, we designed a method to measure the alignment between emotions expressed by generative AI and human perceptions.
Three generative image models—DALL-E 2, DALL-E 3, and Stable Diffusion v1—were used to generate 240 images expressing five positive and five negative emotions in both humans and robots. Twenty-four participants recruited via Prolific rated how well each AI-generated emotional expression matched its text prompt (e.g., “A robot expressing the emotion of amusement”).
Our results suggest that generative AI models can produce emotional expressions that align well with human emotions; however, the degree of alignment varies significantly depending on the AI model and the specific emotion expressed. We analyze these variations to identify areas for future improvement. The paper concludes with a discussion of the implications of our findings on the design of emotionally expressive AI systems.
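The analysis described above aggregates human ratings per model and per emotion to compare alignment across conditions. A minimal sketch of that aggregation step is shown below; the rating scale, data, and variable names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the alignment analysis: each participant rates how
# well a generated image matches its target emotion prompt, and mean ratings
# are aggregated per (model, emotion) condition. All data here is made up.
from collections import defaultdict
from statistics import mean

# (model, emotion, rating) tuples — an assumed 1–7 Likert-style scale
ratings = [
    ("DALL-E 3", "amusement", 6), ("DALL-E 3", "amusement", 7),
    ("DALL-E 2", "amusement", 5), ("DALL-E 2", "fear", 3),
    ("Stable Diffusion v1", "fear", 2), ("Stable Diffusion v1", "amusement", 4),
]

# Group ratings by experimental condition
by_condition = defaultdict(list)
for model, emotion, score in ratings:
    by_condition[(model, emotion)].append(score)

# Mean alignment per condition, enabling model-by-emotion comparisons
alignment = {cond: mean(scores) for cond, scores in by_condition.items()}
for (model, emotion), score in sorted(alignment.items()):
    print(f"{model:>20} / {emotion:<10} mean alignment = {score:.2f}")
```

With real data, these per-condition means would support the comparisons the abstract reports, such as higher alignment for some models and emotions than others.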
•Benchmarking emotional alignment in generative AI quantifies improved expressiveness.
•A within-subjects experiment (5,760 item responses from 24 participants).
•Emotional alignment is higher for human vs. robot images and for certain emotions.
•Results inform the design of emotionally expressive robots and AI for wellbeing.
•Machine psychology and design needed to mitigate risk of emotionally manipulative AI.
Graphical abstract: Representative improvements from Stable Diffusion v1 to DALL-E 2 to DALL-E 3.
ISSN: | 2949-7825 |
DOI: | 10.1016/j.ijadr.2024.10.002 |