Increased morality through social communication or decision situation worsens the acceptance of robo-advisors

Bibliographic Details
Published in: Computers in Human Behavior: Artificial Humans, Vol. 5, p. 100173
Main Authors: Arlinghaus, Clarissa Sabrina; Straßmann, Carolin; Dix, Annika
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.08.2025
ISSN: 2949-8821
DOI: 10.1016/j.chbah.2025.100173

Summary: This German study (N = 317) tests social communication (i.e., self-disclosure, content intimacy, relational continuity units, we-phrases) as a potential compensation strategy for algorithm aversion. To this end, we explore the acceptance of a robot as an advisor in non-moral, somewhat moral, and very moral decision situations and compare the influence of two verbal communication styles of the robot (functional vs. social). Subjects followed the robot's recommendation similarly often for both communication styles, but more often in the non-moral decision situation than in the moral decision situations. Subjects perceived the robot as more human and more moral during social communication than during functional communication, but as similarly trustworthy, likable, and intelligent for both communication styles. In moral decision situations, subjects ascribed more anthropomorphism and morality but less trust, likability, and intelligence to the robot compared to the non-moral decision situation. Subjects perceived the robot as more moral in social communication, which unexpectedly led them to follow the robot's recommendation less often. No other mediation effects were found. From this we conclude that the verbal communication style alone has a rather small influence on the robot's acceptance as an advisor for moral decision-making and does not reduce algorithm aversion. Potential reasons for this (e.g., multimodality, no visual changes), as well as implications (e.g., avoidance of self-disclosure in human-robot interaction) and limitations (e.g., video interaction) of this study, are discussed.