“I agree with you, bot!” How users (dis)engage with social bots on Twitter
Published in | New media & society, Vol. 26, No. 3, pp. 1505–1526
Format | Journal Article
Language | English
Published | London, England: SAGE Publications, 01.03.2024
Summary | This article investigates under which conditions users on Twitter engage with or react to social bots. Based on insights from human–computer interaction and motivated reasoning, we hypothesize that (1) users are more likely to engage with human-like social bot accounts and (2) users are more likely to engage with social bots which promote content congruent to the user’s partisanship. In a preregistered 3 × 2 within-subject experiment, we asked N = 223 US Americans to indicate whether they would engage with or react to different Twitter accounts. Accounts systematically varied in their displayed humanness (low humanness, medium humanness, and high humanness) and partisanship (congruent and incongruent). In line with our hypotheses, we found that the more human-like accounts are, the greater the likelihood that users would engage with or react to them. However, this was only true for accounts that shared the same partisanship as the user.
ISSN | 1461-4448; 1461-7315
DOI | 10.1177/14614448211072307