The Unappreciated Role of Intent in Algorithmic Moderation of Social Media Content


Bibliographic Details
Main Authors: Wang, Xinyu; Koneru, Sai; Venkit, Pranav Narayanan; Frischmann, Brett; Rajtmajer, Sarah
Format: Journal Article
Language: English
Published: 17.05.2024

Summary: As social media has become a predominant mode of communication globally, the rise of abusive content threatens to undermine civil discourse. Recognizing the critical nature of this issue, a significant body of research has been dedicated to developing language models that can detect various types of online abuse, e.g., hate speech and cyberbullying. However, there exists a notable disconnect between platform policies, which often consider the author's intention as a criterion for content moderation, and the current capabilities of detection models, which typically lack efforts to capture intent. This paper examines the role of intent in content moderation systems. We review state-of-the-art detection models and benchmark training datasets for online abuse to assess their awareness of, and ability to capture, intent. We propose strategic changes to the design and development of automated detection and moderation systems to improve alignment with ethical and policy conceptualizations of abuse.
DOI: 10.48550/arxiv.2405.11030