Automated Hate Speech Detection and the Problem of Offensive Language

Bibliographic Details
Published in: Proceedings of the International AAAI Conference on Web and Social Media, Vol. 11, No. 1, pp. 512-515
Main Authors: Davidson, Thomas; Warmsley, Dana; Macy, Michael; Weber, Ingmar
Format: Journal Article
Language: English; Japanese
Published: 03.05.2017

Summary: A key challenge for automatic hate-speech detection on social media is separating hate speech from other instances of offensive language. Lexical detection methods tend to have low precision because they classify all messages containing particular terms as hate speech, and previous work using supervised learning has failed to distinguish between the two categories. We used a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords, and used crowd-sourcing to label a sample of these tweets into three categories: those containing hate speech, those containing only offensive language, and those containing neither. We then trained a multi-class classifier to distinguish between these categories. Close analysis of the predictions and the errors shows when we can reliably separate hate speech from other offensive language and when this differentiation is more difficult. We find that racist and homophobic tweets are more likely to be classified as hate speech, but that sexist tweets are generally classified as offensive. Tweets without explicit hate keywords are also more difficult to classify.
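The summary describes a three-category setup: tweets are labeled as hate speech, merely offensive, or neither, and a multi-class classifier is trained on those labels. As an illustration only (this is not the authors' pipeline, and the training examples below are hypothetical placeholders for crowd-sourced annotations), a minimal bag-of-words Naive Bayes over three classes can be sketched with the Python standard library:

```python
# Minimal sketch of three-way text classification ("hate" / "offensive" /
# "neither") with a tiny multinomial Naive Bayes. NOT the paper's method;
# the toy examples stand in for crowd-labeled tweets.
import math
from collections import Counter

LABELS = ["hate", "offensive", "neither"]

# Hypothetical labeled examples (placeholders, not real data).
train = [
    ("slur slur attack group", "hate"),
    ("hateful slur against group", "hate"),
    ("curse curse insult", "offensive"),
    ("rude curse word insult", "offensive"),
    ("nice day with friends", "neither"),
    ("great game last night", "neither"),
]

def fit(data):
    """Count words per class and class frequencies."""
    word_counts = {c: Counter() for c in LABELS}
    class_counts = Counter()
    vocab = set()
    for text, label in data:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Pick the class with the highest log prior + smoothed log likelihood."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for c in LABELS:
        lp = math.log(class_counts[c] / total)
        denom = sum(word_counts[c].values()) + len(vocab)  # Laplace smoothing
        for w in text.split():
            lp += math.log((word_counts[c][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

model = fit(train)
print(predict("curse insult", *model))  # → offensive
```

The paper's observation that tweets without explicit hate keywords are harder to classify shows up even in a sketch like this: a lexical model has no signal for such tweets beyond their ordinary vocabulary.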
ISSN: 2162-3449, 2334-0770
DOI: 10.1609/icwsm.v11i1.14955