Could a Conversational AI Identify Offensive Language?

Bibliographic Details
Published in: Information (Basel), Vol. 12, No. 10, p. 418
Main Authors: da Silva, Daniela America; Louro, Henrique Duarte Borges; Goncalves, Gildarcio Sousa; Marques, Johnny Cardoso; Dias, Luiz Alberto Vieira; da Cunha, Adilson Marques; Tasinaffo, Paulo Marcelo
Format: Journal Article
Language: English
Published: Basel, MDPI AG, 01.10.2021

Summary: In recent years, we have seen widespread use of Artificial Intelligence (AI) applications on the Internet and elsewhere. Natural Language Processing and Machine Learning are important sub-fields of AI that have made Chatbots and Conversational AI applications possible. These algorithms are built from historical data in order to create language models; however, historical data can be intrinsically discriminatory. This article investigates whether a Conversational AI can identify offensive language, and it shows how large language models often produce unethical behavior because of bias in the historical data. Our low-level proof-of-concept presents the challenges of detecting offensive language in social media and discusses some steps toward achieving strong results in the detection of offensive language and unethical behavior using a Conversational AI.
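The abstract describes a proof-of-concept for detecting offensive language in social-media text. As a rough illustration only (the record does not detail the authors' implementation), the Python sketch below shows a baseline offensive-language classifier; the toy texts, labels, and the TF-IDF plus logistic-regression setup are all assumptions made for illustration.

# Minimal baseline sketch: classify posts as offensive (1) or not (0).
# Toy data for illustration; a real study would train on an annotated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["you are wonderful", "I hate you, idiot",
               "have a great day", "get lost, loser"]
train_labels = [0, 1, 0, 1]

# TF-IDF word/bigram features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

# Score an incoming message before a conversational agent replies to it.
print(clf.predict(["you are an idiot"]))  # e.g. [1] -> flagged as offensive

A conversational agent could use such a score as a gate, routing flagged messages to a safe fallback response; this is a generic pattern, not necessarily the approach taken in the article.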
ISSN: 2078-2489
DOI: 10.3390/info12100418