Identification of Profane Words in Cyberbullying Incidents within Social Networks

Bibliographic Details
Published in: Journal of Information Science Theory and Practice, Vol. 9, No. 1, pp. 24-34
Main Authors: Ali, Wan Noor Hamiza Wan; Mohd, Masnizah; Fauzi, Fariza
Format: Journal Article
Language: Korean
Published: 2021
Summary: The popularity of social networking sites (SNS) has facilitated communication between users. SNS helps users in their daily lives in various ways, such as sharing opinions, keeping in touch with old friends, making new friends, and getting information. However, some users misuse SNS to belittle or hurt others using profanities, which is typical in cyberbullying incidents. Thus, in this study we aim to identify profane words in the ASKfm corpus and to analyze the distribution of profane words across the four roles involved in cyberbullying, based on a lexicon dictionary. These four roles are: harasser, victim, bystander who assists the bully, and bystander who defends the victim. Evaluation in this study focuses on the occurrences of profane words for each role in the corpus. The top 10 most common profane words in the corpus are also identified and presented in a graph. The analysis shows that all four roles use profane words in their conversations, with different weights and distributions, even though the profane words used are largely similar. The harasser ranks first in the use of profane words compared with the other roles. These results can be further explored as a potential feature in a cyberbullying detection model based on a machine learning approach. They also contribute to formulating a suitable representation and, in future work, to building a cyberbullying detection model based on the distribution of profane words across different cyberbullying roles in social networks.
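The lexicon-based counting described in the summary is straightforward to prototype. The Python sketch below matches post tokens against a profanity lexicon, tallies occurrences per cyberbullying role, and lists the most frequent profane words; the lexicon entries, role labels, and example posts are hypothetical placeholders, not the actual ASKfm corpus or the dictionary used in the paper.

```python
from collections import Counter
import re

# Hypothetical profanity lexicon standing in for the lexicon dictionary
# referenced in the summary; the real word list is not reproduced here.
PROFANITY_LEXICON = {"idiot", "stupid", "loser"}

# Hypothetical (role, post text) pairs standing in for the annotated corpus.
posts = [
    ("harasser", "You are such a loser and an idiot."),
    ("bystander_assistant", "Yeah, what a stupid loser."),
    ("victim", "Stop calling me stupid."),
    ("bystander_defender", "Leave them alone, they did nothing wrong."),
]

def tokenize(text):
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Count profane-word occurrences per role and per word.
role_counts = Counter()
word_counts = Counter()
for role, text in posts:
    hits = [tok for tok in tokenize(text) if tok in PROFANITY_LEXICON]
    role_counts[role] += len(hits)
    word_counts.update(hits)

# Rank roles by profane-word usage and list the most frequent profane
# words, mirroring the per-role distribution and top-10 analysis.
for role, count in role_counts.most_common():
    print(f"{role}: {count}")
print("Top profane words:", word_counts.most_common(10))
```

Per-role counts like these could be normalized by the number of posts per role and fed as features into a machine learning classifier, which is the direction the summary suggests for future work.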
Bibliography: KISTI1.1003/JNL.JAKO202117256243454
ISSN: 2287-9099, 2287-4577