Generating A Crowdsourced Conversation Dataset to Combat Cybergrooming

Bibliographic Details
Published in: arXiv.org
Main Authors: Zhang, Xinyi; Wisniewski, Pamela J.; Cho, Jin-Hee; Huang, Lifu; Lee, Sang Won
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 21.05.2024

Summary: Cybergrooming is a growing threat to adolescent safety and mental health. One way to combat it is to leverage predictive artificial intelligence (AI) to detect predatory behaviors on social media. However, such methods can produce false positives and carry negative implications such as privacy concerns. A complementary strategy is to use generative AI to empower adolescents by educating them about predatory behaviors. To this end, we envision developing state-of-the-art conversational agents that simulate conversations between adolescents and predators for educational purposes. A key challenge, however, is the lack of a dataset to train such conversational agents. In this position paper, we present our motivation for empowering adolescents to cope with cybergrooming, and we propose to build large-scale, authentic datasets through an online survey targeting adolescents and parents. We discuss the initial background behind our motivation and the proposed survey design, which situates participants in simulated cybergrooming scenarios and then collects their authentic responses. We also present several open questions related to our proposed approach and hope to discuss them with the workshop attendees.
ISSN: 2331-8422