Spoiler in a Textstack: How Much Can Transformers Help?

Bibliographic Details
Published in: arXiv.org
Main Authors: Wróblewska, Anna; Rzepiński, Paweł; Sysko-Romańczuk, Sylwia
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 24.12.2021
Summary: This paper presents our research on spoiler detection in reviews. In this use case, we describe a method of fine-tuning the available text-based models and organizing their tasks using the latest deep learning achievements, together with techniques for interpreting the models' results. Until now, spoiler research has rarely been described in the literature. We tested the transfer learning approach and several of the latest transformer architectures on two open datasets with annotated spoilers, reaching a ROC AUC above 81% on the TV Tropes Movies dataset and above 88% on the Goodreads dataset. We also collected data and assembled a new dataset with fine-grained annotations. To that end, we employed interpretability techniques and measures to assess the models' reliability and explain their results.
ISSN: 2331-8422
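
As a rough illustration of the pipeline the summary describes (fine-tuning a pretrained transformer as a binary spoiler classifier and reporting ROC AUC), here is a minimal sketch in Python. The model name, hyperparameters, and toy data below are illustrative assumptions, not the authors' exact setup; the paper itself compares several transformer architectures on the TV Tropes Movies and Goodreads datasets.

```python
# Minimal sketch: fine-tune a pretrained transformer for binary spoiler
# detection, then score predictions with ROC AUC. All specifics (model,
# learning rate, toy data) are assumptions for illustration only.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from sklearn.metrics import roc_auc_score

MODEL_NAME = "bert-base-uncased"  # assumption; the paper evaluates several architectures

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy stand-in for a spoiler-annotated review dataset (label 1 = contains spoiler).
texts = ["The butler did it in the final scene.", "Great pacing and lovely cinematography."]
labels = [1, 0]

enc = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(1):  # single epoch for the sketch
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Evaluation: the predicted probability of the "spoiler" class feeds ROC AUC.
model.eval()
with torch.no_grad():
    logits = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"]).logits
    spoiler_prob = torch.softmax(logits, dim=-1)[:, 1]
print("ROC AUC:", roc_auc_score(labels, spoiler_prob.numpy()))
```

In practice one would evaluate on a held-out split of the annotated datasets rather than the training examples; ROC AUC is used here, as in the summary, because it is threshold-independent and robust to the class imbalance typical of spoiler annotations.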