Shuffle & Divide: Contrastive Learning for Long Text

Bibliographic Details
Published in: arXiv.org
Main Authors: Lee, Joonseok; Joe, Seongho; Park, Kyoungwon; Kim, Bogun; Kang, Hoyoung; Park, Jaeseon; Gwon, Youngjune
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 19.04.2023

More Information
Summary: We propose a self-supervised learning method for long text documents based on contrastive learning. A key to our method is Shuffle and Divide (SaD), a simple text augmentation algorithm that sets up the pretext task required for contrastive updates to a BERT-based document embedding. SaD splits a document into two sub-documents, each containing randomly shuffled words drawn from the entire document. The two sub-documents are treated as a positive pair, leaving all other documents in the corpus as negatives. After SaD, we repeat the contrastive-update and clustering phases until convergence. Labeling text documents is naturally a time-consuming, cumbersome task, and our method helps reduce human effort, which is among the most expensive resources in AI. We have empirically evaluated our method by performing unsupervised text classification on the 20 Newsgroups, Reuters-21578, BBC, and BBCSport datasets. In particular, our method improves on the current state of the art, SS-SB-MT, on 20 Newsgroups by 20.94% in accuracy. We also achieve state-of-the-art performance on Reuters-21578 and exceptionally high accuracy (over 95%) for unsupervised classification on the BBC and BBCSport datasets.
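
For intuition, here is a minimal Python sketch of the SaD augmentation as described in the summary; the function name, whitespace tokenization, and even 50/50 split are illustrative assumptions, not the authors' implementation:

    import random

    def shuffle_and_divide(document: str, rng: random.Random) -> tuple[str, str]:
        # Shuffle all words of the document, then divide them into two
        # sub-documents. The pair serves as positive examples for
        # contrastive learning; sub-documents derived from other
        # documents in the corpus act as negatives.
        words = document.split()   # naive whitespace tokenization (assumption)
        rng.shuffle(words)         # shuffle words across the entire document
        mid = len(words) // 2      # even split into two halves (assumption)
        return " ".join(words[:mid]), " ".join(words[mid:])

    # Usage: the two halves of one shuffled document form a positive pair.
    a, b = shuffle_and_divide("contrastive learning for long text documents",
                              random.Random(0))

Because both sub-documents draw their words from the same shuffled pool, they share topical vocabulary without sharing word order, which is what makes them a useful positive pair for updating the document embedding.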
ISSN: 2331-8422
DOI: 10.48550/arxiv.2304.09374