Improved Language Identification Through Cross-Lingual Self-Supervised Learning

Bibliographic Details
Published in: arXiv.org
Main Authors: Tjandra, Andros; Choudhury, Diptanu Gon; Zhang, Frank; Singh, Kritika; Conneau, Alexis; Baevski, Alexei; Sela, Assaf; Saraf, Yatharth; Auli, Michael
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.10.2021

Summary: Language identification greatly impacts the success of downstream tasks such as automatic speech recognition. Recently, self-supervised speech representations learned by wav2vec 2.0 have been shown to be very effective for a range of speech tasks. We extend previous self-supervised work on language identification by experimenting with pre-trained models which were learned on real-world unconstrained speech in multiple languages, not just on English. We show that models pre-trained on many languages perform better and enable language identification systems that require very little labeled data to perform well. Results on a 26-language setup show that with only 10 minutes of labeled data per language, a cross-lingually pre-trained model can achieve over 89.2% accuracy.
ISSN: 2331-8422
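
Illustration: the record itself contains no code, but the approach the summary describes (a cross-lingually pre-trained wav2vec 2.0 encoder fine-tuned as a language-identification classifier) can be sketched as below. This is a minimal sketch only: it assumes the Hugging Face transformers implementation and uses the public facebook/wav2vec2-large-xlsr-53 checkpoint as a stand-in for the paper's pre-trained models; the classification head and checkpoint choice are assumptions, not the authors' setup.

    # Hedged sketch: language identification on top of a cross-lingually
    # pre-trained wav2vec 2.0 encoder. Checkpoint and classification head
    # are assumptions (Hugging Face transformers), not the paper's code.
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

    NUM_LANGUAGES = 26  # the 26-language setup reported in the abstract

    # XLSR-53 is a publicly available multilingual wav2vec 2.0 checkpoint,
    # used here as a stand-in for the paper's pre-trained models.
    extractor = Wav2Vec2FeatureExtractor(sampling_rate=16_000)
    model = Wav2Vec2ForSequenceClassification.from_pretrained(
        "facebook/wav2vec2-large-xlsr-53",
        num_labels=NUM_LANGUAGES,  # randomly initialized head, to be fine-tuned
    )

    # One 16 kHz utterance (random noise standing in for real audio).
    waveform = torch.randn(16_000 * 3)
    inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

    # The head pools the encoder states and emits one logit per language;
    # fine-tuning would apply a standard cross-entropy loss to these logits.
    logits = model(**inputs).logits
    predicted_language_id = logits.argmax(dim=-1).item()

In the low-resource setting described in the summary, such a model would then be fine-tuned on roughly 10 minutes of labeled audio per language.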