Contrastive Semi-supervised Learning for ASR
Format | Journal Article |
---|---|
Language | English |
Published | 08.03.2021 |
Summary: | Pseudo-labeling is the most adopted method for pre-training automatic speech recognition (ASR) models. However, its performance suffers from the supervised teacher model's degrading quality in low-resource setups and under domain transfer. Inspired by the successes of contrastive representation learning for computer vision and speech applications, and more recently for supervised learning of visual objects, we propose Contrastive Semi-supervised Learning (CSL). CSL eschews directly predicting teacher-generated pseudo-labels in favor of utilizing them to select positive and negative examples. In the challenging task of transcribing public social media videos, using CSL reduces the WER by 8% compared to the standard Cross-Entropy pseudo-labeling (CE-PL) when 10hr of supervised data is used to annotate 75,000hr of videos. The WER reduction jumps to 19% under the ultra low-resource condition of using 1hr labels for teacher supervision. CSL generalizes much better in out-of-domain conditions, showing up to 17% WER reduction compared to the best CE-PL pre-trained model. |
---|---|
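The abstract only sketches the method; the precise objective is given in the paper itself. As a rough, non-authoritative illustration of the core idea — using teacher pseudo-labels to select positive and negative pairs for a contrastive objective instead of using them as cross-entropy targets — the following minimal PyTorch sketch applies a supervised-contrastive-style loss over student embeddings. The function name, temperature value, and frame-level batching are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_pseudo_label_loss(embeddings, pseudo_labels, temperature=0.1):
    """Contrastive loss where teacher pseudo-labels choose the pairs:
    positives share the same pseudo-label, negatives do not.

    embeddings:    (N, D) frame/segment embeddings from the student encoder
    pseudo_labels: (N,)   integer pseudo-labels produced by the teacher
    """
    z = F.normalize(embeddings, dim=1)            # unit-norm embeddings
    sim = z @ z.t() / temperature                 # (N, N) similarity logits

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    pos_mask = (pos_mask & ~self_mask).float()    # same pseudo-label, excluding self

    # log-softmax over all other examples (positives and negatives)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average positive log-probability per anchor that has at least one positive
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()
```

In this reading, replacing a cross-entropy target with pseudo-label-driven pair selection makes the objective less sensitive to individual label errors from a weak teacher, which is consistent with the low-resource and domain-transfer gains reported in the summary above.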
DOI: | 10.48550/arxiv.2103.05149 |
---|---|