Accent-Robust Automatic Speech Recognition Using Supervised and Unsupervised Wav2vec Embeddings

Bibliographic Details
Main Authors: Li, Jialu; Manohar, Vimal; Chitkara, Pooja; Tjandra, Andros; Picheny, Michael; Zhang, Frank; Zhang, Xiaohui; Saraf, Yatharth
Format: Journal Article
Language: English
Published: 07.10.2021

Summary: Speech recognition models often degrade when tested on speech with unseen accents. Domain-adversarial training (DAT) and multi-task learning (MTL) are two common approaches to building accent-robust ASR models; using accent embeddings is another way to improve robustness to accents. In this study, we perform systematic comparisons of DAT and MTL approaches on a large English accent corpus (4,000 hours of US English speech and 1,244 hours of speech in 20 non-US English accents). We explore embeddings trained in supervised and unsupervised settings: a separate embedding matrix trained using accent labels, and embeddings extracted from a fine-tuned wav2vec model. We find that our DAT model trained with supervised embeddings achieves the best overall performance and consistently provides benefits across all test sets, while our MTL model trained with wav2vec embeddings helps learn accent-invariant features and improves performance on novel/unseen accents. We also show that wav2vec embeddings offer more advantages for building accent-robust ASR when no accent labels are available for training supervised embeddings.
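The core mechanism behind DAT is a gradient reversal layer between the encoder and an accent classifier: the forward pass is the identity, but the backward pass flips (and scales) the gradient, so the encoder is pushed to produce accent-invariant features while the accent classifier still trains normally. A minimal, framework-free sketch of that backward step (the function name `grad_reverse_backward` and the scale `lam` are illustrative, not from the paper):

```python
def grad_reverse_backward(upstream_grad, lam=1.0):
    """Gradient flowing from the accent classifier into the encoder:
    sign-flipped and scaled by lam (forward pass is the identity)."""
    return [-lam * g for g in upstream_grad]

# Toy example: the accent classifier sends gradient [0.2, -0.5] backward;
# the encoder instead receives the reversed, scaled gradient.
upstream = [0.2, -0.5]
reversed_grad = grad_reverse_backward(upstream, lam=0.5)
```

In an MTL setup the same accent-classifier gradient would be passed through unchanged (equivalently, `lam = -1` here), which is the key difference between the two training schemes compared in the paper.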
DOI:10.48550/arxiv.2110.03520