CTC-Segmentation of Large Corpora for German End-to-End Speech Recognition

Bibliographic Details
Published in: Speech and Computer, Vol. 12335, pp. 267-278
Main Authors: Kürzinger, Ludwig; Winkelbauer, Dominik; Li, Lujun; Watzel, Tobias; Rigoll, Gerhard
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2020
Series: Lecture Notes in Computer Science
ISBN: 3030602753, 9783030602758
ISSN: 0302-9743, 1611-3349
DOI: 10.1007/978-3-030-60276-5_27

More Information
Summary: Recent end-to-end Automatic Speech Recognition (ASR) systems have demonstrated the ability to outperform conventional hybrid DNN/HMM ASR. Aside from architectural improvements, these models have grown in depth, parameter count, and model capacity; however, they also require more training data to achieve comparable performance. In this work, we combine freely available corpora for German speech recognition, including as-yet unlabeled speech data, into a large dataset of over 1700 hours of speech. For data preparation, we propose a two-stage approach that uses an ASR model pre-trained with Connectionist Temporal Classification (CTC) to bootstrap more training data from unsegmented or unlabeled material: utterance segment alignments are determined from the label probabilities produced by the CTC-trained network. With this training data, we trained a hybrid CTC/attention Transformer model that achieves 12.8% WER on the Tuda-DE test set, surpassing the previous conventional hybrid DNN/HMM baseline of 14.4%.
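The alignment stage summarized above is the CTC-segmentation algorithm, which was released alongside the paper as the open-source ctc-segmentation Python package (pip install ctc-segmentation). Below is a minimal sketch of how such an alignment call might look; the function and parameter names follow recent releases of that package and may differ between versions, while align_utterances itself and its inputs lpz (the CTC label log-probability matrix from a pre-trained network), char_list (the CTC output alphabet), and index_duration (seconds per CTC output frame) are illustrative assumptions, not code taken from the paper.

    from ctc_segmentation import (
        CtcSegmentationParameters,
        ctc_segmentation,
        determine_utterance_segments,
        prepare_text,
    )

    def align_utterances(lpz, char_list, utterances, index_duration):
        """Align ground-truth utterance texts to a long audio recording.

        lpz            -- NumPy array of shape (T, vocab): CTC label
                          log-probabilities from a pre-trained network.
        char_list      -- list of output symbols matching lpz's columns.
        utterances     -- list of ground-truth utterance strings.
        index_duration -- seconds of audio per CTC output frame.
        Returns one (start_s, end_s, confidence) tuple per utterance.
        """
        config = CtcSegmentationParameters(char_list=char_list)
        config.index_duration = index_duration
        # Encode the utterance texts as a matrix of character indices.
        ground_truth_mat, utt_begin_indices = prepare_text(config, utterances)
        # Dynamic-programming alignment of CTC probabilities to the text.
        timings, char_probs, state_list = ctc_segmentation(
            config, lpz, ground_truth_mat
        )
        # Convert per-character timings into per-utterance segments.
        return determine_utterance_segments(
            config, utt_begin_indices, char_probs, timings, utterances
        )

Each returned segment carries a confidence score derived from the aligned label probabilities; the paper uses such a score to discard poorly aligned utterances before adding them to the training set.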
Bibliography: L. Kürzinger and D. Winkelbauer contributed equally to this work.