Semi-supervised End-to-end Speech Recognition Using Text-to-speech and Autoencoders

Bibliographic Details
Published in: ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6166 - 6170
Main Authors: Karita, Shigeki; Watanabe, Shinji; Iwata, Tomoharu; Delcroix, Marc; Ogawa, Atsunori; Nakatani, Tomohiro
Format: Conference Proceeding
Language: English
Published: IEEE, 01.05.2019
Summary: We introduce speech and text autoencoders that share encoders and decoders with an automatic speech recognition (ASR) model to improve ASR performance with large speech-only and text-only training datasets. To build the speech and text autoencoders, we leverage state-of-the-art ASR and text-to-speech (TTS) encoder-decoder architectures. These autoencoders learn features from speech-only and text-only datasets by switching the encoders and decoders used in the ASR and TTS models. Simultaneously, a multi-task loss encourages them to encode features that remain compatible with the ASR and TTS models. Additionally, we anticipate that joint TTS training can also improve ASR performance, because both the ASR and TTS models learn transformations between speech and text. In our semi-supervised end-to-end ASR/TTS experiments, retraining a model initially trained on a small paired subset of the LibriSpeech corpus with a large unpaired subset of the corpus reduced the character error rate from 10.4% to 8.4% and the word error rate from 20.6% to 18.0%.
ISSN: 2379-190X
DOI: 10.1109/ICASSP.2019.8682890
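
The summary above describes four paths built from two shared encoders and two shared decoders: ASR (speech encoder, text decoder), TTS (text encoder, speech decoder), a speech autoencoder (speech encoder, speech decoder), and a text autoencoder (text encoder, text decoder), combined in a multi-task loss. The following is a minimal PyTorch sketch of that sharing scheme only; the single-layer modules, toy dimensions, and loss weights are illustrative assumptions, not the authors' actual architecture.

```python
# Toy illustration of encoder/decoder sharing across ASR, TTS, and the
# two autoencoders. Module shapes and loss weights are hypothetical.
import torch
import torch.nn as nn

FEAT_DIM, HID_DIM, VOCAB = 80, 256, 30

# Four shared components: two encoders, two decoders.
speech_enc = nn.Linear(FEAT_DIM, HID_DIM)   # shared by ASR and the speech autoencoder
text_enc   = nn.Embedding(VOCAB, HID_DIM)   # shared by TTS and the text autoencoder
text_dec   = nn.Linear(HID_DIM, VOCAB)      # shared by ASR and the text autoencoder
speech_dec = nn.Linear(HID_DIM, FEAT_DIM)   # shared by TTS and the speech autoencoder

ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def multitask_loss(paired_speech, paired_text, speech_only, text_only):
    """Weighted sum over the four paths; the 0.1 weights are assumptions."""
    # Supervised paths, trained on the small paired set.
    asr = ce(text_dec(speech_enc(paired_speech)), paired_text)
    tts = mse(speech_dec(text_enc(paired_text)), paired_speech)
    # Unsupervised autoencoder paths, trained on the large unpaired sets
    # by swapping which decoder follows which encoder.
    speech_ae = mse(speech_dec(speech_enc(speech_only)), speech_only)
    text_ae = ce(text_dec(text_enc(text_only)), text_only)
    return asr + tts + 0.1 * speech_ae + 0.1 * text_ae

# Toy batch: 4 frames of speech features and 4 character labels.
speech = torch.randn(4, FEAT_DIM)
text = torch.randint(0, VOCAB, (4,))
loss = multitask_loss(speech, text,
                      torch.randn(4, FEAT_DIM),          # speech-only batch
                      torch.randint(0, VOCAB, (4,)))     # text-only batch
loss.backward()  # gradients flow into every shared module
```

Because each encoder and decoder appears in two of the four terms, gradients from the unpaired autoencoder losses update the same parameters the ASR path uses, which is how the unpaired data can improve recognition.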