Multilingual representations for low resource speech recognition and keyword search

Bibliographic Details
Published in: 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 259-266
Main Authors: Cui, Jia; Kingsbury, Brian; Ramabhadran, Bhuvana; Sethy, Abhinav; Audhkhasi, Kartik; Cui, Xiaodong; Kislal, Ellen; Mangu, Lidia; Nussbaum-Thom, Markus; Picheny, Michael; Tuske, Zoltan; Golik, Pavel; Schluter, Ralf; Ney, Hermann; Gales, Mark J. F.; Knill, Kate M.; Ragni, Anton; Wang, Haipeng; Woodland, Phil
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2015
Summary: This paper examines the impact of multilingual (ML) acoustic representations on Automatic Speech Recognition (ASR) and keyword search (KWS) for low resource languages in the context of the OpenKWS15 evaluation of the IARPA Babel program. The task is to develop Swahili ASR and KWS systems within two weeks using as little as 3 hours of transcribed data. Multilingual acoustic representations proved to be crucial for building these systems under strict time constraints. The paper discusses several key insights on how these representations are derived and used. First, we present a data sampling strategy that can speed up the training of multilingual representations without appreciable loss in ASR performance. Second, we show that fusion of diverse multilingual representations developed at different LORELEI sites yields substantial ASR and KWS gains. Speaker adaptation and data augmentation of these representations improve both ASR and KWS performance (up to 8.7% relative). Third, incorporating untranscribed data through semi-supervised learning improves WER and KWS performance. Finally, we show that these multilingual representations significantly improve ASR and KWS performance (9% relative for WER and 5% for MTWV) even when forty hours of transcribed audio in the target language are available. Multilingual representations significantly contributed to the LORELEI KWS systems winning the OpenKWS15 evaluation.
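
The summary above reports KWS gains in terms of MTWV. For reference, a minimal sketch of the term-weighted value metric as defined in the NIST OpenKWS evaluation plans (background material, not taken from this record; K denotes the keyword set, \theta a detection threshold, and T_speech the amount of evaluated speech in seconds):

\[
\mathrm{TWV}(\theta) = 1 - \frac{1}{|K|}\sum_{k \in K}\Big[ P_{\mathrm{miss}}(k,\theta) + \beta\, P_{\mathrm{FA}}(k,\theta) \Big], \qquad \beta = 999.9,
\]
\[
P_{\mathrm{miss}}(k,\theta) = 1 - \frac{N_{\mathrm{correct}}(k,\theta)}{N_{\mathrm{true}}(k)}, \qquad
P_{\mathrm{FA}}(k,\theta) = \frac{N_{\mathrm{FA}}(k,\theta)}{T_{\mathrm{speech}} - N_{\mathrm{true}}(k)}.
\]

MTWV is the maximum of TWV over the detection threshold \theta, i.e. the best trade-off between misses and false alarms achievable by sweeping the threshold.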
DOI: 10.1109/ASRU.2015.7404803
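
The summary's second point concerns fusing diverse multilingual representations developed at different sites. The record does not describe the fusion mechanism, so the following is only an illustrative sketch of one common approach, frame-level concatenation of bottleneck features from independently trained multilingual networks; the feature dimensions and the random linear stand-in extractors are assumptions made purely for the example.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for bottleneck extractors from two independently trained
# multilingual networks (e.g., from two different sites). Real systems
# would use trained neural networks; random linear maps with a tanh
# nonlinearity keep this sketch self-contained and runnable.
W_site_a = rng.standard_normal((40, 80))  # 40-dim acoustic input -> 80-dim bottleneck
W_site_b = rng.standard_normal((40, 60))  # 40-dim acoustic input -> 60-dim bottleneck

def fuse_multilingual_features(frames):
    """Concatenate frame-level bottleneck features from both extractors.

    frames: array of shape [T, 40] holding T frames of target-language
    acoustic features (dimensions are illustrative assumptions).
    """
    bn_a = np.tanh(frames @ W_site_a)            # shape [T, 80]
    bn_b = np.tanh(frames @ W_site_b)            # shape [T, 60]
    return np.concatenate([bn_a, bn_b], axis=1)  # shape [T, 140] fused representation

# Usage: 100 frames of 40-dim features for the target language.
fused = fuse_multilingual_features(rng.standard_normal((100, 40)))
print(fused.shape)  # (100, 140)

In a real system the stand-in matrices would be replaced by trained multilingual bottleneck networks, and the fused features would feed the target-language acoustic model.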