A unified deep modeling approach to simultaneous speech dereverberation and recognition for the reverb challenge

Bibliographic Details
Published in: 2017 Hands-free Speech Communications and Microphone Arrays (HSCMA), pp. 36 - 40
Main Authors: Bo Wu, Kehuang Li, Zhen Huang, Sabato Marco Siniscalchi, Minglei Yang, Chin-Hui Lee
Format: Conference Proceeding
Language: English
Published: IEEE, 2017

Summary: We propose a unified deep neural network (DNN) approach to achieve both high-quality enhanced speech and high-accuracy automatic speech recognition (ASR) simultaneously on the recent REverberant Voice Enhancement and Recognition Benchmark (REVERB) Challenge. These two goals are accomplished by two proposed techniques: DNN-based regression to enhance reverberant and noisy speech, followed by DNN-based multi-condition training that takes clean-condition, multi-condition, and enhanced speech all into consideration. We first report objective measures of the enhanced speech superior to those listed in the 2014 REVERB Challenge Workshop. We then show that, with clean-condition training, the proposed DNN-based pre-processing scheme obtains the best word error rate (WER) of 13.28% on the 1-channel REVERB simulated evaluation data. Similarly, we attain a competitive single-system WER of 8.75% with the proposed multi-condition training strategy and the same less-discriminative log power spectrum features used in the enhancement stage. Finally, by leveraging joint training with more discriminative ASR features and improved neural-network-based language models, a state-of-the-art WER of 4.46% is attained with a single ASR system and single-channel information. A further state-of-the-art WER of 4.10% is achieved through system combination.
DOI:10.1109/HSCMA.2017.7895557
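The DNN-based regression stage described in the summary maps frames of reverberant log power spectra to estimates of the corresponding clean frames. The toy sketch below illustrates only the shape of that mapping with a single-hidden-layer network; the dimensions, initialization, and function names are assumptions for illustration, not the paper's actual configuration (which uses much larger networks trained on paired reverberant/clean data).

```python
import math
import random

# Illustrative sketch only: a single-hidden-layer regression network
# mapping one reverberant log-power-spectrum frame to an estimate of
# the clean frame. Dimensions are toy values, not the paper's setup.
DIM = 8       # feature dimension per frame (real systems use hundreds)
HIDDEN = 16   # hidden-layer width (assumed for illustration)

random.seed(0)
# Randomly initialized weights; a real system would train these with
# MSE loss on paired reverberant/clean log-spectrum frames.
W1 = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
W2 = [[random.gauss(0, 0.1) for _ in range(HIDDEN)] for _ in range(DIM)]
b2 = [0.0] * DIM

def enhance(frame):
    """One forward pass: reverberant log-spectrum frame -> enhanced estimate."""
    h = [math.tanh(sum(w * x for w, x in zip(row, frame)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * v for w, v in zip(row, h)) + b
            for row, b in zip(W2, b2)]

reverberant_frame = [random.gauss(0, 1) for _ in range(DIM)]
enhanced_frame = enhance(reverberant_frame)
print(len(enhanced_frame))  # same dimensionality as the input frame
```

The enhanced frames can then serve two purposes at once, matching the two goals in the summary: they can be inverted back to a waveform for listening-quality evaluation, or fed (alongside clean- and multi-condition data) into the ASR acoustic-model training.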