Improved features and dynamic stream weight adaption for robust Audio-Visual Speech Recognition framework

Bibliographic Details
Published in: Digital Signal Processing, Vol. 89, pp. 17-29
Main Authors: Saudi, Ali S.; Khalil, Mahmoud I.; Abbas, Hazem M.
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.06.2019
Summary: This paper investigates the enhancement of a speech recognition system that uses both audio and visual speech information in noisy environments, presenting contributions in two main system stages: the front-end and the back-end. The double use of Gabor filters is proposed as a feature extractor in the front-end stage of both modules to capture robust spectro-temporal features. The performance obtained from the resulting Gabor Audio Features (GAFs) and Gabor Visual Features (GVFs) is compared with that of conventional features such as MFCC, PLP, and RASTA-PLP audio features and DCT2 visual features. The experimental results show that a system utilizing GAFs and GVFs performs better, especially in low-SNR scenarios. To improve the back-end stage, a complete framework of synchronous Multi-Stream Hidden Markov Models (MSHMMs) is used to solve the dynamic stream weight estimation problem for Audio-Visual Speech Recognition (AVSR). To demonstrate the usefulness of dynamic weighting for the overall performance of the AVSR system, we empirically show that Late Integration (LI) is preferable to Early Integration (EI), especially when one of the modalities is corrupted. The results confirm the superior recognition accuracy of the AVSR system with Late Integration at all SNR levels.
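
In the late-integration MSHMM setting described in the summary, the audio and visual streams are scored by separate observation models and their per-state log-likelihoods are combined with stream weights that sum to one; the weight can be adapted dynamically, for example according to the estimated acoustic SNR. The short Python sketch below illustrates only this standard combination rule; the function names, the linear SNR-to-weight mapping, and the SNR range are illustrative assumptions, not the estimator proposed in the paper.

    import numpy as np

    def snr_to_audio_weight(snr_db, low_db=-5.0, high_db=20.0):
        # Map an estimated acoustic SNR (dB) to an audio stream weight in [0, 1];
        # at low SNR the visual stream is trusted more (illustrative linear mapping).
        return float(np.clip((snr_db - low_db) / (high_db - low_db), 0.0, 1.0))

    def mshmm_state_score(log_lik_audio, log_lik_visual, audio_weight):
        # Stream-weighted combination of per-state observation log-likelihoods,
        # as used for late integration in a synchronous multi-stream HMM:
        #   log b_j(o_a, o_v) = w * log b_j(o_a) + (1 - w) * log b_j(o_v)
        a = np.asarray(log_lik_audio, dtype=float)
        v = np.asarray(log_lik_visual, dtype=float)
        return audio_weight * a + (1.0 - audio_weight) * v

    # Example: at 0 dB SNR the audio stream is down-weighted before Viterbi decoding.
    w = snr_to_audio_weight(0.0)
    combined = mshmm_state_score([-12.3, -9.8, -11.1], [-7.1, -8.4, -7.9], w)

Decoding then proceeds with the combined scores in place of single-stream observation likelihoods; with a fixed weight of 1.0 the model reduces to audio-only recognition, and with 0.0 to lip-reading.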
ISSN: 1051-2004, 1095-4333
DOI: 10.1016/j.dsp.2019.02.016