A New Ratio Mask Representation for CASA-Based Speech Enhancement

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 27, no. 1, pp. 7-19
Main Authors: Feng Bao; Waleed H. Abdulla
Format: Journal Article
Language: English
Published: Piscataway, IEEE, 01.01.2019 (The Institute of Electrical and Electronics Engineers, Inc.)

Summary: In the computational auditory scene analysis (CASA) framework, the ideal ratio mask, or alternatively the ideal binary mask, is key to reconstructing the enhanced signal. The ratio mask is commonly used in its Wiener-filtering or square-root form. However, this kind of ratio mask overlooks an important issue: it does not exploit the inter-channel correlation (ICC) among the noisy speech, noise, and clean speech spectra. In this paper, we therefore first propose a novel ratio mask representation that utilizes the ICC. In this way, the power ratio of speech and noise is adaptively reallocated during construction of the ratio mask, so that more speech components are retained and more noise components are masked. Second, a channel-weight contour based on the equal-loudness hearing attribute is adopted to revise the new ratio mask in each Gammatone filterbank channel. Finally, the revised ratio mask is used to train a five-layer deep neural network. Experiments show that the proposed ratio mask outperforms the conventional ratio mask representation and a series of other enhancement algorithms in terms of speech quality, intelligibility, and spectral distortion under different signal-to-noise ratio conditions with six types of noise.
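For context, the conventional ideal ratio mask (IRM) that the paper contrasts against can be sketched as below. This is an illustrative sketch of the standard textbook formulation, not the authors' code; the function name and the small stabilizing epsilon are assumptions for the example.

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power, sqrt_form=True):
    """Conventional IRM per time-frequency unit.

    Square-root form: sqrt(S^2 / (S^2 + N^2));
    Wiener-filtering form: S^2 / (S^2 + N^2).
    A tiny epsilon guards against division by zero (illustrative choice).
    """
    ratio = speech_power / (speech_power + noise_power + 1e-12)
    return np.sqrt(ratio) if sqrt_form else ratio

# Toy example: a unit where clean-speech and noise powers are equal,
# so the square-root mask is sqrt(0.5) ~= 0.707.
m = ideal_ratio_mask(np.array([1.0]), np.array([1.0]))
```

Note that this formulation treats each time-frequency unit independently, which is exactly the limitation the paper targets: it ignores the inter-channel correlation among the noisy, noise, and clean spectra.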
ISSN: 2329-9290, 2329-9304
DOI: 10.1109/TASLP.2018.2868407