A joint time-space-frequency filtering framework for multichannel speech enhancement via complex-valued tensor representations

Bibliographic Details
Published in: Applied Acoustics, Vol. 145, pp. 245-254
Main Authors: Jia, Xiangyu; Tong, Renjie; Ye, Zhongfu
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2019

More Information
Summary: Multichannel speech enhancement has become increasingly popular in both academia and industry. Most existing algorithms rely on spectral, temporal or spatial correlations in the observed noisy speech data. Nevertheless, little attention has been paid to the joint exploitation of correlations across the time, space and frequency domains. In this paper, we propose to integrate joint time-space-frequency filtering into a unified framework by representing the short-time Fourier transform coefficients of the observed multichannel speech data as a 3-dimensional complex-valued tensor. The spectral, temporal and spatial filters are iteratively updated to perform filtering along the three dimensions of the tensor, respectively. A locally optimal solution can generally be obtained in just a few iterations. Experiments are conducted to evaluate the performance of the proposed framework on both simulated and real acoustic systems. Results on the simulated systems show that the proposed framework outperforms several traditional multichannel speech enhancement algorithms in terms of objective measures, and in the real environment it also has an advantage over the other tested algorithms in terms of both subjective and objective measures. All results show that the proposed framework achieves effective noise reduction with little speech distortion.
ISSN: 0003-682X, 1872-910X
DOI: 10.1016/j.apacoust.2018.10.001
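
The summary above describes the core computational idea: stack the STFT coefficients of all channels into a complex-valued tensor of size frequency x frames x channels and alternately update a filter for each tensor dimension. The sketch below illustrates that alternating, mode-wise filtering pattern in Python/NumPy under simplifying assumptions; the function names (joint_tsf_enhance, mode_filter), the Wiener-style per-mode update built from noisy and noise-only covariance estimates, and the regularisation constant are illustrative choices, not the exact update rules of the paper.

```python
import numpy as np

def mode_filter(tensor, matrix, mode):
    """Apply a square filter matrix along one mode (0, 1 or 2) of a 3-D complex tensor."""
    moved = np.moveaxis(tensor, mode, 0)            # bring the filtered mode to the front
    filtered = np.tensordot(matrix, moved, axes=(1, 0))
    return np.moveaxis(filtered, 0, mode)           # restore the original axis order

def joint_tsf_enhance(noisy, noise, n_iter=3, reg=1e-3):
    """
    Alternating time-space-frequency filtering of a complex STFT tensor (illustrative sketch).

    noisy, noise : complex arrays of shape (n_freq, n_frames, n_channels);
                   noise is a noise-only reference, e.g. taken from speech pauses.
    Returns an enhanced STFT tensor of the same shape after n_iter sweeps over the three modes.
    """
    x, v = noisy.astype(complex), noise.astype(complex)
    for _ in range(n_iter):
        for mode in range(3):
            d = x.shape[mode]
            # Unfold along the current mode to estimate second-order statistics.
            X = np.moveaxis(x, mode, 0).reshape(d, -1)
            V = np.moveaxis(v, mode, 0).reshape(d, -1)
            R_y = X @ X.conj().T / X.shape[1]        # noisy covariance along this mode
            R_v = V @ V.conj().T / V.shape[1]        # noise covariance along this mode
            R_s = R_y - R_v                          # rough clean-speech covariance estimate
            # Wiener-style filter for this mode, with a small regularised inverse.
            W = R_s @ np.linalg.inv(R_y + reg * np.eye(d))
            x = mode_filter(x, W, mode)              # filter the noisy tensor
            v = mode_filter(v, W, mode)              # track the residual noise reference
    return x
```

As a toy usage example, joint_tsf_enhance(noisy_stft, noise_stft) with complex arrays of shape (257, 100, 4) returns an enhanced tensor of the same shape; the STFT analysis/synthesis and any reference-channel selection are deliberately left outside the sketch.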