MuSE: Multi-modal target speaker extraction with visual cues

Bibliographic Details
Published in: arXiv.org
Main Authors: Pan, Zexu; Tao, Ruijie; Xu, Chenglin; Li, Haizhou
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 10.02.2021

Summary: A speaker extraction algorithm relies on a speech sample from the target speaker as the reference point to focus its attention. Such reference speech is typically pre-recorded. On the other hand, the temporal synchronization between speech and lip movement also serves as an informative cue. Motivated by this idea, we study a novel technique that uses speech-lip visual cues to extract the reference target speech directly from the mixture speech at inference time, without the need for pre-recorded reference speech. We propose a multi-modal speaker extraction network, named MuSE, that is conditioned only on a lip image sequence. MuSE not only outperforms other competitive baselines in terms of SI-SDR and PESQ, but also shows consistent improvement in cross-dataset evaluations.
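For context, SI-SDR (scale-invariant signal-to-distortion ratio) is the standard objective metric cited in the summary. The sketch below is a minimal illustration of the textbook SI-SDR definition in Python with NumPy; the function name `si_sdr` and the toy signals are assumptions for demonstration, not code from the paper.

```python
import numpy as np

def si_sdr(estimate: np.ndarray, target: np.ndarray) -> float:
    """Scale-invariant signal-to-distortion ratio (SI-SDR) in dB.

    Textbook definition; assumed helper, not the authors' implementation.
    """
    # Remove the mean so the measure is invariant to DC offsets.
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target: the scaling makes the
    # metric invariant to the overall gain of the estimate.
    alpha = np.dot(estimate, target) / np.dot(target, target)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.sum(s_target**2) / np.sum(e_noise**2))

# Toy example: a clean sine as the target, a noisy copy as the
# "extracted" speech. Higher SI-SDR means a cleaner extraction.
t = np.linspace(0.0, 1.0, 16000)
target = np.sin(2 * np.pi * 440 * t)
estimate = target + 0.1 * np.random.randn(t.size)
print(f"SI-SDR: {si_sdr(estimate, target):.2f} dB")
```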
ISSN:2331-8422