Exploring VQ-VAE with Prosody Parameters for Speaker Anonymization


Bibliographic Details
Main Authors: Leang, Sotheara; Augusma, Anderson; Castelli, Eric; Letué, Frédérique; Sam, Sethserey; Vaufreydaz, Dominique
Format: Journal Article
Language: English
Published: 24.09.2024

Summary: Voice Privacy Challenge 2024 at INTERSPEECH 2024, Sep 2024, Kos Island, Greece. Human speech conveys prosody, linguistic content, and speaker identity. This article investigates a novel speaker anonymization approach using an end-to-end network based on a Vector-Quantized Variational Auto-Encoder (VQ-VAE) to deal with these speech components. The approach is designed to disentangle these components so as to specifically target and modify the speaker identity while preserving the linguistic and emotional content. To do so, three separate branches compute embeddings for content, prosody, and speaker identity, respectively. During synthesis, the decoder of the proposed architecture takes these embeddings and is conditioned on both speaker and prosody information, allowing it to capture more nuanced emotional states and make precise adjustments to the speaker identity. Findings indicate that this method outperforms most baseline techniques in preserving emotional information. However, it exhibits more limited performance on other voice privacy tasks, emphasizing the need for further improvements.
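The core mechanism the summary refers to can be illustrated in miniature: a VQ-VAE maps each encoder output to its nearest codebook vector, and the decoder is then conditioned on the quantized content embedding together with the prosody and speaker embeddings. The sketch below is illustrative only and is not the authors' implementation; the function names, codebook, and dimensions are hypothetical, and distances are computed in plain Python for clarity.

```python
# Minimal sketch of the vector-quantization step at the heart of a VQ-VAE,
# plus the concatenation-style decoder conditioning described in the summary.
# All names and values here are hypothetical, not from the paper.

def quantize(z, codebook):
    """Map an encoder output z to the nearest codebook vector (squared L2)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: dist2(z, codebook[i]))
    return idx, codebook[idx]

def decoder_conditioning(content_q, prosody_emb, speaker_emb):
    """Combine the three branch embeddings into one decoder conditioning vector."""
    return list(content_q) + list(prosody_emb) + list(speaker_emb)

# Toy 2-D codebook with three entries.
codebook = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]

idx, zq = quantize([0.9, 1.2], codebook)          # nearest entry: [1.0, 1.0]
dec_in = decoder_conditioning(zq, [0.5], [0.1, 0.2])
```

For anonymization, the speaker embedding fed to `decoder_conditioning` would be replaced or perturbed at synthesis time while the content and prosody embeddings are left intact, which is what lets the method preserve emotional information while altering identity.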
DOI: 10.48550/arxiv.2409.15882