Differentially Private Speaker Anonymization

Bibliographic Details
Published in: Proceedings on Privacy Enhancing Technologies, Vol. 2023, No. 1, pp. 98–114
Main Authors: Shahin Shamsabadi, Ali; Mohan Lal Srivastava, Brij; Bellet, Aurélien; Vauquier, Nathalie; Vincent, Emmanuel; Maouche, Mohamed; Tommasi, Marc; Papernot, Nicolas
Format: Journal Article
Language: English
Published: 01.01.2023

Summary: Sharing real-world speech utterances is key to the training and deployment of voice-based services. However, it also raises privacy risks, as speech contains a wealth of personal data. Speaker anonymization aims to remove speaker information from a speech utterance while leaving its linguistic and prosodic attributes intact. State-of-the-art techniques operate by disentangling the speaker information (represented via a speaker embedding) from these attributes and re-synthesizing speech based on the speaker embedding of another speaker. Prior research in the privacy community has shown that anonymization often provides brittle privacy protection, and rarely any provable guarantee. In this work, we show that disentanglement is indeed not perfect: linguistic and prosodic attributes still contain speaker information. We remove speaker information from these attributes by introducing differentially private feature extractors based on an autoencoder and an automatic speech recognizer, respectively, trained using noise layers. We plug these extractors into the state-of-the-art anonymization pipeline and generate, for the first time, private speech utterances with a provable upper bound on the speaker information they contain. We empirically evaluate the privacy and utility of our differentially private speaker anonymization approach on the LibriSpeech data set. Experimental results show that the generated utterances retain very high utility for automatic speech recognition training and inference, while being much better protected against strong adversaries who leverage full knowledge of the anonymization process to try to infer the speaker identity.
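The "noise layers" mentioned in the summary can be illustrated with a minimal sketch of the Laplace mechanism that underlies many differentially private feature extractors: clip a feature vector to bound its sensitivity, then add Laplace noise calibrated to a privacy budget ε. This is an illustrative assumption, not the authors' implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_noise_layer(features, clip_norm=1.0, epsilon=1.0, rng=None):
    """Hypothetical sketch of a differentially private noise layer.

    Clips the feature vector to an L1 norm of at most `clip_norm`
    (bounding the sensitivity of the layer), then adds Laplace noise
    with scale = sensitivity / epsilon (the Laplace mechanism).
    """
    rng = np.random.default_rng() if rng is None else rng
    features = np.asarray(features, dtype=float)
    # Clip: after this step, swapping the input for any other input
    # changes the output by at most 2 * clip_norm in L1 norm.
    norm = np.abs(features).sum()
    if norm > clip_norm:
        features = features * (clip_norm / norm)
    # Laplace mechanism: noise scale is sensitivity / epsilon,
    # so smaller epsilon (stronger privacy) means more noise.
    scale = 2.0 * clip_norm / epsilon
    return features + rng.laplace(loc=0.0, scale=scale, size=features.shape)
```

In a pipeline like the one described above, such a layer would sit at the output of the linguistic or prosodic feature extractor, so that downstream synthesis only ever sees noised representations.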
ISSN: 2299-0984
DOI: 10.56553/popets-2023-0007