High Epsilon Synthetic Data Vulnerabilities in MST and PrivBayes


Bibliographic Details
Published in: arXiv.org
Main Authors: Golob, Steven; Pentyala, Sikha; Maratkhan, Anuar; De Cock, Martine
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 09.02.2024

Summary: Synthetic data generation (SDG) has become increasingly popular as a privacy-enhancing technology. It aims to maintain important statistical properties of its underlying training data, while excluding any personally identifiable information. A whole host of SDG algorithms have been developed in recent years to improve and balance both of these aims. Many of these algorithms provide robust differential privacy guarantees. However, we show here that if the differential privacy parameter \(\varepsilon\) is set too high, then unambiguous privacy leakage can result. We show this by conducting a novel membership inference attack (MIA) on two state-of-the-art differentially private SDG algorithms: MST and PrivBayes. Our work suggests that there are vulnerabilities in these generators not previously seen, and that future work to strengthen their privacy is advisable. We present the heuristic for our MIA here. It assumes knowledge of auxiliary "population" data, and also assumes knowledge of which SDG algorithm was used. We use this information to adapt the recent DOMIAS MIA uniquely to MST and PrivBayes. Our approach went on to win the SNAKE challenge in November 2023.
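The abstract builds on the DOMIAS attack, which scores a candidate record by the ratio of its estimated density under the synthetic data to its density under the auxiliary "population" data: records the generator over-represents relative to the population are flagged as likely training members. The sketch below illustrates that density-ratio idea only; the data, variable names, and the use of Gaussian KDE are illustrative assumptions, not the authors' MST/PrivBayes-specific adaptation described in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical 1-D example: a leaky generator over-represents the region
# where the training "members" live, relative to the true population.
population = rng.normal(0.0, 1.0, size=2000)          # auxiliary reference data
members = rng.normal(2.0, 0.3, size=200)              # records in the training set
synthetic = np.concatenate([
    rng.normal(0.0, 1.0, size=1800),                  # bulk matches the population
    rng.normal(2.0, 0.3, size=400),                   # leaked mass around the members
])

# Density estimates over the synthetic output and the reference population.
p_syn = gaussian_kde(synthetic)
p_ref = gaussian_kde(population)

def domias_score(x):
    """DOMIAS-style membership score: p_synthetic(x) / p_reference(x).

    Scores well above 1 indicate the generator places more mass at x than
    the population does, suggesting x was a training member.
    """
    return p_syn(x) / p_ref(x)

# Members should receive systematically higher scores than random
# population records drawn from the same distribution.
print("median member score:    ", float(np.median(domias_score(members))))
print("median population score:", float(np.median(domias_score(population))))
```

In this toy setup the member scores separate clearly from the population scores; the paper's contribution is choosing the density model to match how MST and PrivBayes actually generate data, rather than a generic KDE.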
ISSN: 2331-8422