Formalizing Distribution Inference Risks

Bibliographic Details
Main Authors: Suri, Anshuman; Evans, David
Format: Journal Article
Language: English
Published: 07.06.2021

Summary: Property inference attacks reveal statistical properties about a training set but are difficult to distinguish from the primary purpose of statistical machine learning, which is to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.'s membership inference framework, we propose a formal and generic definition of property inference attacks. The proposed notion describes attacks that can distinguish between possible training distributions, extending beyond previous property inference attacks that infer the ratio of a particular type of data in the training data set. In this paper, we show how our definition captures previous property inference attacks as well as a new attack that reveals the average degree of nodes of a training graph, and we report on experiments giving insight into the potential risks of property inference attacks.
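
The definition summarized above is framed as distinguishing between possible training distributions. The following is a minimal sketch of what such a distinguishing game can look like; the notation ($\mathcal{D}$ for an underlying data distribution, $\mathcal{G}_0, \mathcal{G}_1$ for two candidate training distributions derived from it, $\mathcal{T}$ for the training algorithm, $\mathcal{A}$ for the adversary) is illustrative and not quoted from the paper.

\[
\begin{aligned}
&\textbf{Challenger: } \text{sample } b \sim \mathrm{Uniform}\{0,1\},\ \text{draw a training set } S \sim \mathcal{G}_b(\mathcal{D}),\ \text{train } M \leftarrow \mathcal{T}(S).\\
&\textbf{Adversary: } \text{given } M \text{ (and knowledge of } \mathcal{D}, \mathcal{G}_0, \mathcal{G}_1, \mathcal{T}\text{), output a guess } \hat{b} \leftarrow \mathcal{A}(M).\\
&\textbf{Advantage: } \mathrm{Adv}(\mathcal{A}) = \bigl|\Pr[\hat{b} = b] - \tfrac{1}{2}\bigr|.
\end{aligned}
\]

Under this view, earlier ratio-style property inference corresponds to the special case where $\mathcal{G}_0$ and $\mathcal{G}_1$ fix different proportions of a particular kind of record in the training set, while the graph attack mentioned in the summary would distinguish training distributions that differ in average node degree.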
DOI: 10.48550/arxiv.2106.03699