Formalizing Distribution Inference Risks
Format | Journal Article |
Language | English |
Published | 07.06.2021 |
Summary: | Property inference attacks reveal statistical properties about a training set but are difficult to distinguish from the primary purpose of statistical machine learning, which is to produce models that capture statistical properties about a distribution. Motivated by Yeom et al.'s membership inference framework, we propose a formal and generic definition of property inference attacks. The proposed notion describes attacks that can distinguish between possible training distributions, extending beyond previous property inference attacks that infer the ratio of a particular type of data in the training data set. In this paper, we show how our definition captures previous property inference attacks as well as a new attack that reveals the average degree of nodes of a training graph, and we report on experiments giving insight into the potential risks of property inference attacks. |
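The definition sketched in the abstract frames distribution inference as a distinguishing game: a training set is drawn from one of two candidate distributions, a model is trained and released, and the adversary guesses which distribution was used. A minimal toy simulation of that game is below; all names, parameters, and the stand-in "model" (an empirical mean standing in for whatever statistic a trained model leaks) are illustrative assumptions, not the paper's construction:

```python
import random

def train(dataset):
    # Toy "model": here the released model is just the empirical mean of a
    # binary attribute -- a stand-in for whatever statistic a real trained
    # model leaks about its training distribution.
    return sum(dataset) / len(dataset)

def sample_dataset(p, n, rng):
    # Draw n binary records; p is the fraction of 1-records, i.e. the
    # "ratio of a particular type of data" property that separates the
    # two candidate training distributions.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def distribution_inference_experiment(p0, p1, n, trials, seed=0):
    # Repeat the distinguishing game: challenger picks b, trains on a
    # dataset from D_b, releases the model; adversary guesses b.
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        b = rng.randrange(2)
        model = train(sample_dataset(p1 if b else p0, n, rng))
        guess = 1 if model > (p0 + p1) / 2 else 0  # simple threshold rule
        correct += (guess == b)
    return correct / trials

if __name__ == "__main__":
    # Well-separated distributions: the adversary wins almost always.
    acc = distribution_inference_experiment(p0=0.3, p1=0.7, n=100, trials=1000)
    print(f"adversary accuracy: {acc:.3f}")
```

An adversary's accuracy above 1/2 in this game quantifies the leakage; with identical candidate distributions (p0 == p1) the accuracy drops back to chance.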
DOI: | 10.48550/arxiv.2106.03699 |