The Privacy Onion Effect: Memorization is Relative

Bibliographic Details
Published in: arXiv.org
Main Authors: Carlini, Nicholas; Jagielski, Matthew; Zhang, Chiyuan; Papernot, Nicolas; Terzis, Andreas; Tramer, Florian
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 22.06.2022
ISSN: 2331-8422

Summary: Machine learning models trained on private datasets have been shown to leak their private data. While recent work has found that the average data point is rarely leaked, the outlier samples are frequently subject to memorization and, consequently, privacy leakage. We demonstrate and analyse an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack. We perform several experiments to study this effect, and understand why it occurs. The existence of this effect has various consequences. For example, it suggests that proposals to defend against memorization without training with rigorous privacy guarantees are unlikely to be effective. Further, it suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
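
To make the described experiment loop concrete, the sketch below walks through one "peel" of the onion: score every training point with a vulnerability proxy, remove the most-vulnerable layer, retrain, and re-score the remainder. This is a minimal illustration only; the paper's actual experiments use membership-inference attacks on trained models, and the dataset, model, scoring proxy, and layer size used here are all assumptions made for the sketch.

# Illustrative sketch, not the paper's code: per-example training loss of a
# logistic-regression model on synthetic data stands in for a real
# membership-inference vulnerability score. Dataset, model, layer size (50),
# and the scoring proxy are all assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def vulnerability_scores(X_train, y_train):
    # Crude proxy: a high per-example training loss marks a point as
    # outlier-like, which the summary associates with memorization and
    # higher exposure to privacy attacks.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    p_true = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
    return -np.log(p_true + 1e-12)

# "Layer" 1: score every point and peel off the most vulnerable ones.
scores_1 = vulnerability_scores(X, y)
layer_1 = np.argsort(scores_1)[-50:]          # 50 most exposed points
keep = np.setdiff1d(np.arange(len(y)), layer_1)

# "Layer" 2: retrain on the remaining data and re-score it; the onion effect
# predicts that points which looked safe before now top the ranking.
scores_2 = vulnerability_scores(X[keep], y[keep])
layer_2 = keep[np.argsort(scores_2)[-50:]]

orig_rank = np.argsort(np.argsort(scores_1))  # rank of each point in round 1
print("mean original percentile of new layer:", orig_rank[layer_2].mean() / len(y))

If the printed percentile is well below the top of the original ranking, the newly exposed layer consisted of points that previously looked comparatively safe, which is the qualitative behaviour the summary describes.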