Set-Membership Inference Attacks using Data Watermarking


Bibliographic Details
Main Authors: Laszkiewicz, Mike; Lukovnikov, Denis; Lederer, Johannes; Fischer, Asja
Format: Journal Article
Language: English
Published: 22.06.2023

Summary: In this work, we propose a set-membership inference attack for generative models using deep image watermarking techniques. In particular, we demonstrate how conditional sampling from a generative model can reveal the watermark that was injected into parts of the training data. Our empirical results demonstrate that the proposed watermarking technique is a principled approach for detecting the non-consensual use of image data in training generative models.
DOI: 10.48550/arxiv.2307.15067
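The core idea in the summary — inject a watermark into part of the training data, then test whether a model's outputs carry that watermark — can be illustrated with a much simpler stand-in than the paper's deep watermarking method. The sketch below is not the authors' technique: it uses a basic additive spread-spectrum watermark (a fixed pseudo-random pattern keyed by a seed) and a correlation detector, with arrays of random pixels standing in for both training data and model samples. All function names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(images, key, strength=0.05):
    """Illustrative spread-spectrum embedding: add a fixed pseudo-random
    pattern (derived from `key`) to every image, clipped back to [0, 1]."""
    pattern = np.random.default_rng(key).standard_normal(images.shape[1:])
    return np.clip(images + strength * pattern, 0.0, 1.0)

def watermark_score(samples, key):
    """Mean correlation of samples with the key's (unit-norm) pattern.
    A high score suggests the watermark is present in the samples."""
    pattern = np.random.default_rng(key).standard_normal(samples.shape[1:])
    pattern = pattern / np.linalg.norm(pattern)
    flat = samples.reshape(len(samples), -1)
    return float(np.mean(flat @ pattern.ravel()))

# Toy "training images": random grayscale data in [0, 1].
clean = rng.random((64, 16, 16))
marked = embed_watermark(clean, key=42)

# Stand-in for the inference step: samples from a model trained on the
# watermarked set would tend to reproduce the pattern; samples from a
# model trained only on clean data would not. Here we compare the two
# data sets directly instead of actual model outputs.
score_marked = watermark_score(marked, key=42)
score_clean = watermark_score(clean, key=42)
print(score_marked > score_clean)
```

In the paper's setting, the detector would be run on conditionally sampled model outputs rather than on the data itself, and the watermark would be embedded by a learned deep watermarking network instead of a fixed additive pattern; the threshold-based comparison of scores is what this toy version shares with that pipeline.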