Model Synthesis for Zero-Shot Model Attribution
Main Authors:
Format: Journal Article
Language: English
Published: 29.07.2023
Subjects:
Summary: Generative models are shaping various fields such as art, design, and human-computer interaction, yet they also bring challenges related to copyright infringement and content management. In response, existing research seeks to identify the unique fingerprints left on the images these models generate, which can be leveraged to attribute generated images to their source models. Existing methods, however, are constrained to identifying models within a static set included in the classifier's training and fail to adapt dynamically to newly emerged, unseen models. To bridge this gap, we aim to develop a generalized model fingerprint extractor capable of zero-shot attribution, i.e., one that attributes unseen models effectively without any exposure to them during training. Central to our method is a model synthesis technique that generates numerous synthetic models mimicking the fingerprint patterns of real-world generative models. The design of the synthesis technique is motivated by observations of how the basic architectural building blocks and parameters of generative models influence fingerprint patterns, and it is validated through two metrics designed to examine the synthetic models' fidelity and diversity. Our experiments demonstrate that this fingerprint extractor, trained solely on synthetic models, achieves impressive zero-shot generalization across a wide range of real-world generative models, improving model identification and verification accuracy on unseen models by over 40% and 15%, respectively, compared to existing approaches.
DOI: 10.48550/arxiv.2307.15977
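As a rough illustration of the zero-shot attribution setting described in the summary (not the authors' implementation), the sketch below assumes a fingerprint extractor network that maps generated images to normalized embeddings. Unseen models are identified by matching a query image's fingerprint against prototype embeddings built from a few reference images per model, and verification thresholds the cosine similarity between two images' fingerprints. The `FingerprintExtractor` class, all tensor shapes, and the random placeholder images are hypothetical.

```python
# Illustrative sketch only: zero-shot model attribution via fingerprint
# embedding matching. The extractor here is an untrained stand-in; in the
# paper's setting it would be trained on images from synthetic models.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FingerprintExtractor(nn.Module):
    """Toy stand-in for a fingerprint extractor (hypothetical architecture)."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalized embedding so cosine similarity is a plain dot product.
        return F.normalize(self.backbone(x), dim=-1)


@torch.no_grad()
def identify(extractor, query_images, reference_sets):
    """Attribute each query image to the reference model whose mean
    fingerprint embedding is most similar (cosine similarity)."""
    prototypes = torch.stack(
        [extractor(imgs).mean(dim=0) for imgs in reference_sets.values()]
    )                                          # (num_models, embed_dim)
    prototypes = F.normalize(prototypes, dim=-1)
    queries = extractor(query_images)          # (num_queries, embed_dim)
    sims = queries @ prototypes.T              # cosine similarity matrix
    names = list(reference_sets.keys())
    return [names[i] for i in sims.argmax(dim=1).tolist()]


@torch.no_grad()
def verify(extractor, image_a, image_b, threshold: float = 0.8) -> bool:
    """Decide whether two images come from the same source model."""
    emb_a = extractor(image_a[None])
    emb_b = extractor(image_b[None])
    return (emb_a @ emb_b.T).item() >= threshold


if __name__ == "__main__":
    torch.manual_seed(0)
    extractor = FingerprintExtractor()  # would be trained on synthetic models
    # Placeholder random tensors stand in for generated images (N, 3, 64, 64).
    references = {
        "unseen_model_A": torch.rand(8, 3, 64, 64),
        "unseen_model_B": torch.rand(8, 3, 64, 64),
    }
    queries = torch.rand(4, 3, 64, 64)
    print(identify(extractor, queries, references))
    print(verify(extractor, queries[0], queries[1]))
```

Under these assumptions, attribution stays zero-shot because new models are enrolled with only a handful of reference images and a similarity comparison; the extractor itself is never retrained on the unseen models.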