Model Synthesis for Zero-Shot Model Attribution

Generative models are now shaping fields such as art, design, and human-computer interaction, yet their rise is accompanied by challenges related to copyright infringement and content management. In response, existing research seeks to identify the unique fingerprints left on the images these models generate, which can be leveraged to attribute generated images to their source models. Existing methods, however, are constrained to identifying models within a static set included in classifier training and fail to adapt dynamically to newly emerging, unseen models. To bridge this gap, we aim to develop a generalized model fingerprint extractor capable of zero-shot attribution, effectively attributing unseen models without exposure to them during training. Central to our method is a model synthesis technique that generates numerous synthetic models mimicking the fingerprint patterns of real-world generative models. The design of this technique is motivated by observations of how the basic building blocks and parameters of generative model architectures influence fingerprint patterns, and it is validated through two designed metrics that examine the fidelity and diversity of the synthetic models. Our experiments demonstrate that a fingerprint extractor trained solely on synthetic models achieves impressive zero-shot generalization to a wide range of real-world generative models, improving model identification and verification accuracy on unseen models by over 40% and 15%, respectively, compared to existing approaches.
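The abstract does not spell out the attribution protocol, but a common way to use such a fingerprint extractor at test time is to embed images into a fingerprint space and attribute by embedding similarity. Below is a minimal sketch of that idea; the function names (`model_fingerprint`, `identify`, `verify`), the cosine-similarity measure, and the threshold are illustrative assumptions, not the paper's actual API or released code.

```python
# Hypothetical zero-shot attribution on top of a trained fingerprint extractor.
# In a real pipeline, the embeddings below would come from the extractor that
# was trained solely on synthetic models; here we use random stand-ins.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprint embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def model_fingerprint(image_embeddings: np.ndarray) -> np.ndarray:
    """Average per-image fingerprint embeddings into one model-level fingerprint."""
    return image_embeddings.mean(axis=0)


def identify(query: np.ndarray, references: dict[str, np.ndarray]) -> str:
    """Model identification: attribute a query embedding to the closest reference model."""
    return max(references, key=lambda name: cosine_similarity(query, references[name]))


def verify(query: np.ndarray, claimed: np.ndarray, threshold: float = 0.8) -> bool:
    """Model verification: accept the claimed source model if similarity exceeds a threshold."""
    return cosine_similarity(query, claimed) >= threshold


# Usage with random stand-in embeddings (128-dim, 16 reference images per model).
rng = np.random.default_rng(0)
refs = {f"model_{i}": model_fingerprint(rng.normal(size=(16, 128))) for i in range(3)}
query = rng.normal(size=128)
print(identify(query, refs))
print(verify(query, refs["model_0"]))
```

Because neither the references nor the query model need to appear in the extractor's training set, this kind of embedding-and-compare protocol is what makes the attribution "zero-shot" in spirit, though the paper's exact scoring rule may differ.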
