Exploring Federated Learning Dynamics for Black-and-White-Box DNN Traitor Tracing


Bibliographic Details
Published in: arXiv.org
Main Authors: Rodriguez-Lois, Elena; Perez-Gonzalez, Fernando
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 02.07.2024

Summary: As deep learning applications become more prevalent, the need for extensive training examples raises concerns for sensitive, personal, or proprietary data. To overcome this, Federated Learning (FL) enables collaborative model training across distributed data-owners, but it introduces challenges in safeguarding model ownership and identifying the origin in case of a leak. Building upon prior work, this paper explores the adaptation of black-and-white-box traitor tracing watermarking to FL classifiers, addressing the threat of collusion attacks from different data-owners. This study reveals that leak-resistant white-box fingerprints can be directly implemented without a significant impact from FL dynamics, while the black-box fingerprints are drastically affected, losing their traitor tracing capabilities. To mitigate this effect, we propose increasing the number of black-box salient neurons through dropout regularization. Though there are still some open problems to be explored, such as analyzing non-i.i.d. datasets and over-parameterized models, results show that collusion-resistant traitor tracing, identifying all data-owners involved in a suspected leak, is feasible in an FL framework, even in early stages of training.
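Illustration: the mitigation mentioned in the summary relies on standard dropout regularization during each client's local training. The sketch below is not taken from the paper; the architecture, dropout rate, and helper names (ClientClassifier, local_update) are assumptions chosen only to show where dropout would sit in an FL client's classifier.

```python
# Minimal sketch (assumes PyTorch; layer sizes and dropout rate are illustrative,
# not the paper's configuration). Dropout during local training is the standard
# regularizer the summary refers to for increasing black-box salient neurons.
import torch
import torch.nn as nn

class ClientClassifier(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=p_drop),  # dropout spreads activation across more neurons
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Dropout(p=p_drop),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def local_update(model, loader, epochs=1, lr=0.01):
    """One FL client's local training pass; dropout is active in train() mode."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()  # dropout only applies in training mode
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model.state_dict()  # returned to the server for aggregation
```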
ISSN: 2331-8422