Iterative and mixed-spaces image gradient inversion attack in federated learning


Bibliographic Details
Published in: Cybersecurity (Singapore), Vol. 7, No. 1, pp. 35-14
Main Authors: Fang, Linwei; Wang, Liming; Li, Hongjia
Format: Journal Article
Language: English
Published: Singapore: Springer Nature Singapore, 05.04.2024 (Springer Nature B.V.; SpringerOpen)

Summary: As a distributed learning paradigm, federated learning is intended to protect data privacy without exchanging users' local data. Even so, the gradient inversion attack, in which an adversary reconstructs the original data from shared training gradients, is widely deemed a severe threat. Nevertheless, most existing research is confined to impractical assumptions and a narrow range of applications. To mitigate these shortcomings, we propose a comprehensive framework for the gradient inversion attack, with well-designed algorithms for image and label reconstruction. For image reconstruction, we fully exploit the generative image prior, derived from widely used generative models, to improve the reconstructed results, aided by iterative optimization over mixed spaces and a gradient-free optimizer. For label reconstruction, we design an adaptive recovery algorithm that accounts for the real data distribution and can adjust previous attacks to more complex scenarios. Moreover, we incorporate a gradient approximation method to efficiently adapt our attack to the FedAvg scenario. We empirically verify our attack framework on benchmark datasets and through ablation studies, under loose assumptions and complicated circumstances. We hope this work highlights the necessity of privacy protection in federated learning and urges more effective and robust defense mechanisms.
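The core mechanism behind a gradient inversion attack is gradient matching: the adversary searches for a dummy input whose gradient reproduces the gradient the victim shared. The sketch below is purely illustrative and is not the paper's algorithm; it uses a toy one-layer linear model with squared loss, assumes the label is known, and pairs gradient matching with a simple gradient-free (annealed random-search) optimizer, loosely echoing the abstract's mention of gradient-free optimization. All names (`model_grad`, `invert_gradient`) are hypothetical.

```python
import numpy as np

def model_grad(w, x, y):
    # Gradient of the loss 0.5 * (w @ x - y)^2 with respect to the
    # weights w, for a single training sample (x, y).
    return (w @ x - y) * x

def invert_gradient(w, y, g_true, dim, iters=20000, sigma=0.1, seed=0):
    # Gradient-free gradient matching: hill-climb over candidate
    # inputs x, accepting a perturbation whenever it brings the
    # candidate's gradient closer to the observed gradient g_true.
    rng = np.random.default_rng(seed)
    x = rng.normal(size=dim)
    best = np.linalg.norm(model_grad(w, x, y) - g_true)
    for _ in range(iters):
        cand = x + sigma * rng.normal(size=dim)
        d = np.linalg.norm(model_grad(w, cand, y) - g_true)
        if d < best:
            x, best = cand, d
        sigma *= 0.9995  # anneal the step size for fine convergence
    return x, best
```

Even in this toy setting the shared gradient pins the input down almost completely (here, up to a scalar ambiguity along the gradient direction), which is the intuition for why shared gradients leak training data; real attacks on deep networks add the priors and mixed-space optimization the abstract describes to resolve such ambiguities.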
ISSN: 2523-3246
DOI: 10.1186/s42400-024-00227-7