CGIR: Conditional Generative Instance Reconstruction Attacks against Federated Learning

Bibliographic Details
Published in: IEEE Transactions on Dependable and Secure Computing, Vol. 20, No. 6, pp. 1-13
Main Authors: Xu, Xiangrui; Liu, Pengrui; Wang, Wei; Ma, Hong-Liang; Wang, Bin; Han, Zhen; Han, Yufei
Format: Journal Article
Language: English
Published: Washington: IEEE / IEEE Computer Society, 01.11.2023

Summary: Data reconstruction attacks have become an emerging privacy threat to Federated Learning (FL), prompting a rethinking of FL's ability to protect privacy. While existing data reconstruction attacks have shown effective performance, prior work relies on different strong assumptions to guide the reconstruction process. In this work, we propose a novel Conditional Generative Instance Reconstruction attack (CGIR attack) that drops all these assumptions. Specifically, we propose a batch label inference attack for non-IID FL scenarios, where multiple images can share the same label. Based on the inferred labels, we conduct a "coarse-to-fine" image reconstruction process that provides stable and effective data reconstruction. In addition, we equip the generator with a label condition restriction so that the contents and the labels of the reconstructed images are consistent. Our extensive evaluation on two model architectures and five image datasets shows that, without the auxiliary assumptions, the CGIR attack outperforms prior art, even for complex datasets, deep models, and large batch sizes. Furthermore, we evaluate several existing defense methods. The experimental results suggest that gradient pruning can serve as a strategy to mitigate privacy risks in FL if a model can tolerate a slight accuracy loss.
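The paper itself is not reproduced in this record, so the two sketches below are background illustrations only, not the authors' CGIR method. The first shows the well-known single-sample label inference from shared gradients (the iDLG observation); CGIR's batch label inference for non-IID batches generalizes beyond this case. It assumes cross-entropy loss and non-negative (e.g., ReLU) features feeding the final fully connected layer, under which the gradient row of the true class is the only one with negative entries.

```python
# Background sketch only: iDLG-style label inference for a SINGLE sample.
# Not the paper's batch method. Assumes cross-entropy loss and non-negative
# (e.g., ReLU) penultimate features, so the true class's gradient row is
# the only one whose entries are negative.
import torch

def infer_label_single_sample(fc_weight_grad: torch.Tensor) -> int:
    # fc_weight_grad: gradient of the last FC layer's weights, shape
    # (num_classes, feature_dim). Under the assumptions above, the row
    # belonging to the ground-truth label sums to a negative value.
    row_sums = fc_weight_grad.sum(dim=1)
    return int(torch.argmin(row_sums).item())
```

Likewise, a minimal sketch of magnitude-based gradient pruning, the mitigation the summary highlights. The function name and the prune_ratio value are illustrative assumptions; the paper's exact pruning setup is not given in this record.

```python
# Illustrative magnitude-based gradient pruning: zero out the smallest
# fraction of gradient entries before sharing the update with the server.
# prune_ratio is an example value, not taken from the paper.
def prune_gradient(grad: torch.Tensor, prune_ratio: float = 0.7) -> torch.Tensor:
    k = int(grad.numel() * prune_ratio)  # number of entries to zero out
    if k == 0:
        return grad.clone()
    flat = grad.flatten().clone()
    threshold = flat.abs().kthvalue(k).values  # k-th smallest magnitude
    flat[flat.abs() <= threshold] = 0.0        # drop low-magnitude entries
    return flat.view_as(grad)
```

In an FL round, a client would apply such pruning to each parameter's gradient before upload; per the summary, this mitigates reconstruction risk at the cost of a slight accuracy loss.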
ISSN: 1545-5971 (print)
EISSN: 1941-0018
DOI: 10.1109/TDSC.2022.3228302