EGIA: An External Gradient Inversion Attack in Federated Learning

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, Vol. 18, p. 1
Main Authors: Liang, Haotian; Li, Youqi; Zhang, Chuan; Liu, Ximeng; Zhu, Liehuang
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2023

Summary: Federated learning (FL) has achieved state-of-the-art performance in distributed learning tasks with privacy requirements. However, it has been discovered that FL is vulnerable to adversarial attacks. Typical gradient inversion attacks focus on recovering a client's private input in a white-box manner, where the adversary is assumed to be either the client or the server. However, if both the clients and the server are honest and fully trusted, is FL secure? In this paper, we propose a novel method called the External Gradient Inversion Attack (EGIA) in the grey-box setting. Specifically, we concentrate on the fact that publicly shared gradients in FL are always transmitted through intermediary nodes, a point that has been widely ignored. On this basis, we demonstrate that an external adversary can reconstruct the private input from gradients even if both the clients and the server are honest and fully trusted. We also provide a comprehensive theoretical analysis of the black-box attack scenario in which the adversary has only the gradients. We perform extensive experiments on multiple real-world datasets to evaluate the effectiveness of EGIA. The experimental results validate that EGIA is highly effective.
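The record contains no implementation details beyond the abstract, but the attack family it describes, gradient inversion, is well documented. Below is a minimal sketch of DLG-style gradient matching (the mechanism an eavesdropper in a threat model like EGIA's would apply to gradients intercepted at an intermediary node), not the paper's exact EGIA procedure; the toy model, the L-BFGS optimizer, and all hyperparameters are illustrative assumptions.

```python
# Minimal gradient-inversion sketch (DLG-style gradient matching).
# Illustrative assumptions: toy linear model, L-BFGS, joint optimization of a
# dummy input and a soft label. This is NOT the authors' EGIA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for the FL client: a toy model and one private training example.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])

# Gradient the client would share; in EGIA's threat model an external adversary
# observes it while it transits an intermediary node between client and server.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Adversary: optimize a dummy input and label so that the gradients they induce
# match the observed gradients.
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_logits = torch.randn(1, 10, requires_grad=True)  # soft label, optimized jointly
opt = torch.optim.LBFGS([x_dummy, y_logits], lr=1.0)

for step in range(50):
    def closure():
        opt.zero_grad()
        # Cross-entropy against the (soft) dummy label.
        dummy_loss = torch.sum(
            F.softmax(y_logits, dim=-1) * (-F.log_softmax(model(x_dummy), dim=-1))
        )
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True
        )
        # Gradient-matching objective: squared distance to the observed gradients.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)

print("reconstruction MSE:", F.mse_loss(x_dummy.detach(), x_true).item())
```

Note that this sketch assumes the adversary also knows the model architecture and parameters, corresponding to the grey-box setting named in the abstract; the paper's black-box analysis, in which the adversary holds only the gradients, is stronger than what this illustration covers.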
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2023.3302161