GAN-Based Privacy Abuse Attack on Federated Learning in IoT Networks
Published in: IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 1 - 2
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 20.05.2024
Summary: Federated Learning (FL) is vulnerable to a range of attacks, including poisoning and inference. However, existing offensive security evaluations of FL assume that attackers know the data distribution. In this paper, we present a novel attack in which FL participants carry out inference and privacy abuse attacks against FL by leveraging Generative Adversarial Networks (GANs). The attacker, impersonating a benign participant, uses a GAN to generate a dataset similar to that of the other participants and then covertly poisons the data. We demonstrated the attack and tested it on two datasets, an IoT network traffic dataset and MNIST. The results show that protection against such attacks is critically essential if FL is to be used successfully in IoT applications.
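The two-stage attack the abstract describes (reconstruct data resembling other participants' local data from the shared global model, then inject a covertly poisoned update through ordinary aggregation) can be sketched in miniature. The sketch below is an illustration, not the paper's implementation: it uses a logistic-regression global model under FedAvg, and it replaces the GAN with direct gradient ascent on the global model's confidence, with the global model playing the discriminator role and the ascent loop playing the generator. All names, dimensions, and hyperparameters here are hypothetical choices for the toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Clipped for numerical stability at large |logits|.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def local_update(w, X, y, lr=0.5, epochs=20):
    """One client's local training: full-batch gradient descent
    for logistic regression, starting from the global weights."""
    w = w.copy()
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

def make_client(n, center, label):
    # 2-D Gaussian blob per client; bias folded in as a constant feature.
    X = rng.normal(center, 0.5, size=(n, 2))
    X = np.hstack([X, np.ones((n, 1))])
    return X, np.full(n, float(label))

clients = [make_client(100, (+2.0, +2.0), 1),   # benign client, class 1
           make_client(100, (-2.0, -2.0), 0)]   # benign client, class 0

# Benign FedAvg rounds; the attacker only observes the global model so far.
w_global = np.zeros(3)
for _ in range(10):
    w_global = np.mean([local_update(w_global, X, y) for X, y in clients], axis=0)

# Stage 1 (inference): reconstruct class-1-like inputs by gradient ascent on
# the global model's class-1 confidence (GAN-generator stand-in).
Z = rng.normal(0.0, 1.0, size=(100, 2))
p_init = sigmoid(np.hstack([Z, np.ones((100, 1))]) @ w_global)
for _ in range(200):
    p = sigmoid(np.hstack([Z, np.ones((100, 1))]) @ w_global)
    Z += 0.1 * (p * (1.0 - p))[:, None] * w_global[:2]   # dp/dZ = p(1-p) * w
fake_X = np.hstack([Z, np.ones((100, 1))])
p_final = sigmoid(fake_X @ w_global)

# Stage 2 (covert poisoning): flip the labels on the reconstructed samples
# and submit the resulting local update through ordinary FedAvg.
poison = local_update(w_global, fake_X, np.zeros(100))
benign = [local_update(w_global, X, y) for X, y in clients]
clean_next = np.mean(benign, axis=0)           # next round without the attacker
w_poisoned = np.mean(benign + [poison], axis=0)  # next round with the attacker
```

In this toy setting the reconstructed samples drift toward the class-1 client's data region, and averaging in the single poisoned update measurably lowers the aggregated model's confidence on that client's data, which mirrors the covert (single-participant, aggregation-only) nature of the attack.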
ISSN: 2833-0587
DOI: 10.1109/INFOCOMWKSHPS61880.2024.10620772