GAN-Based Privacy Abuse Attack on Federated Learning in IoT Networks

Bibliographic Details
Published in: IEEE INFOCOM 2024 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 1 - 2
Main Authors: Hao, Runzhe; Hussain, Rasheed; Parra-Ullauri, Juan Marcelo; Vasilakos, Xenofon; Nejabati, Reza; Simeonidou, Dimitra
Format: Conference Proceeding
Language: English
Published: IEEE, 20.05.2024

Summary: Federated Learning (FL) is vulnerable to various attacks, including poisoning and inference. However, existing offensive security evaluations of FL assume that attackers know the data distribution. In this paper, we present a novel attack in which FL participants carry out inference and privacy abuse attacks against FL by leveraging Generative Adversarial Networks (GANs). The attacker, impersonating a benign participant, uses a GAN to generate a dataset similar to those of the other participants and then covertly poisons the data. We successfully demonstrated the attack on two datasets: an IoT network traffic dataset and MNIST. The results reveal that, for FL to be successfully used in IoT applications, protection against such attacks is critically essential.
ISSN: 2833-0587
DOI: 10.1109/INFOCOMWKSHPS61880.2024.10620772
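The summary above describes the attack only at a high level, so the following is a minimal illustrative sketch rather than the authors' implementation: it assumes PyTorch, MNIST-shaped 1x28x28 inputs, a global model that returns class logits, and hypothetical names (Generator, gan_privacy_poisoning_round, target_class, poison_label). The idea sketched is that a malicious participant treats the received global model as a fixed discriminator to train a generator toward samples resembling other clients' data, then mislabels those synthetic samples to craft a covertly poisoned local update.

# Minimal, illustrative sketch only; all names and hyperparameters here are
# assumptions, not details from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps latent noise to synthetic 28x28 samples (MNIST-like shape assumed)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def gan_privacy_poisoning_round(global_model, generator, target_class, poison_label,
                                steps=200, batch=64, device="cpu"):
    """One round for the malicious participant:
    1) Inference / privacy abuse: use the received global model as a fixed
       discriminator and train the generator until its samples are classified
       as `target_class`, i.e. they mimic data held by other participants.
    2) Covert poisoning: relabel those synthetic samples as `poison_label` and
       fine-tune a local copy of the global model on them; the resulting
       weights are what the attacker reports back to the FL server."""
    generator.to(device)
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    global_model.to(device).eval()

    # Step 1: reconstruct data similar to the other clients' private samples.
    for _ in range(steps):
        z = torch.randn(batch, generator.latent_dim, device=device)
        fake = generator(z)
        target = torch.full((batch,), target_class, device=device)
        loss = F.cross_entropy(global_model(fake), target)
        g_opt.zero_grad()
        loss.backward()
        g_opt.step()

    # Step 2: build a poisoned local update from the mislabeled synthetic data.
    local_model = copy.deepcopy(global_model).train()
    l_opt = torch.optim.SGD(local_model.parameters(), lr=0.01)
    for _ in range(steps // 4):
        z = torch.randn(batch, generator.latent_dim, device=device)
        fake = generator(z).detach()
        wrong = torch.full((batch,), poison_label, device=device)
        loss = F.cross_entropy(local_model(fake), wrong)
        l_opt.zero_grad()
        loss.backward()
        l_opt.step()
    return local_model.state_dict()  # submitted to the server as a "benign" update

In a real FL loop, the attacker would run such a routine in place of honest local training each round; the need to detect or filter updates of this kind is the protection gap the paper's results point to.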