Privacy-enhanced federated learning scheme based on generative adversarial networks

Bibliographic Details
Published in: 网络与信息安全学报 (Chinese Journal of Network and Information Security), Vol. 9, No. 3, pp. 113-122
Main Authors: Feng YU, Qingxin LIN, Hui LIN, Xiaoding WANG
Format: Journal Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD, 01.06.2023

Summary: Federated learning, a distributed machine learning paradigm, has attracted considerable attention for its inherent privacy protection and support for heterogeneous collaboration. However, recent studies have revealed a privacy risk known as “gradient leakage”, where shared gradients can be used to determine whether a data record with a specific property is included in another participant’s batch, thereby exposing that participant’s training data. Existing privacy-enhanced federated learning methods may suffer from reduced accuracy, extra computational overhead, or new security risks. To address this issue, a differential-privacy-enhanced generative adversarial network model was proposed, which introduces an identifier into the vanilla GAN so that the generated data can approximate the input data while satisfying differential privacy constraints. This model was then applied to the federated learning framework to improve its privacy protection capability without compromising model accuracy. The proposed method was verified through simulations under the client/server (C/S) federated learning architecture and was found to balance data privacy and utility more effectively than the DP-SGD method. In addition, the usability of the proposed model under a peer-to-peer (P2P) architecture was analyzed theoretically, and future research directions were discussed.
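The abstract gives no implementation details, but the mechanism it describes, an auxiliary “identifier” network added to a vanilla GAN and trained under differential-privacy constraints on the client side, can be illustrated with a short sketch. The code below is a hypothetical illustration, not the authors' implementation: the class names (Generator, Critic), layer sizes, losses, and the dp_noisy_step helper are assumptions, and for brevity the DP step clips the whole batch gradient rather than per-example gradients as full DP-SGD (e.g. via Opacus) would.

```python
# Hypothetical sketch only: names, layer sizes, losses, and the DP step are
# assumptions made for illustration; they are not taken from the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())
    def forward(self, z):
        return self.net(z)

class Critic(nn.Module):
    """Shared body used for both the discriminator (real vs. fake) and the assumed
    'identifier' (does a sample match this client's own data?)."""
    def __init__(self, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

def dp_noisy_step(model, optimizer, clip_norm=1.0, noise_multiplier=1.1):
    """Clip the gradient norm and add Gaussian noise before the optimizer step.
    Simplification: proper DP-SGD clips *per-example* gradients; here the batch
    gradient is clipped to keep the sketch short."""
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p.grad.add_(torch.randn_like(p.grad) * noise_multiplier * clip_norm)
    optimizer.step()

def local_gan_update(real, G, D, I, opt_G, opt_D, opt_I, z_dim=64):
    """One noisy local update: D separates real from fake, the identifier I pulls
    generated samples toward the client's own data, and G tries to fool both."""
    bce = nn.BCEWithLogitsLoss()
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)
    fake = G(torch.randn(real.size(0), z_dim))

    opt_D.zero_grad()
    (bce(D(real), ones) + bce(D(fake.detach()), zeros)).backward()
    dp_noisy_step(D, opt_D)

    opt_I.zero_grad()
    (bce(I(real), ones) + bce(I(fake.detach()), zeros)).backward()
    dp_noisy_step(I, opt_I)

    opt_G.zero_grad()
    (bce(D(fake), ones) + bce(I(fake), ones)).backward()
    dp_noisy_step(G, opt_G)
```

In the federated setting the abstract describes, only model parameters produced by noise-perturbed steps like dp_noisy_step would be shared with the server (C/S) or with peers (P2P), which is what is intended to limit gradient leakage while the identifier keeps the generated data close to the local distribution.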
ISSN:2096-109X
DOI:10.11959/j.issn.2096-109x.2023043