Improve individual fairness in federated learning via adversarial training


Bibliographic Details
Published in: Computers & Security, Vol. 132, p. 103336
Main Authors: Li, Jie; Zhu, Tianqing; Ren, Wei; Choo, Kim-Kwang Raymond
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.09.2023

Summary:

• We present an adversarial training approach for training FL models that satisfy individual fairness without requiring direct access to the training data or the underlying data distribution, thereby protecting user privacy at the same time.
• We show that our approach can guarantee fairness on the global model as well as on each client's local data distribution, because the adversarial training is conducted on the client side in a distributed manner.
• We implement and evaluate the proposed method on two datasets under various hyper-parameter settings, and the experimental results demonstrate its effectiveness.

Federated learning (FL) has been widely investigated in recent years. As FL is deployed in real-world applications, the fairness issues it raises deserve attention, yet few studies address them. Unlike previous work on group fairness in FL or on fairness in centralized machine learning, this paper is the first to consider both privacy and individual fairness, and proposes promoting individual fairness in FL through distributionally adversarial training without violating data privacy. Specifically, we view a model that satisfies individual fairness as one that is robust to certain sensitive perturbations, which aligns with the goal of adversarial training. We therefore transform the task of training an individually fair FL model into an adversarial training task. To meet the FL requirement that data remain private on the clients, the adversarial training is executed on the client side in a distributed manner. Extensive experiments on two real datasets demonstrate the effectiveness of the proposed method, which not only improves individual fairness significantly but also improves group fairness at the same time.
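The abstract describes the mechanism only at a high level, so the following PyTorch-style sketch illustrates one plausible reading of it: each client perturbs its inputs along "sensitive" directions, trains on the worst-case perturbed inputs, and the server averages the client models as in FedAvg. This is a minimal sketch under stated assumptions, not the authors' implementation; the function names (sensitive_perturbation, client_update, fedavg), the sensitive-subspace basis sensitive_dirs, and the PGD-style hyper-parameters (eps, steps, lr) are all illustrative.

```python
# Minimal sketch (not the authors' code): client-side adversarial training
# restricted to a sensitive subspace, followed by plain FedAvg aggregation.
import torch
import torch.nn.functional as F


def sensitive_perturbation(model, x, y, sensitive_dirs, eps=0.5, steps=5, lr=0.1):
    """Find a worst-case perturbation of x restricted to the sensitive subspace.

    sensitive_dirs: (k, d) basis spanning the assumed sensitive directions.
    The perturbation delta = coeffs @ sensitive_dirs is found by projected
    gradient ascent on the client's loss.
    """
    coeffs = torch.zeros(x.size(0), sensitive_dirs.size(0), requires_grad=True)
    for _ in range(steps):
        delta = coeffs @ sensitive_dirs                  # (batch, d)
        loss = F.cross_entropy(model(x + delta), y)      # loss to maximize
        grad, = torch.autograd.grad(loss, coeffs)
        with torch.no_grad():
            coeffs += lr * grad.sign()
            coeffs.clamp_(-eps, eps)                     # keep perturbation bounded
    return (coeffs @ sensitive_dirs).detach()


def client_update(model, loader, sensitive_dirs, epochs=1, lr=0.01):
    """One FL round of adversarial training on a client's private data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            delta = sensitive_perturbation(model, x, y, sensitive_dirs)
            opt.zero_grad()
            loss = F.cross_entropy(model(x + delta), y)  # train on perturbed inputs
            loss.backward()
            opt.step()
    return model.state_dict()


def fedavg(states):
    """Plain FedAvg: average the clients' model parameters on the server."""
    return {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
```

In a full pipeline the server would broadcast the averaged parameters back to the clients each round; how sensitive_dirs is obtained (e.g., directions correlated with a protected attribute) is an assumption here, since raw data never leaves the clients.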
ISSN: 0167-4048
EISSN: 1872-6208
DOI: 10.1016/j.cose.2023.103336