Distribution-Free Fair Federated Learning with Small Samples

Bibliographic Details
Main Authors: Yin, Qichuan; Wang, Zexian; Huang, Junzhou; Yao, Huaxiu; Zhang, Linjun
Format: Journal Article
Language: English
Published: 25.02.2024

Summary: As federated learning gains importance in real-world applications due to its capacity for decentralized data training, addressing fairness concerns across demographic groups becomes critically important. However, most existing machine learning algorithms for ensuring fairness are designed for centralized data environments and generally require large-sample and distributional assumptions, underscoring the urgent need for fairness techniques adapted to decentralized, heterogeneous systems with finite-sample, distribution-free guarantees. To address this issue, this paper introduces FedFaiREE, a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples. Our approach accounts for challenges unique to decentralized environments, such as client heterogeneity, communication costs, and small sample sizes. We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results provide robust empirical validation of the proposed method.
DOI: 10.48550/arxiv.2402.16158
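
The abstract describes FedFaiREE only at a high level. Purely as an illustration of what "distribution-free post-processing" can mean in this setting, the sketch below picks a group-specific decision threshold from held-out scores using order statistics with a finite-sample (DKW-type) correction, after pooling calibration scores contributed by clients. All names (dkw_corrected_threshold, federated_group_thresholds) and parameters (target_tpr, delta) are assumptions made for this sketch; it is not the paper's FedFaiREE procedure and does not reproduce its guarantees.

```python
import numpy as np

# Illustrative sketch only -- not the FedFaiREE algorithm from the paper.
# Idea: distribution-free post-processing via order statistics. Predict the
# positive class when score >= threshold, with a per-group threshold chosen
# from held-out (calibration) scores of truly positive examples.

def dkw_corrected_threshold(pos_scores, target_tpr, delta):
    """Threshold such that, with probability >= 1 - delta over the calibration
    sample, the group's true positive rate is at least target_tpr.
    Uses only sorted scores plus a one-sided DKW correction, so no
    distributional assumptions are needed."""
    s = np.sort(np.asarray(pos_scores, dtype=float))
    n = len(s)
    eps = np.sqrt(np.log(1.0 / delta) / (2.0 * n))   # finite-sample slack
    k = int(np.floor(n * (1.0 - target_tpr - eps)))  # order-statistic index
    # k < 1 means the sample is too small to certify target_tpr at level delta;
    # fall back to accepting every example in this group.
    return s[k - 1] if k >= 1 else -np.inf

def federated_group_thresholds(client_scores, target_tpr=0.8, delta=0.05):
    """client_scores: dict mapping group -> list of per-client score arrays
    (held-out scores of positive examples). Pools calibration scores across
    clients (the simplest possible communication scheme) and returns one
    threshold per group, so every group's TPR is certified at roughly the
    same target level."""
    return {
        group: dkw_corrected_threshold(np.concatenate(per_client),
                                       target_tpr, delta)
        for group, per_client in client_scores.items()
    }

# Toy usage: two demographic groups, three clients each, small samples.
rng = np.random.default_rng(0)
scores = {
    "A": [rng.uniform(0.2, 1.0, size=40) for _ in range(3)],
    "B": [rng.uniform(0.0, 0.9, size=30) for _ in range(3)],
}
print(federated_group_thresholds(scores))
```

Pooling raw scores is only the simplest conceivable communication scheme for this sketch; a genuinely federated implementation would more plausibly exchange quantile sketches or counts to limit communication cost, one of the challenges the abstract highlights.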