FedADMM: A Robust Federated Deep Learning Framework with Adaptivity to System Heterogeneity
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 07.04.2022 |
DOI | 10.48550/arxiv.2204.03529 |
Summary: | Federated Learning (FL) is an emerging framework for distributed processing
of large data volumes by edge devices subject to limited communication bandwidths,
heterogeneity in data distributions and computational resources, as well as privacy
considerations. In this paper, we introduce a new FL protocol termed FedADMM based on
primal-dual optimization. The proposed method leverages dual variables to tackle
statistical heterogeneity, and accommodates system heterogeneity by tolerating a
variable amount of work performed by clients. FedADMM maintains the same communication
cost per round as FedAvg/FedProx, and generalizes them via the augmented Lagrangian. A
convergence proof is established for nonconvex objectives, with no restrictions on data
dissimilarity or on the number of participants per round of the algorithm. We
demonstrate the merits through extensive experiments on real datasets, under both IID
and non-IID data distributions across clients. FedADMM consistently outperforms all
baseline methods in terms of communication efficiency, with the number of rounds needed
to reach a prescribed accuracy reduced by up to 87%. The algorithm effectively adapts to
heterogeneous data distributions through the use of dual variables, without the need
for hyperparameter tuning, and its advantages are more pronounced in large-scale
systems. |
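For orientation, the augmented Lagrangian referred to in the summary is, in ADMM-based federated learning, typically built on the standard global-consensus formulation. The sketch below uses generic notation (local losses f_k, local models w_k, server model z, dual variables lambda_k, penalty rho) as an illustration of that formulation; it is not taken from the paper and the paper's exact notation and update rules may differ.

\[
\min_{\{w_k\},\, z} \ \sum_{k=1}^{N} f_k(w_k)
\quad \text{s.t.} \quad w_k = z, \quad k = 1, \dots, N,
\]
\[
\mathcal{L}_\rho\big(\{w_k\}, z, \{\lambda_k\}\big)
= \sum_{k=1}^{N} \Big( f_k(w_k) + \langle \lambda_k,\, w_k - z \rangle + \tfrac{\rho}{2}\, \lVert w_k - z \rVert^2 \Big).
\]

In this generic scheme, each round alternates an approximate local minimization of the augmented Lagrangian over w_k on participating clients (which is what allows a variable amount of local work), a dual ascent step \(\lambda_k \leftarrow \lambda_k + \rho\,(w_k - z)\), and a server-side aggregation that updates z, so that only model-sized quantities are exchanged, matching the per-round communication pattern of FedAvg/FedProx.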