Federated Deep Equilibrium Learning: Harnessing Compact Global Representations to Enhance Personalization
Published in: arXiv.org
Main Authors:
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 29.10.2024
Subjects:
Summary: Federated Learning (FL) has emerged as a groundbreaking distributed learning paradigm that enables clients to collaboratively train a global model without exchanging data. Although FL enhances privacy and efficiency in information retrieval and knowledge management contexts, training and deploying FL models face significant challenges such as communication bottlenecks, data heterogeneity, and memory limitations. To comprehensively address these challenges, we introduce FeDEQ, a novel FL framework that incorporates deep equilibrium learning and consensus optimization to harness compact global data representations for efficient personalization. Specifically, we design a unique model structure featuring an equilibrium layer for global representation extraction, followed by explicit layers tailored for local personalization. We then propose a novel FL algorithm rooted in the alternating direction method of multipliers (ADMM), which enables the joint optimization of a shared equilibrium layer and individual personalized layers across distributed datasets. Our theoretical analysis confirms that FeDEQ converges to a stationary point, achieving both compact global representations and optimal personalized parameters for each client. Extensive experiments on various benchmarks demonstrate that FeDEQ matches the performance of state-of-the-art personalized FL methods while significantly reducing communication size by up to 4 times and memory footprint by 1.5 times during training.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2309.15659
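
To make the described model structure concrete, below is a minimal PyTorch sketch of the idea outlined in the summary: a shared implicit (equilibrium) layer that extracts a global representation via a fixed point, followed by explicit layers personalized per client, plus a simplified ADMM-style consensus update for the shared parameters. All class, function, and parameter names (EquilibriumLayer, FeDEQClientModel, admm_consensus_step, max_iter, tol) are illustrative assumptions, not the paper's actual code or exact update rules.

```python
# Hedged sketch of a FeDEQ-style model: shared equilibrium layer + personal head.
import torch
import torch.nn as nn


class EquilibriumLayer(nn.Module):
    """Implicit layer: returns z* satisfying z* = f(z*, x) approximately.
    A real DEQ would typically use a root solver and implicit differentiation;
    plain fixed-point iteration keeps this sketch short."""

    def __init__(self, dim, max_iter=30, tol=1e-4):
        super().__init__()
        self.linear_z = nn.Linear(dim, dim, bias=False)
        self.linear_x = nn.Linear(dim, dim)
        self.max_iter = max_iter
        self.tol = tol

    def f(self, z, x):
        return torch.tanh(self.linear_z(z) + self.linear_x(x))

    def forward(self, x):
        z = torch.zeros_like(x)
        for _ in range(self.max_iter):
            z_next = self.f(z, x)
            if (z_next - z).norm() < self.tol * (z.norm() + 1e-8):
                return z_next
            z = z_next
        return z


class FeDEQClientModel(nn.Module):
    """Shared equilibrium layer for the global representation, followed by
    explicit layers kept locally on each client for personalization."""

    def __init__(self, in_dim, rep_dim, num_classes):
        super().__init__()
        self.encoder = nn.Linear(in_dim, rep_dim)     # input projection
        self.equilibrium = EquilibriumLayer(rep_dim)  # shared across clients
        self.personal_head = nn.Sequential(           # personalized per client
            nn.Linear(rep_dim, rep_dim), nn.ReLU(),
            nn.Linear(rep_dim, num_classes),
        )

    def forward(self, x):
        z_star = self.equilibrium(self.encoder(x))
        return self.personal_head(z_star)


def admm_consensus_step(local_thetas, duals):
    """Server-side consensus update (scaled-dual ADMM form) for the shared
    equilibrium-layer parameters, flattened to tensors: average the local
    copies plus duals, then update the duals. A simplified illustration,
    not the paper's exact algorithm."""
    theta_global = torch.stack(
        [th + u for th, u in zip(local_thetas, duals)]
    ).mean(dim=0)
    new_duals = [u + th - theta_global for th, u in zip(local_thetas, duals)]
    return theta_global, new_duals
```

In this sketch, only the equilibrium layer's parameters would be exchanged with the server (hence the compact communication), while each client trains and keeps its own personal_head locally; clients would solve their local ADMM subproblems between calls to admm_consensus_step.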