Federated Deep Equilibrium Learning: Harnessing Compact Global Representations to Enhance Personalization


Bibliographic Details
Published in: arXiv.org
Main Authors: Long Tan Le, Tuan Dung Nguyen, Tung-Anh Nguyen, Choong Seon Hong, Suranga Seneviratne, Wei Bao, Nguyen H. Tran
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 29.10.2024

Summary: Federated Learning (FL) has emerged as a groundbreaking distributed learning paradigm enabling clients to train a global model collaboratively without exchanging data. Despite enhancing privacy and efficiency in information retrieval and knowledge management contexts, training and deploying FL models confronts significant challenges such as communication bottlenecks, data heterogeneity, and memory limitations. To comprehensively address these challenges, we introduce FeDEQ, a novel FL framework that incorporates deep equilibrium learning and consensus optimization to harness compact global data representations for efficient personalization. Specifically, we design a unique model structure featuring an equilibrium layer for global representation extraction, followed by explicit layers tailored for local personalization. We then propose a novel FL algorithm rooted in the alternating direction method of multipliers (ADMM), which enables the joint optimization of a shared equilibrium layer and individual personalized layers across distributed datasets. Our theoretical analysis confirms that FeDEQ converges to a stationary point, achieving both compact global representations and optimal personalized parameters for each client. Extensive experiments on various benchmarks demonstrate that FeDEQ matches the performance of state-of-the-art personalized FL methods, while significantly reducing communication size by up to 4 times and memory footprint by 1.5 times during training.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2309.15659
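
The summary describes a model split into a shared equilibrium layer (for compact global representations) and explicit layers kept local for personalization. The sketch below illustrates that structure in PyTorch under stated assumptions: the class names, dimensions, and the plain fixed-point loop are illustrative choices, not the authors' implementation, and practical DEQ layers typically use a root-finding solver with implicit differentiation rather than naive iteration.

```python
# Illustrative sketch (assumptions, not the FeDEQ reference code): a shared
# deep-equilibrium layer paired with a client-local personalized head.
import torch
import torch.nn as nn


class EquilibriumLayer(nn.Module):
    """Shared representation layer: iterate toward z* = tanh(W z* + U x + b)."""

    def __init__(self, in_dim, hidden_dim, max_iter=30, tol=1e-4):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.U = nn.Linear(in_dim, hidden_dim)
        self.max_iter = max_iter
        self.tol = tol

    def forward(self, x):
        z = torch.zeros(x.shape[0], self.hidden_dim, device=x.device)
        injected = self.U(x)
        # Plain fixed-point iteration for clarity; real DEQs solve the fixed
        # point with a root finder and backpropagate via the implicit function theorem.
        for _ in range(self.max_iter):
            z_next = torch.tanh(self.W(z) + injected)
            if (z_next - z).norm() < self.tol:
                return z_next
            z = z_next
        return z


class PersonalizedHead(nn.Module):
    """Explicit layers kept on each client for local personalization."""

    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, z):
        return self.fc(z)


# Usage: every client composes the shared equilibrium layer with its own head.
# In an FL round, only the shared layer's parameters would be communicated and
# reconciled across clients (FeDEQ does this via ADMM consensus); heads stay local.
shared = EquilibriumLayer(in_dim=32, hidden_dim=64)
client_head = PersonalizedHead(hidden_dim=64, num_classes=10)
x = torch.randn(8, 32)
logits = client_head(shared(x))
print(logits.shape)  # torch.Size([8, 10])
```

Because only the compact equilibrium layer is exchanged while the explicit personalized layers never leave the client, the communicated payload per round is a fraction of the full model, which is consistent with the communication and memory savings reported in the summary.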