Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning

Bibliographic Details
Published in: arXiv.org
Main Authors: Ohib, Riyasat; Thapaliya, Bishal; Dziugaite, Gintare Karolina; Liu, Jingyu; Calhoun, Vince; Plis, Sergey
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 15.05.2024

Summary: In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training by leveraging parameter saliency scores computed separately on local client data in non-IID scenarios and then aggregated to determine a global mask. Only the sparse model weights are communicated each round between the clients and the server. We validate SSFL's effectiveness on standard non-IID benchmarks, noting marked improvements in the sparsity–accuracy trade-off. Finally, we deploy our method in a real-world federated learning framework and report an improvement in communication time.
ISSN: 2331-8422
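The summary above outlines the method at a high level: each client scores parameter saliency on its own local data, the per-client scores are aggregated into a single global mask before training, and only the masked weights are exchanged thereafter. A minimal sketch of that mask-construction step, assuming a SNIP-style |w · g| saliency and plain averaging across clients (both assumptions for illustration, not details taken from this record):

```python
# Illustrative sketch only (not the authors' implementation): per-client
# saliency scores are averaged, and the top fraction of parameters is kept.
import numpy as np

def client_saliency(weights, grads):
    """Per-client parameter saliency; here a SNIP-style |w * dL/dw| (assumed)."""
    return np.abs(weights * grads)

def global_mask(saliencies, sparsity):
    """Average client saliencies and keep the (1 - sparsity) most salient weights."""
    avg = np.mean(saliencies, axis=0)
    k = int(round((1.0 - sparsity) * avg.size))
    keep = np.argsort(avg)[-k:]            # indices of the k most salient weights
    mask = np.zeros(avg.size, dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
w = rng.normal(size=10)
# Each client evaluates gradients on its own (non-IID) local data.
sal = np.stack([client_saliency(w, rng.normal(size=10)) for _ in range(3)])
mask = global_mask(sal, sparsity=0.8)      # only w[mask] would be communicated
```

With 80% sparsity, only 2 of the 10 weights survive the mask, so each round's client-server exchange carries a fifth... in fact just the masked entries plus their fixed index set, which is the communication saving the summary refers to.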