Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning

Bibliographic Details
Main Authors: Ohib, Riyasat; Thapaliya, Bishal; Dziugaite, Gintare Karolina; Liu, Jingyu; Calhoun, Vince; Plis, Sergey
Format: Journal Article
Language: English
Published: 14.05.2024

Summary: In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training, leveraging parameter saliency scores computed separately on local client data in non-IID scenarios, and then aggregated, to determine a global mask. Only the sparse model weights are communicated each round between the clients and the server. We validate SSFL's effectiveness using standard non-IID benchmarks, noting marked improvements in the sparsity–accuracy trade-offs. Finally, we deploy our method in a real-world federated learning framework and report improvement in communication time.
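The mask-selection step described in the summary — per-client saliency scores, aggregated into one global mask before training — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a SNIP-style saliency |w · ∇L/∂w| and simple averaging across clients, and all function names (`client_saliency`, `global_mask`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_saliency(weights, grads):
    # Assumed SNIP-style saliency: |w * dL/dw|, normalized per client
    # so that clients with larger gradients do not dominate the average.
    s = np.abs(weights * grads)
    return s / s.sum()

def global_mask(saliencies, sparsity):
    # Aggregate client scores (here: mean) and keep the top
    # (1 - sparsity) fraction of parameters as the global mask.
    avg = np.mean(saliencies, axis=0)
    k = int(round((1.0 - sparsity) * avg.size))
    idx = np.argsort(avg)[-k:]
    mask = np.zeros(avg.size, dtype=bool)
    mask[idx] = True
    return mask

# Toy run: 3 non-IID clients, 10 shared parameters, 80% sparsity.
weights = rng.normal(size=10)
saliencies = [client_saliency(weights, rng.normal(size=10)) for _ in range(3)]
mask = global_mask(saliencies, sparsity=0.8)

# Each round, only the masked (sparse) weights are communicated,
# which is the source of the bandwidth savings the abstract reports.
payload = weights[mask]
print(mask.sum(), payload.size)
```

With 10 parameters and 80% sparsity, the mask keeps k = 2 weights, so each round's payload shrinks from 10 values to 2. The mask is computed once before training and shared, so clients and server agree on which coordinates the sparse payload refers to.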
DOI: 10.48550/arxiv.2405.09037