Unmasking Efficiency: Learning Salient Sparse Models in Non-IID Federated Learning
In this work, we propose Salient Sparse Federated Learning (SSFL), a streamlined approach for sparse federated learning with efficient communication. SSFL identifies a sparse subnetwork prior to training, leveraging parameter saliency scores computed separately on local client data in non-IID scenarios, and then aggregated, to determine a global mask. Only the sparse model weights are communicated each round between the clients and the server. We validate SSFL's effectiveness using standard non-IID benchmarks, noting marked improvements in the sparsity-accuracy trade-offs. Finally, we deploy our method in a real-world federated learning framework and report an improvement in communication time.
| Main Authors | |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 14.05.2024 |
| DOI | 10.48550/arxiv.2405.09037 |
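
The abstract describes the core mechanism: each client scores parameter saliency on its own non-IID data, the scores are aggregated into a single global mask before training, and only the masked (sparse) weights are exchanged each round. Below is a minimal, hypothetical sketch of that idea, assuming SNIP-style saliency scores (e.g. |w · ∂L/∂w|) computed per client and a simple mean aggregation with top-k selection; the function names, the aggregation rule, and the selection scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def global_mask_from_saliency(client_saliencies, sparsity):
    """Aggregate per-client saliency scores and keep the top-(1 - sparsity) fraction.

    client_saliencies: one 1-D score array per client (assumed SNIP-style |w * grad|).
    sparsity: fraction of parameters to prune, in [0, 1).
    """
    agg = np.mean(np.stack(client_saliencies), axis=0)   # aggregate local scores (mean is an assumption)
    k = int(round((1.0 - sparsity) * agg.size))           # number of weights to keep globally
    keep = np.argsort(agg)[-k:]                           # indices of the most salient weights
    mask = np.zeros(agg.size, dtype=bool)
    mask[keep] = True
    return mask

def sparse_payload(weights, mask):
    """Only the masked weights travel between client and server each round."""
    return weights[mask]

# Toy usage: 3 clients, a 10-parameter "model", 70% sparsity.
rng = np.random.default_rng(0)
saliencies = [np.abs(rng.normal(size=10)) for _ in range(3)]
mask = global_mask_from_saliency(saliencies, sparsity=0.7)
weights = rng.normal(size=10)
print(mask.sum(), sparse_payload(weights, mask).shape)    # 3 kept weights -> payload of length 3
```

Because the mask is fixed before training and shared by all clients, each round's upload and download reduce to the kept entries only, which is the source of the communication-time improvement reported in the abstract.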