FedFQ: Federated Learning with Fine-Grained Quantization
Main Authors:
Format: Journal Article
Language: English
Published: 16.08.2024
Summary: Federated learning (FL) is a decentralized approach that enables multiple participants to collaboratively train a model while protecting data privacy. The transmission of updates from numerous edge clusters to the server creates a significant communication bottleneck in FL. Quantization is an effective compression technique with strong potential to alleviate this bottleneck. However, the non-IID nature of FL makes it sensitive to quantization. Existing quantized FL frameworks fail to balance high compression ratios with strong convergence performance because they coarsely apply a uniform quantization bit-width on the client side. In this work, we propose FedFQ, a communication-efficient FL algorithm with a fine-grained adaptive quantization strategy. FedFQ addresses the trade-off between achieving high communication compression ratios and maintaining superior convergence performance by introducing parameter-level quantization. Specifically, we design a Constraint-Guided Simulated Annealing algorithm to determine specific quantization schemes. We derive the convergence of FedFQ, demonstrating its superior convergence performance compared to existing quantized FL algorithms. Extensive experiments on multiple benchmarks show that, while maintaining lossless performance, FedFQ achieves a compression ratio of 27x to 63x compared to the baseline.
DOI: 10.48550/arxiv.2408.08977
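
As a rough illustration of the ideas named in the summary (fine-grained quantization of client updates plus a simulated-annealing search for bit-widths under a communication constraint), the sketch below quantizes parameter blocks with unbiased stochastic uniform quantization and anneals per-block bit-widths against a total bit budget. This is a minimal sketch, not the authors' FedFQ implementation: the block-level (rather than parameter-level) granularity, the error/budget objective, and all function names are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's algorithm): stochastic uniform
# quantization of parameter blocks and a simulated-annealing search over
# per-block bit-widths under a total communication budget.
import numpy as np

def stochastic_quantize(x, bits):
    """Unbiased stochastic uniform quantization of a vector to `bits` bits."""
    levels = 2 ** bits - 1
    scale = np.max(np.abs(x)) + 1e-12
    y = np.abs(x) / scale * levels            # map magnitudes to [0, levels]
    low = np.floor(y)
    prob = y - low                            # probability of rounding up
    q = low + (np.random.rand(*x.shape) < prob)
    return np.sign(x) * q * scale / levels    # dequantized estimate

def quant_error(x, bits, trials=4):
    """Monte-Carlo estimate of the mean squared quantization error."""
    return np.mean([np.mean((stochastic_quantize(x, bits) - x) ** 2)
                    for _ in range(trials)])

def anneal_bitwidths(blocks, budget_bits, steps=500, t0=1.0, t_min=1e-3):
    """Simulated annealing over per-block bit-widths (1..8 bits), penalizing
    assignments that exceed the total bit budget."""
    rng = np.random.default_rng(0)
    bits = np.full(len(blocks), 4)            # start from a uniform assignment
    sizes = np.array([b.size for b in blocks])

    def cost(assign):
        err = sum(quant_error(blk, b) for blk, b in zip(blocks, assign))
        over = max(0.0, float(np.dot(sizes, assign)) - budget_bits)
        return err + 1e-6 * over              # soft budget-violation penalty

    best, best_c = bits.copy(), cost(bits)
    cur, cur_c = bits.copy(), best_c
    for step in range(steps):
        t = max(t_min, t0 * (1 - step / steps))
        cand = cur.copy()
        i = rng.integers(len(blocks))
        cand[i] = int(np.clip(cand[i] + rng.choice([-1, 1]), 1, 8))
        c = cost(cand)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand.copy(), c
    return best

# Example: assign bit-widths to three blocks of a client update,
# targeting roughly 10x compression relative to 32-bit floats.
blocks = [np.random.randn(1000), np.random.randn(5000), np.random.randn(200)]
budget = int(0.1 * 32 * sum(b.size for b in blocks))
print(anneal_bitwidths(blocks, budget))
```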