An Optimized Sparse Response Mechanism for Differentially Private Federated Learning
Published in: IEEE Transactions on Dependable and Secure Computing, Vol. 21, No. 4, pp. 2285–2295
Main Authors: , , ,
Format: Journal Article
Language: English
Published: Washington: IEEE Computer Society, 01.07.2024
Summary: Federated Learning (FL) enables geo-distributed clients to collaboratively train a model without exposing their private data. By exposing only local model parameters, FL preserves clients' data privacy. Yet it remains possible to recover raw samples from frequently exposed parameters, resulting in privacy leakage. Differentially private federated learning (DPFL) has recently been proposed to protect these parameters by injecting noise, so that even if attackers obtain the exposed parameters, they cannot exactly infer the true parameters from the noisy information. Directly incorporating differential privacy (DP) into FL, however, can severely degrade model utility. In this article, we present an optimized sparse response mechanism (OSRM) that seamlessly incorporates DP into FL to reduce privacy-budget consumption and improve model accuracy. With OSRM, each FL client exposes only a selected set of large gradients, so as not to waste privacy budget on protecting valueless gradients. We theoretically derive the convergence rate of DPFL with OSRM under non-convex loss, and then optimize OSRM by minimizing the loss bound of the convergence rate. Based on this analysis, we present an effective algorithm for optimizing OSRM. Extensive experiments on public datasets, including MNIST, Fashion-MNIST, and CIFAR-10, show that OSRM achieves an average accuracy improvement of 18.42% over state-of-the-art baselines under a fixed privacy budget.
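The core idea described in the summary — exposing only a selected set of large gradients so that the privacy budget is not spent on small, valueless coordinates — can be illustrated with a minimal sketch. This is not the authors' actual OSRM algorithm; the function name, top-k magnitude selection, clipping bound, and Gaussian-noise parameters below are all illustrative assumptions:

```python
import numpy as np

def sparse_dp_response(grad, k, clip_norm, noise_sigma, seed=None):
    """Illustrative sketch: keep only the k largest-magnitude gradient
    entries, clip them to bound sensitivity, and add Gaussian noise to
    the exposed coordinates before releasing them."""
    rng = np.random.default_rng(seed)
    flat = grad.ravel().copy()
    # indices of the k largest-magnitude entries
    top = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[top] = flat[top]
    # clip the sparse vector so its L2 norm is at most clip_norm
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:
        sparse *= clip_norm / norm
    # add noise only on the k exposed coordinates, not the whole vector
    sparse[top] += rng.normal(0.0, noise_sigma * clip_norm, size=k)
    return sparse.reshape(grad.shape)

g = np.array([0.05, -2.0, 0.3, 1.5, -0.01])
noisy = sparse_dp_response(g, k=2, clip_norm=1.0, noise_sigma=0.5, seed=0)
# only the two largest-magnitude coordinates (indices 1 and 3) are exposed
```

Because the small coordinates are zeroed out rather than noised, the noise scale needed for a given privacy guarantee applies to only k coordinates, which is the intuition behind the paper's claim of reduced budget consumption.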
ISSN: 1545-5971, 1941-0018
DOI: 10.1109/TDSC.2023.3302864