EIFFeL: Ensuring Integrity for Federated Learning
Main Authors:
Format: Journal Article
Language: English
Published: 23.12.2021
Summary: Federated learning (FL) enables clients to collaborate with a server to train a machine learning model. To ensure privacy, the server performs secure aggregation of updates from the clients. Unfortunately, this prevents verification of the well-formedness (integrity) of the updates, as the updates are masked. Consequently, malformed updates designed to poison the model can be injected without detection. In this paper, we formalize the problem of ensuring *both* update privacy and integrity in FL and present a new system, EIFFeL, that enables secure aggregation of *verified* updates. EIFFeL is a general framework that can enforce *arbitrary* integrity checks and remove malformed updates from the aggregate, without violating privacy. Our empirical evaluation demonstrates the practicality of EIFFeL. For instance, with 100 clients and 10% poisoning, EIFFeL can train an MNIST classification model to the same accuracy as a non-poisoned federated learner in just 2.4 s per iteration.
DOI: 10.48550/arxiv.2112.12727
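The tension the abstract describes can be made concrete with a toy example. Below is a minimal Python sketch of pairwise-masked secure aggregation in the style of Bonawitz et al., not EIFFeL's actual protocol; the modulus, `mask_update`, and `shared_seed_factory` names are illustrative stand-ins (a real system derives pairwise seeds via key agreement and handles client dropouts). It shows why a server that sees only masked vectors can recover the sum yet cannot inspect, and therefore cannot verify, any individual update.

```python
import random

PRIME = 2**61 - 1  # toy modulus; real deployments use a proper cryptographic field


def shared_seed_factory():
    """Stand-in for pairwise key agreement: each client pair (i, j) ends up
    with the same secret seed, unknown to the server."""
    seeds = {}

    def shared_seed(i, j):
        return seeds.setdefault((min(i, j), max(i, j)), random.randrange(2**32))

    return shared_seed


def mask_update(client_id, update, clients, shared_seed):
    """Add pairwise masks that cancel in the aggregate: for each peer j,
    client i adds PRG(seed_ij) if i < j and subtracts it if i > j."""
    masked = [x % PRIME for x in update]
    for j in clients:
        if j == client_id:
            continue
        prg = random.Random(shared_seed(client_id, j))  # same stream on both sides
        for k in range(len(masked)):
            m = prg.randrange(PRIME)
            masked[k] = (masked[k] + m if client_id < j else masked[k] - m) % PRIME
    return masked


def aggregate(masked_updates):
    """The server only ever sums masked vectors; the pairwise masks cancel,
    so it learns the sum but cannot check any single client's update."""
    total = [0] * len(masked_updates[0])
    for vec in masked_updates:
        total = [(t + v) % PRIME for t, v in zip(total, vec)]
    return total


# Demo: three clients, 4-dimensional integer updates.
clients = [0, 1, 2]
updates = {0: [1, 2, 3, 4], 1: [5, 6, 7, 8], 2: [9, 10, 11, 12]}
shared_seed = shared_seed_factory()
masked = [mask_update(i, updates[i], clients, shared_seed) for i in clients]
print(aggregate(masked))  # -> [15, 18, 21, 24]: the true sum, individual updates hidden
```

Running the demo prints the correct sum, but a poisoned vector from any client would flow into that sum undetected, since the server never sees an unmasked update. That is the integrity gap the paper formalizes, and which EIFFeL closes by aggregating only updates that pass an integrity check, without unmasking them.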