ROBUST AGGREGATION FOR FEDERATED DATASET DISTILLATION

Bibliographic Details
Main Authors: da Silva, Pablo Nascimento; Gottin, Vinicius Michel; Ferreira, Paulo Abelha
Format: Patent
Language: English
Published: 25.07.2024

Summary: Robust federated dataset distillation is disclosed. A model (or models) is optimized with a distilled dataset at a central node. The models or model weights are transmitted to nodes, which generate loss evaluations by running the optimized models on their real data. The loss evaluations are returned to the central node, where they are robustly aggregated to produce an average loss. Robust aggregation allows outlier or suspect loss evaluations to be excluded. Once these evaluations are excluded, an update, which may include gradients, is applied to the distilled dataset and the process repeats. The distilled dataset can then be used, for example, when deploying a model to a new node that lacks sufficient data to train the model, or for other reasons.
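
The loop in the summary can be made concrete. The following is a minimal sketch, not the patent's implementation: it assumes a linear regression model, a one-step unrolled inner optimization so gradients can flow back to the distilled dataset, and a median/MAD cutoff as one possible rule for excluding outlier loss evaluations. All names, sizes, and hyperparameters (make_node_data, robust_aggregate, INNER_LR, OUTER_LR, z_thresh) are illustrative assumptions, not details taken from the patent text.

import torch

torch.manual_seed(0)

# Hypothetical sizes and learning rates (assumptions, not from the patent).
D, N_DISTILLED, N_NODES = 8, 16, 5
INNER_LR, OUTER_LR, ROUNDS = 0.1, 0.05, 50

# Each node holds private "real" data; one node is corrupted to show
# why robust aggregation of the returned loss evaluations matters.
true_w = torch.randn(D)
def make_node_data(n=64, corrupted=False):
    x = torch.randn(n, D)
    y = x @ true_w + 0.1 * torch.randn(n)
    if corrupted:
        y = y + 50.0            # faulty or adversarial labels
    return x, y
node_data = [make_node_data(corrupted=(i == 0)) for i in range(N_NODES)]

# The distilled dataset lives at the central node and is learnable.
xs = torch.randn(N_DISTILLED, D, requires_grad=True)
ys = torch.randn(N_DISTILLED, requires_grad=True)

def mse(w, x, y):
    return ((x @ w - y) ** 2).mean()

def robust_aggregate(losses, z_thresh=2.5):
    """Exclude outlier loss evaluations via a median/MAD rule, then
    average the survivors (one of several possible robust rules)."""
    vals = torch.stack(losses)
    med = vals.detach().median()
    mad = (vals.detach() - med).abs().median() + 1e-8
    keep = ((vals.detach() - med).abs() / mad) <= z_thresh
    return vals[keep].mean()

for rnd in range(ROUNDS):
    # 1) Optimize a model on the distilled dataset (one unrolled SGD
    #    step, so gradients can flow back to xs and ys).
    w0 = torch.zeros(D, requires_grad=True)
    g = torch.autograd.grad(mse(w0, xs, ys), w0, create_graph=True)[0]
    w1 = w0 - INNER_LR * g

    # 2) "Transmit" the optimized model; each node evaluates its loss
    #    on real data and returns it to the central node.
    losses = [mse(w1, x, y) for x, y in node_data]

    # 3) Robustly aggregate, excluding suspect evaluations.
    avg_loss = robust_aggregate(losses)

    # 4) Apply the gradient update to the distilled dataset.
    gx, gy = torch.autograd.grad(avg_loss, [xs, ys])
    with torch.no_grad():
        xs -= OUTER_LR * gx
        ys -= OUTER_LR * gy

print(f"final robust-aggregated loss: {avg_loss.item():.4f}")

In this sketch the corrupted node's inflated losses fall outside the median/MAD band and are dropped from the average, so the gradient applied to the distilled dataset reflects only the plausible evaluations; a trimmed mean or another robust statistic could play the same role.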
Bibliography: Application Number: US202318157966