Federated Learning Meets Multi-Objective Optimization

Bibliographic Details
Published in: IEEE Transactions on Network Science and Engineering, Vol. 9, No. 4, pp. 2039-2051
Main Authors: Hu, Zeou; Shaloudegi, Kiarash; Zhang, Guojun; Yu, Yaoliang
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2022
Summary: Federated learning has emerged as a promising, massively distributed way to train a joint deep model over large numbers of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and robustness against malicious adversaries, we formulate federated learning as multi-objective optimization and propose a new algorithm, FedMGDA+, that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of FedMGDA+ and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against the state of the art.
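The summary describes FedMGDA+ only at a high level. The multiple-gradient-descent (MGDA) building block it extends finds the minimum-norm point in the convex hull of per-client gradients; the negative of that point is a common descent direction that, to first order, does not increase any client's loss. The sketch below is illustrative only: the function name and the Frank-Wolfe solver choice are ours, and the paper's actual FedMGDA+ adds further ingredients (e.g. gradient normalization and constrained weights) not shown here.

```python
import numpy as np

def min_norm_direction(grads, iters=50):
    """Frank-Wolfe solver for the MGDA subproblem: find simplex weights
    lam minimizing ||sum_i lam_i * g_i||^2 over per-client gradients g_i.
    The minimizer d = sum_i lam_i * g_i is a common descent direction."""
    G = np.asarray(grads, dtype=float)        # shape (n_clients, dim)
    n = G.shape[0]
    lam = np.full(n, 1.0 / n)                 # start from uniform weights
    for _ in range(iters):
        d = lam @ G                           # current convex combination
        t = int(np.argmin(G @ d))             # vertex minimizing <g_t, d>
        diff = d - G[t]
        denom = diff @ diff
        if denom < 1e-12:                     # current point is the vertex
            break
        gamma = np.clip((d @ diff) / denom, 0.0, 1.0)  # exact line search
        lam = (1.0 - gamma) * lam             # move toward vertex t
        lam[t] += gamma
    return lam, lam @ G

# Two conflicting client gradients: averaging them naively would favor
# the larger one; the min-norm combination balances both.
g = [[1.0, 0.0], [0.0, 2.0]]
lam, d = min_norm_direction(g)
# Stepping along -d decreases neither client's loss at the other's expense,
# since <d, g_i> >= 0 for every client i.
```

A server-side round would then apply `-d` (scaled by a learning rate) as the shared update, in place of FedAvg's plain weighted average of client updates.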
ISSN: 2327-4697, 2334-329X
DOI: 10.1109/TNSE.2022.3169117