Fast-convergent federated learning with class-weighted aggregation

Bibliographic Details
Published in: Journal of Systems Architecture, Vol. 117, p. 102125
Main Authors: Ma, Zezhong; Zhao, Mengying; Cai, Xiaojun; Jia, Zhiping
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.08.2021
Summary: Recently, federated learning has attracted great attention due to its ability to train models in a distributed manner. Instead of uploading data for centralized training, it allows devices to keep local data private and send only model parameters to the server, which then aggregates the local models into a global model. In this paper, we study the aggregation problem in federated learning, especially with non-independently and identically distributed (non-IID) data. Since existing schemes may degrade the representativeness of local models after aggregation, we propose to reallocate the weights of local models based on their contributions to each class. Two class-weighted aggregation strategies are then developed to improve communication efficiency in federated learning. Evaluation shows that the proposed schemes reduce communication costs by 30.49% and 23.59%, respectively, compared with FedAvg.
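
A minimal sketch of the idea the abstract describes, assuming NumPy, flattened model parameters, and per-class sample counts reported by each client. The specific weighting rule here (a client's aggregation weight is its average share of each class's samples) and the function names fedavg and class_weighted_aggregate are illustrative assumptions, not the paper's exact algorithm:

import numpy as np

def fedavg(local_models, sample_counts):
    # Baseline FedAvg: weight each client by its share of the total samples.
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

def class_weighted_aggregate(local_models, class_counts):
    # Hypothetical class-weighted rule (an assumption, not the paper's exact
    # scheme): a client's weight is its average per-class data share, so a
    # client holding most of a rare class is not diluted by a small dataset.
    # local_models: list of flat np.ndarray parameter vectors
    # class_counts: (num_clients, num_classes) per-class sample counts;
    #               every class is assumed to appear at some client.
    counts = np.asarray(class_counts, dtype=float)
    class_shares = counts / counts.sum(axis=0, keepdims=True)
    weights = class_shares.mean(axis=1)  # one weight per client
    weights /= weights.sum()             # normalize
    return sum(w * m for w, m in zip(weights, local_models))

# Toy non-IID example: client 0 holds all of class 0, client 1 all of class 1.
models = [np.zeros(4), np.ones(4)]
counts = [[100, 0],
          [0, 10]]
print(fedavg(models, [100, 10]))                 # ~[0.09 ...]: client 1 diluted
print(class_weighted_aggregate(models, counts))  # [0.5 ...]: equal class votes

In this sketch, FedAvg weights the second client at only 10/110 because its dataset is small, while the class-weighted rule gives it half the vote because it is the sole contributor to class 1, which is the intuition behind reallocating weights by per-class contribution.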
ISSN: 1383-7621; 1873-6165
DOI: 10.1016/j.sysarc.2021.102125