RHFedMTL: Resource-Aware Hierarchical Federated Multitask Learning

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 11, No. 14, pp. 25227-25238
Main Authors: Yi, Xingfu; Li, Rongpeng; Peng, Chenghui; Wang, Fei; Wu, Jianjun; Zhao, Zhifeng
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.07.2024
More Information
Summary: The wide applications of artificial intelligence (AI) on massive Internet of Things devices or smartphones raise significant concerns about privacy, heterogeneity, and resource efficiency. Correspondingly, federated learning (FL) emerges as an effective way to enable AI over massively distributed nodes without uploading the raw data. Conventional works mostly focus on learning a single unified model for one solitary task. Multitask learning (MTL) outperforms single-task learning by training multiple models concurrently, leading to reduced model sizes and increased flexibility. However, existing FL efforts often face challenges in efficiently managing MTL scenarios, particularly in the presence of stragglers, without incurring prohibitive computation and communication costs. In this article, inspired by the natural cloud-base station (BS)-terminal hierarchy of cellular networks, we provide a viable resource-aware hierarchical federated MTL (RHFedMTL) solution to address the task heterogeneity arising from different non-independent and identically distributed (non-IID) training data sets. Specifically, a primal-dual method is leveraged to transform the coupled MTL problem into local optimization subproblems within BSs. This makes it possible to solve different tasks within a BS and aggregate the multitask result in the cloud without uploading the raw data. Furthermore, compared with existing methods that reduce resource costs by simply changing the aggregation frequency, we dive into the intricate relationship between resource consumption and learning accuracy, and develop a resource-aware learning strategy that adjusts the iteration numbers on local terminals and BSs to meet the resource budget. Extensive simulation results demonstrate the effectiveness and superiority of RHFedMTL in terms of improving the learning accuracy and boosting the convergence rate.
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3392584
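
The summary above describes a three-tier cloud-BS-terminal training loop with per-task aggregation at the BSs and a resource budget that caps iteration counts. The following is a minimal Python sketch of that structure, not the paper's actual algorithm: the toy least-squares objective, the topology constants, the pick_iters budget heuristic, and the soft cloud-side coupling are all illustrative assumptions standing in for the primal-dual formulation the abstract describes.

# Hypothetical sketch of the hierarchy in the summary: terminals run a few
# local steps, each base station (BS) aggregates its terminals for one task,
# and the cloud softly couples the per-task models. All names and constants
# below are illustrative assumptions, not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                          # model dimension (assumption)
N_BS, TERMINALS_PER_BS = 3, 4    # toy topology (assumption)

def make_terminal_data(w_task, n=20):
    """Synthetic non-IID data: each task has its own ground-truth w_task."""
    X = rng.normal(size=(n, DIM))
    y = X @ w_task + 0.1 * rng.normal(size=n)
    return X, y

def terminal_update(w, data, local_iters, lr=0.05):
    """Run local_iters gradient steps on one terminal; raw data never leaves."""
    X, y = data
    for _ in range(local_iters):
        w = w - lr * (2.0 / len(y)) * X.T @ (X @ w - y)
    return w

def pick_iters(budget, n_terminals, cost_per_iter):
    """Crude resource-aware rule (assumption): split the per-round budget
    evenly across terminals to choose the local iteration count."""
    return max(1, int(budget / (n_terminals * cost_per_iter)))

tasks = [rng.normal(size=DIM) for _ in range(N_BS)]
data = [[make_terminal_data(w) for _ in range(TERMINALS_PER_BS)] for w in tasks]

task_models = [np.zeros(DIM) for _ in range(N_BS)]
local_iters = pick_iters(budget=120.0, n_terminals=TERMINALS_PER_BS,
                         cost_per_iter=2.0)

for _ in range(30):                                  # global (cloud) rounds
    for t in range(N_BS):                            # one BS per task
        locals_ = [terminal_update(task_models[t].copy(), d, local_iters)
                   for d in data[t]]
        task_models[t] = np.mean(locals_, axis=0)    # BS-level aggregation
    # Cloud step: pull each task model toward the cross-task mean, a simple
    # stand-in for the paper's primal-dual multitask coupling (assumption).
    mean_w = np.mean(task_models, axis=0)
    task_models = [0.9 * w + 0.1 * mean_w for w in task_models]

for t, (w, w_star) in enumerate(zip(task_models, tasks)):
    print(f"task {t}: ||w - w*|| = {np.linalg.norm(w - w_star):.3f}")

In the paper itself, the cross-task coupling comes from the primal-dual decomposition, and both the terminal-level and BS-level iteration counts are tuned against the resource budget; the sketch fixes the BS schedule and uses plain averaging purely to keep the example short.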