FLASH-RL: Federated Learning Addressing System and Static Heterogeneity using Reinforcement Learning
| Published in | 2023 IEEE 41st International Conference on Computer Design (ICCD), pp. 444-447 |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 06.11.2023 |
| Summary | We propose FLASH-RL, a framework utilizing Double Deep Q-Learning (DDQL) to address system and static heterogeneity in Federated Learning (FL). FLASH-RL introduces a new reputation-based utility function to evaluate client contributions based on their current and past performances. Additionally, an adapted DDQL algorithm is proposed to expedite the learning process. Experimental results on MNIST and CIFAR-10 datasets demonstrate that FLASH-RL strikes a balance between model performance and end-to-end latency, reducing latency by up to 24.83% compared to FedAVG and 24.67% compared to FAVOR. It also reduces training rounds by up to 60.44% compared to FedAVG and 76% compared to FAVOR. Similar improvements are observed on the MobiAct Dataset for fall detection, underscoring the real-world applicability of our approach. |
| ISSN | 2576-6996 |
| DOI | 10.1109/ICCD58817.2023.00074 |
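The summary describes a reputation-based utility that scores each client from its current and past performances before selection. The paper's actual formula is not given in this record; the sketch below is a minimal illustration of the general idea, assuming a hypothetical exponential-decay blend of history and current score, with invented names (`update_reputation`, `select_clients`).

```python
# Illustrative sketch only -- NOT FLASH-RL's actual utility function.
# Assumes per-round client scores in [0, 1] and an exponential decay
# that weights past reputation against the current round's performance.

def update_reputation(past_reputation: float, current_score: float,
                      decay: float = 0.5) -> float:
    """Blend past reputation with the current round's score.

    Higher decay -> longer memory of past performance.
    """
    return decay * past_reputation + (1.0 - decay) * current_score

def select_clients(reputations: dict, k: int) -> list:
    """Pick the k clients with the highest reputation."""
    return sorted(reputations, key=reputations.get, reverse=True)[:k]

# Example: three clients observed over two training rounds.
reps = {"c1": 0.0, "c2": 0.0, "c3": 0.0}
round_scores = [{"c1": 0.9, "c2": 0.4, "c3": 0.6},
                {"c1": 0.5, "c2": 0.8, "c3": 0.9}]
for scores in round_scores:
    for client, score in scores.items():
        reps[client] = update_reputation(reps[client], score)

print(select_clients(reps, 2))  # the two clients with the best blended record
```

In the paper this kind of utility feeds a DDQL agent that learns which clients to select, trading off model quality against end-to-end latency; the decay parameter here merely stands in for however FLASH-RL weights history.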