Knowledge Caching for Federated Learning in Wireless Cellular Networks
| Published in | IEEE Transactions on Wireless Communications, Vol. 23, no. 8, pp. 9235–9250 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE, 01.08.2024 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Summary | This work examines a novel wireless knowledge caching framework in which machine learning models (i.e., knowledge) are cached at local small-cell base stations (SBSs) to facilitate both federated training and user access to the models. We first consider a single-SBS scenario, in which the caching decision, user selection, and wireless resource allocation are jointly determined by minimizing a training error bound subject to constraints on the cache capacity, the communication and computation latency, and the energy consumption. The solution is obtained by first computing the minimum achievable training loss for each model and then optimizing the binary caching variables, which reduces to a 0-1 knapsack problem. The framework is then extended to the multiple-SBS scenario, where user association among SBSs is further examined. We adopt a dual-ascent method in which Lagrange multipliers are introduced and updated in each iteration to regularize the coupling between user selection and user association. Given the Lagrange multipliers, the caching decision, user selection, resource allocation, and user association variables are optimized in turn using a block coordinate descent algorithm. Simulation results show that the proposed scheme achieves a lower training error bound than preference-only and random caching policies in both scenarios. |
| ISSN | 1536-1276, 1558-2248 |
| DOI | 10.1109/TWC.2024.3360642 |
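The summary notes that, once the minimum achievable training loss of each model is known, the binary caching decision reduces to a 0-1 knapsack problem. A minimal sketch of that reduction, solved by standard dynamic programming; the model sizes, values, and the `knapsack_caching` helper are hypothetical illustrations, not quantities or code from the paper:

```python
def knapsack_caching(sizes, values, capacity):
    """0-1 knapsack via dynamic programming (a sketch of the caching step).

    sizes[m]  -- storage cost of model m (hypothetical units)
    values[m] -- benefit of caching model m, e.g. the reduction in the
                 training error bound from serving it locally (hypothetical)
    capacity  -- SBS cache capacity
    Returns (best_total_value, sorted list of cached model indices).
    """
    n = len(sizes)
    # dp[c] = best total value achievable with cache budget c
    dp = [0] * (capacity + 1)
    # choice[m][c] records whether model m was taken at budget c
    choice = [[False] * (capacity + 1) for _ in range(n)]
    for m in range(n):
        # Iterate budgets downward so each model is used at most once
        for c in range(capacity, sizes[m] - 1, -1):
            if dp[c - sizes[m]] + values[m] > dp[c]:
                dp[c] = dp[c - sizes[m]] + values[m]
                choice[m][c] = True
    # Backtrack to recover the binary caching decision
    chosen, c = [], capacity
    for m in range(n - 1, -1, -1):
        if choice[m][c]:
            chosen.append(m)
            c -= sizes[m]
    return dp[capacity], sorted(chosen)

# Example with made-up numbers: three models, cache budget 5.
print(knapsack_caching([3, 4, 2], [30, 50, 15], 5))  # -> (50, [1])
```

With budget 5, caching model 1 alone (value 50) beats caching models 0 and 2 together (value 45); with budget 7 the same routine would instead select models 0 and 1. The paper's multi-SBS extension layers a dual-ascent loop and block coordinate descent on top of this inner decision, which is not sketched here.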