Liquid State Machine Learning for Resource and Cache Management in LTE-U Unmanned Aerial Vehicle (UAV) Networks
Published in: IEEE Transactions on Wireless Communications, Vol. 18, No. 3, pp. 1504-1517
Main Authors: , ,
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2019
Summary: In this paper, the problem of joint caching and resource allocation is investigated for a network of cache-enabled unmanned aerial vehicles (UAVs) that serve wireless ground users over the LTE licensed and unlicensed bands. The considered model focuses on users that can access both licensed and unlicensed bands while receiving contents either directly from the cache units at the UAVs or via content server-UAV-user links. This problem is formulated as an optimization problem that jointly incorporates user association, spectrum allocation, and content caching. To solve this problem, a distributed algorithm based on the machine learning framework of the liquid state machine (LSM) is proposed. Using the proposed LSM algorithm, the cloud can predict the users' content request distribution while having only limited information on the network's and users' states. The proposed algorithm also enables the UAVs to autonomously choose the resource allocation strategies that maximize the number of users with stable queues, depending on the network states. Based on the users' association and content request distributions, the optimal contents to cache at the UAVs and the optimal resource allocation are derived. Simulation results using real datasets show that the proposed approach yields up to 17.8% and 57.1% gains, respectively, in the number of users with stable queues compared with two baseline algorithms: Q-learning with cache and Q-learning without cache. The results also show that the LSM improves the convergence time by up to 20% compared with conventional learning algorithms such as Q-learning.
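The LSM named in the abstract is a reservoir-computing model: a fixed random recurrent "liquid" projects the input history into a high-dimensional state, and only a linear readout is trained. The following NumPy sketch illustrates that general idea only; it is not the paper's implementation, and the reservoir size, spectral-radius scaling, ridge readout, and toy prediction target are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 3, 100  # input features (e.g. observed network-state signals), reservoir size
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))        # fixed random input weights
W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))          # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))           # spectral radius < 1 gives fading memory

def run_reservoir(inputs):
    """Drive the reservoir with a (T, N_IN) input sequence; return (T, N_RES) states."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)               # only this fixed dynamics is "the liquid"
        states.append(x)
    return np.array(states)

# Train a ridge-regression readout on the reservoir states. The target here is
# a toy stand-in (a delayed input), not the users' actual request distribution.
T = 200
U = rng.uniform(0, 1, (T, N_IN))
y = np.roll(U[:, 0], 1)                             # predict the previous step's first input
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
pred = X @ W_out
```

The key property used in the paper's setting is that training touches only the linear readout, which keeps the per-UAV learning cheap while the fixed reservoir supplies the temporal memory.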
ISSN: 1536-1276, 1558-2248
DOI: 10.1109/TWC.2019.2891629