A Deep Learning Model Generation Framework for Virtualized Multi-Access Edge Cache Management
Published in: IEEE Access, Vol. 7, pp. 62734-62749
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2019
Summary: To reduce network traffic and service delay in next-generation networks, popular contents (e.g., videos and music) are proposed to be temporarily stored in caches located at edge nodes such as base stations. The challenging issue in the caching process is to correctly predict which popular contents to store, since the more popular the cached contents are, the greater the reduction in network traffic and service delay. Furthermore, network virtualization proposes decoupling the existing cellular network into infrastructure providers (InPs) and mobile virtual network operators (MVNOs) to reduce capital and operating costs. In this architecture, MVNOs lease physical resources (network capacity and cache storage) from the InPs, the owners of the resources, to provide services to their users. On the one hand, if an MVNO leases more resources than necessary, the excess is wasted; on the other hand, if it leases fewer resources than necessary, traffic and service delay increase. Our objective is to lease just enough resources, neither under- nor over-provisioning, and to store the most popular contents. Thus, we propose a deep learning-based prediction scheme that intelligently manages the resource leasing and caching process to improve the MVNO's profit. The main challenge in applying deep learning is searching for the prediction model best suited to the specific problem. Hence, we also propose a reinforcement learning-based model searching scheme to find the best-suited deep learning model. We implement the prediction models using the Keras and TensorFlow libraries, and the performance of the cache leasing and caching schemes is evaluated with a Python-based simulator. In terms of utility, simulation results show that the proposed scheme outperforms the randomized caching with optimal cache leasing scheme by 46%.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2916080
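The record itself contains no code, but as a rough illustration of the kind of Keras/TensorFlow popularity predictor the summary describes, the sketch below trains a small feed-forward model on synthetic request counts and ranks contents by predicted demand. The feature window, layer sizes, function names, and synthetic data are assumptions for illustration only, not the authors' implementation; in the paper, the architecture itself is chosen by a reinforcement learning-based model search.

```python
# Minimal sketch (assumptions, not the paper's code): a Keras model that maps
# a content item's recent request history to its expected demand next slot.
import numpy as np
from tensorflow import keras

WINDOW = 24  # assumed number of past time slots of request counts per content item


def build_popularity_model(hidden_units=(64, 32)):
    """Feed-forward popularity predictor; depth and width are placeholders for
    the hyperparameters that the paper's RL-based model search would tune."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(WINDOW,)))
    for units in hidden_units:
        model.add(keras.layers.Dense(units, activation="relu"))
    model.add(keras.layers.Dense(1))  # predicted request rate in the next slot
    model.compile(optimizer="adam", loss="mse")
    return model


# Toy usage with synthetic request traces (stand-ins for real demand data).
requests = np.random.poisson(lam=5.0, size=(1000, WINDOW)).astype("float32")
targets = requests[:, -3:].mean(axis=1, keepdims=True)  # synthetic next-slot demand

model = build_popularity_model()
model.fit(requests, targets, epochs=2, batch_size=32, verbose=0)

# Rank a batch of contents by predicted popularity; the top-k would be cached.
scores = model.predict(requests[:10], verbose=0).ravel()
top_k = np.argsort(scores)[::-1][:5]
```

In a setup like the one the summary outlines, such predicted scores would decide which contents an MVNO caches and how much cache storage it leases from the InP; the exact utility and leasing formulation are given in the paper, not here.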