RLDRM: Closed Loop Dynamic Cache Allocation with Deep Reinforcement Learning for Network Function Virtualization
Published in | 2020 6th IEEE Conference on Network Softwarization (NetSoft), pp. 335-343
---|---
Format | Conference Proceeding
Language | English
Published | IEEE, 01.06.2020
Summary: Network function virtualization (NFV) technology has attracted tremendous interest from the telecommunication industry and data center operators, as it allows service providers to assign resources to Virtual Network Functions (VNFs) on demand, achieving better flexibility, programmability, and scalability. To improve server utilization, one popular practice is to deploy best-effort (BE) workloads alongside high-priority (HP) VNFs when the HP VNFs' resource usage is detected to be low. The key challenge of this deployment scheme is to dynamically balance the service level objectives (SLOs) and the total cost of ownership (TCO) to optimize data center efficiency under inherently fluctuating workloads. With recent advances in deep reinforcement learning, we conjecture that it has the potential to solve this challenge by adaptively adjusting resource allocation to achieve improved performance and higher server utilization. In this paper, we present RLDRM (Reinforcement Learning Dynamic Resource Management), a closed-loop automation system that dynamically adjusts Last Level Cache (LLC) allocation between HP VNFs and BE workloads using deep reinforcement learning. The results demonstrate improved server utilization while maintaining the required SLO for the HP VNFs.
DOI: 10.1109/NetSoft48620.2020.9165471
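
As a rough illustration of the closed-loop idea described in the summary, the sketch below shows a tabular Q-learning agent that periodically re-partitions LLC ways between an HP VNF and BE workloads. This is not the authors' RLDRM implementation: the paper uses deep reinforcement learning, and the telemetry, reward shaping, SLO target, and the `read_metrics` / `apply_llc_ways` helpers here are hypothetical placeholders (a real actuator would drive Intel RDT/CAT, for example via the `pqos` utility).

```python
import random
from collections import defaultdict

TOTAL_WAYS = 11          # assumed number of LLC ways on the platform
ACTIONS = (-1, 0, +1)    # shrink / keep / grow the HP VNF's share of ways
SLO_LATENCY_US = 500.0   # hypothetical latency SLO for the HP VNF

def read_metrics():
    """Placeholder telemetry: return (hp_latency_us, be_throughput).
    A real system would read hardware counters and application stats."""
    return random.uniform(300, 700), random.uniform(0.2, 1.0)

def apply_llc_ways(hp_ways):
    """Placeholder actuator: re-partition LLC ways between HP and BE classes.
    A real deployment might invoke Intel CAT, e.g. through the pqos tool."""
    pass

def reward(hp_latency_us, be_throughput):
    # Penalize SLO violations heavily; otherwise reward BE progress.
    return -10.0 if hp_latency_us > SLO_LATENCY_US else be_throughput

def state_of(hp_latency_us, hp_ways):
    # Coarse, discretized state: latency headroom bucket + current allocation.
    headroom = int((SLO_LATENCY_US - hp_latency_us) // 100)
    return (max(-3, min(3, headroom)), hp_ways)

def control_loop(steps=1000, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)            # Q[(state, action)]
    hp_ways = TOTAL_WAYS // 2
    lat, thr = read_metrics()
    s = state_of(lat, hp_ways)
    for _ in range(steps):
        # Epsilon-greedy action selection over the three re-allocation moves.
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: q[(s, x)]))
        hp_ways = max(1, min(TOTAL_WAYS - 1, hp_ways + a))
        apply_llc_ways(hp_ways)
        lat, thr = read_metrics()
        r = reward(lat, thr)
        s2 = state_of(lat, hp_ways)
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2
    return q

if __name__ == "__main__":
    control_loop()
```

The paper's system presumably replaces this tabular Q function with a deep network and richer platform telemetry, but the observe-decide-actuate loop that balances HP SLO compliance against BE utilization has the same overall shape.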