Inter-Cell Slicing Resource Partitioning via Coordinated Multi-Agent Deep Reinforcement Learning

Bibliographic Details
Published in: ICC 2022 - IEEE International Conference on Communications, pp. 3202-3207
Main Authors: Hu, Tianlun; Liao, Qi; Liu, Qiang; Wellington, Dan; Carle, Georg
Format: Conference Proceeding
Language: English
Published: IEEE, 16.05.2022
Summary: Network slicing enables the operator to configure virtual network instances for diverse services with specific requirements. To achieve slice-aware radio resource scheduling, dynamic slicing resource partitioning is needed to orchestrate multi-cell slice resources and mitigate inter-cell interference. It is, however, challenging to derive analytical solutions due to the complex inter-cell interdependencies, inter-slice resource constraints, and service-specific requirements. In this paper, we propose a multi-agent deep reinforcement learning (DRL) approach that improves the max-min slice performance while satisfying the resource capacity constraints. We design two coordination schemes that allow distributed agents to coordinate and mitigate inter-cell interference. The proposed approach is extensively evaluated in a system-level simulator. The numerical results show that the proposed approach with inter-agent coordination outperforms the centralized approach in terms of delay and convergence. Compared to the baseline approach, it achieves more than a two-fold increase in resource efficiency.
ISSN: 1938-1883
DOI: 10.1109/ICC45855.2022.9838518
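
Note: The summary above does not disclose the reward definition, the training procedure, or the two coordination schemes, so the following Python sketch only illustrates the general setup under stated assumptions: each per-cell agent maps its policy output to a per-slice resource partition that sums to one (so the per-cell capacity constraint holds by construction), agents exchange their partitions as a simple coordination message appended to local observations, and all agents share a max-min reward over slices. Every name and the toy interference model are hypothetical and are not the authors' implementation or simulator.

    # Minimal sketch (not the paper's implementation): per-cell agents pick a
    # slice resource partition, share it as a coordination message, and are
    # rewarded with the worst slice's performance (max-min objective).
    import numpy as np

    N_CELLS, N_SLICES = 3, 2
    rng = np.random.default_rng(0)

    def partition_from_logits(logits):
        # Softmax projection: shares are positive and sum to 1, so the
        # per-cell resource capacity constraint is satisfied by construction.
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def toy_slice_performance(partitions, coupling=0.5):
        # Hypothetical stand-in for the system-level simulator: a cell's slice
        # performance grows with its own share and shrinks with the shares the
        # neighboring cells allocate to the same slice (inter-cell interference).
        n_cells = partitions.shape[0]
        perf = np.zeros(N_SLICES)
        for s in range(N_SLICES):
            share = partitions[:, s]
            per_cell = np.array([
                share[c] / (1.0 + coupling * (share.sum() - share[c]))
                for c in range(n_cells)
            ])
            perf[s] = per_cell.mean()
        return perf

    # One decision round: stand-in policy outputs, coordination, shared reward.
    logits = rng.normal(size=(N_CELLS, N_SLICES))
    partitions = np.vstack([partition_from_logits(l) for l in logits])
    message = partitions.mean(axis=0)  # coordination: agents share their partitions
    observations = [np.concatenate([partitions[c], message]) for c in range(N_CELLS)]
    reward = toy_slice_performance(partitions).min()  # max-min slice performance

    print("per-cell slice partitions:\n", np.round(partitions, 3))
    print("observation length per agent (local + shared message):", len(observations[0]))
    print("shared max-min reward:", round(float(reward), 4))

Projecting actions through a softmax is one simple way to keep the inter-slice resource constraint satisfied at every step; the actual coordination schemes, reward, and evaluation setup are described in the full text of the paper.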