Online Sparse Beamforming in C-RAN: A Deep Reinforcement Learning Approach
Published in | IEEE Wireless Communications and Networking Conference (WCNC), pp. 1 - 6 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 29.03.2021 |
ISSN | 1558-2612 |
DOI | 10.1109/WCNC49053.2021.9417394 |
Summary: | Higher communication rates are required as the cloud radio access network (C-RAN) becomes a key component of 5G wireless communication, yet the problem of using sparse beamforming to maximize the long-term achievable sum rate subject to transmit power constraints remains open in C-RAN. Inspired by the success of Deep Reinforcement Learning (DRL) in solving dynamic programming problems, we propose a DRL-based framework for online sparse beamforming in C-RAN. In particular, the DRL agent handles remote radio head (RRH) activation based on the defined state space, action space, and reward function, and in each decision period also decides the transmit beamforming at the active RRHs. Through simulations, we evaluate the proposed framework against traditional approaches and show that it achieves a higher sum rate in time-varying network environments. |
---|---|
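The summary describes an agent loop: observe the network state, choose an RRH activation pattern, and earn a reward tied to the achievable sum rate and the transmit power spent. The toy sketch below illustrates that loop only; it is not the authors' implementation. It substitutes tabular Q-learning for the deep network, and the state space (a single quantized channel-quality level), the rate/power model, and the i.i.d. channel evolution are all assumptions made for the example.

```python
import numpy as np

# Toy model (all constants and dynamics are illustrative assumptions):
#   state  - quantized channel quality, 0 (worst) .. N_STATES-1 (best)
#   action - an on/off activation pattern over N_RRH remote radio heads
#   reward - sum rate of active RRHs minus a transmit-power penalty

rng = np.random.default_rng(0)
N_RRH = 3
N_ACTIONS = 2 ** N_RRH   # every on/off activation pattern
N_STATES = 4             # coarse channel-quality levels
SNR_SCALE = 20.0         # toy SNR factor (assumed)
POWER_COST = 3.5         # penalty per active RRH (assumed)

def reward(state, action):
    """Sum rate over active RRHs minus the transmit-power penalty."""
    gain = (state + 1) / N_STATES                   # better channel -> higher gain
    rate_per_rrh = np.log2(1 + SNR_SCALE * gain)    # toy per-RRH rate model
    n_active = bin(action).count("1")
    return n_active * (rate_per_rrh - POWER_COST)

# Tabular Q-learning stands in for the deep network of the paper.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.05, 0.9, 0.2
state = int(rng.integers(N_STATES))
for _ in range(20000):
    # epsilon-greedy RRH activation decision for this period
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(Q[state].argmax())
    r = reward(state, action)
    next_state = int(rng.integers(N_STATES))        # i.i.d. channel evolution (assumed)
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

# The learned policy should activate more RRHs when the channel is good
# and sleep them when the rate gain cannot pay for the transmit power.
best_good = int(Q[N_STATES - 1].argmax())   # pattern chosen in the best channel state
best_bad = int(Q[0].argmax())               # pattern chosen in the worst channel state
```

Under this cost model, activating an RRH in the worst channel state loses more power than it gains in rate, so the learned activation pattern is sparser there than in the best state; a deep network replaces the Q-table when the state is continuous channel-state information rather than a quantized level.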