Online Sparse Beamforming in C-RAN: A Deep Reinforcement Learning Approach

Bibliographic Details
Published in: IEEE Wireless Communications and Networking Conference (WCNC), pp. 1-6
Main Authors: Zhong, Chong-Hao; Guo, Kun; Zhao, Mingxiong
Format: Conference Proceeding
Language: English
Published: IEEE, 29.03.2021
ISSN: 1558-2612
DOI: 10.1109/WCNC49053.2021.9417394

More Information
Summary: As the cloud radio access network (C-RAN) becomes a significant component of 5G wireless communication, higher communication rates are required, yet the problem of using sparse beamforming to maximize the long-term achievable sum rate subject to transmit power constraints remains open in C-RAN. Inspired by the success of deep reinforcement learning (DRL) in solving dynamic programming problems, we propose a DRL-based framework for online sparse beamforming in C-RAN. In particular, the DRL agent handles remote radio head (RRH) activation based on the defined state space, action space, and reward function, and in each decision period also decides the transmit beamforming at the active RRHs. Through simulations, we evaluate the proposed framework against traditional approaches and show that it achieves a higher sum rate in time-varying network environments.
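The summary describes a DRL agent that activates a subset of RRHs and sets transmit beamforming to maximize the sum rate under power constraints. The paper's actual state space, reward function, and beamforming optimizer are not given here, so the following is only a heavily simplified, hypothetical sketch of the idea: a tabular, bandit-style learner over discrete RRH activation patterns, with equal-power transmission and a Shannon-rate reward standing in for the beamforming step. All constants (numbers of RRHs and users, power budget, penalty weight) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy C-RAN setup (all values hypothetical): R RRHs serving K users.
# Action = which subset of RRHs to activate; reward = achievable sum
# rate minus a small penalty per active RRH (power/fronthaul cost).
rng = np.random.default_rng(0)
R, K = 3, 2        # RRHs, users
P_max = 1.0        # total transmit power budget
noise = 0.1        # receiver noise power

# All non-empty activation patterns, e.g. (0, 0, 1), (0, 1, 0), ...
actions = [tuple(int(b) for b in format(a, f"0{R}b")) for a in range(1, 2**R)]

def sum_rate(active, h):
    # Equal-power transmission from the active RRHs; a simple
    # SNR-based rate stands in for the paper's beamforming design.
    g = (h * np.array(active)).sum(axis=1)   # aggregated gain per user
    p = P_max / max(sum(active), 1)          # power split across active RRHs
    return float(np.log2(1.0 + p * np.abs(g) ** 2 / noise).sum())

# Stateless Q-learning as a stand-in for the DRL agent; a real system
# would use a deep network over a richer state (channel conditions,
# queue lengths, previous activations, ...).
Q = np.zeros(len(actions))
alpha, eps = 0.1, 0.2
for t in range(2000):
    h = rng.standard_normal((K, R))          # time-varying channel draw
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q.argmax())
    r = sum_rate(actions[a], h) - 0.05 * sum(actions[a])
    Q[a] += alpha * (r - Q[a])               # one-step value update

best = actions[int(Q.argmax())]
print("learned activation pattern:", best)
```

The epsilon-greedy loop trades off exploring activation patterns against exploiting the current best estimate, which is the core of the online decision-making the abstract attributes to the DRL agent; the per-RRH penalty is what pushes the learned solution toward sparse activation.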