A Deep Q-Learning Bisection Approach for Power Allocation in Downlink NOMA Systems
| Published in | IEEE Communications Letters, Vol. 26, no. 2, pp. 316-320 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE, 01.02.2022 |
| Subjects | |
| Summary | In this work, we study the weighted sum-rate maximization problem for a downlink non-orthogonal multiple access (NOMA) system. With power and data-rate constraints, this problem is generally non-convex, so a novel solution based on the deep reinforcement learning (DRL) framework is proposed for the power allocation problem. While previous DRL-based work restricts the solution to a limited set of possible power levels, the proposed DRL framework is specifically designed to find a solution at a much finer granularity, emulating continuous power allocation. Simulation results show that the proposed power allocation method outperforms two baseline algorithms. Moreover, it achieves almost 85% of the weighted sum-rate obtained by a far more complex genetic algorithm whose performance approaches that of exhaustive search. |
| ISSN | 1089-7798, 1558-2558 |
| DOI | 10.1109/LCOMM.2021.3130102 |
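The objective described in the summary, weighted sum-rate under successive interference cancellation (SIC) in downlink NOMA, can be sketched numerically. The following is a minimal illustrative computation, not the paper's DRL algorithm; the function name, the decreasing-channel-gain ordering, and the unit-noise default are assumptions made for the example.

```python
import math

def weighted_sum_rate(powers, gains, weights, noise=1.0):
    """Weighted sum-rate of one downlink NOMA cluster with SIC.

    Illustrative sketch only. Users are ordered by decreasing
    channel gain (gains[0] >= gains[1] >= ...). Under SIC, user k
    decodes and cancels the signals of weaker-channel users, so the
    residual interference it sees comes from the power allocated to
    stronger-channel users (indices < k).
    """
    assert len(powers) == len(gains) == len(weights)
    total = 0.0
    for k, (p, g, w) in enumerate(zip(powers, gains, weights)):
        interference = g * sum(powers[:k])        # uncancelled signals
        sinr = g * p / (interference + noise)     # per-user SINR
        total += w * math.log2(1.0 + sinr)        # weighted Shannon rate
    return total
```

A power-allocation scheme such as the one in the paper would then search over `powers`, subject to a total-power budget and per-user rate constraints, to maximize this quantity.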