Energy Saving in 6G O-RAN Using DQN-based xApp
Main Authors |  |
---|---|
Format | Journal Article |
Language | English |
Published | 23.09.2024 |
Summary: | Open Radio Access Network (RAN) is a transformative paradigm that supports openness, interoperability, and intelligence, with the O-RAN architecture being the most widely recognized framework in academia and industry. In the context of Open RAN, the importance of Energy Saving (ES) is heightened, especially given the trend toward network densification in the sixth generation of mobile networks (6G). Traditional energy-saving methods in RAN struggle with the increasing dynamics of the network. This paper proposes using Reinforcement Learning (RL), a subset of Machine Learning (ML), to improve ES. We present a novel deep RL method for ES in 6G O-RAN, implemented as an xApp (ES-xApp). We developed two Deep Q-Network (DQN)-based ES-xApps: ES-xApp-1 uses RSS and User Equipment (UE) geolocations, while ES-xApp-2 uses only RSS. The proposed models significantly outperformed heuristic and baseline xApps, especially with more than 20 UEs. With 50 UEs, 50% of Radio Cards (RCs) were switched off, compared to 17% with the heuristic algorithm. We observed that more informative inputs may lead to more stable training and results. This paper highlights the necessity of energy conservation in wireless networks and offers practical strategies and evidence for future research and industry practices. |
---|---|
DOI: | 10.48550/arxiv.2409.15098 |
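
The record above only carries the abstract, but the approach it describes (a DQN whose state is built from RSS measurements, optionally with UE geolocations, and whose actions switch Radio Cards on or off) can be illustrated with a minimal sketch. The names and choices below (QNetwork, EsXappDqn, a toggle-one-RC action space, the reward handling) are assumptions for illustration only, not the authors' implementation; the sketch shows only the generic DQN machinery such an ES-xApp would need.

```python
# Minimal, hypothetical sketch of a DQN-based ES-xApp decision loop.
# Assumed (not from the paper): state = per-RC RSS vector, optionally
# concatenated with UE (x, y) geolocations; action = index of one Radio
# Card (RC) whose on/off state is toggled; reward = energy saved minus a
# coverage/QoS penalty reported by the environment (e.g. the near-RT RIC).
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """Small MLP mapping a state vector to one Q-value per RC-toggle action."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class EsXappDqn:
    """Hypothetical DQN agent holding the pieces an ES-xApp would need."""

    def __init__(self, state_dim: int, n_actions: int,
                 gamma: float = 0.99, lr: float = 1e-3, eps: float = 0.1):
        self.q = QNetwork(state_dim, n_actions)
        self.target_q = QNetwork(state_dim, n_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.optim = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, state) -> int:
        # Epsilon-greedy choice of which RC to toggle on/off.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())

    def remember(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def train_step(self, batch_size: int = 64):
        # One-step TD update on a random minibatch from the replay buffer.
        if len(self.replay) < batch_size:
            return
        s, a, r, s_next, done = zip(*random.sample(self.replay, batch_size))
        s = torch.as_tensor(np.asarray(s, dtype=np.float32))
        a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.as_tensor(r, dtype=torch.float32)
        s_next = torch.as_tensor(np.asarray(s_next, dtype=np.float32))
        done = torch.as_tensor(done, dtype=torch.float32)

        q_sa = self.q(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            # TD target from a periodically synced target network.
            target = r + self.gamma * (1.0 - done) * self.target_q(s_next).max(1).values
        loss = F.smooth_l1_loss(q_sa, target)
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

    def sync_target(self):
        # Copy online weights into the target network every few episodes.
        self.target_q.load_state_dict(self.q.state_dict())
```

In a near-RT RIC deployment, the surrounding loop would feed the agent measurement reports as the state, apply the chosen RC on/off toggle via a control message, and compute the reward from the resulting energy and QoS figures; act(), remember(), and train_step() are the only points that loop would call.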