Deep Reinforcement Learning-Based Resource Allocation for 5G Machine-Type Communication in Active Distribution Networks

Bibliographic Details
Published in: Mobile Networks and Management, pp. 39-59
Main Authors: Li, Qiyue; Cheng, Hong; Yang, Yangzhao; Tang, Haochen; Liu, Zhi; Cao, Yangjie; Sun, Wei
Format: Book Chapter
Language: English
Published: Cham: Springer International Publishing
Series: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

Summary: With the development of smart grids and active distribution networks (ADNs), reliable, low-latency communication is key to advanced applications such as energy management and situational awareness (SA). However, as the amount of data and location information to be collected grows, ensuring real-time transmission of sampled data has become a challenge. In addition, the operating environment of ADNs is complex, and external interference degrades transmission reliability. In particular, power emergencies occur at random, and the highly reliable transmission of the emergency data they generate has attracted much attention. Although repeated data transmission in 5G machine-type communication (MTC) can improve reliability, how to dynamically allocate communication resources according to the data to be transmitted and the external interference remains an open problem. To this end, we propose a repeated-transmission scheme that counteracts the influence of external interference on the outage probability of emergency data transmission; the scheme is modeled as a dynamic programming problem that maximizes energy efficiency. First, external interference is incorporated into the calculation of the transmission outage probability of smart meters (SMs), and the number of repetitions of emergency data appears as the exponent of that probability; it is chosen as the smallest value that meets the target outage probability. Then, to allocate resources in real time in a changing environment, we propose a deep reinforcement learning method whose fast inference allows resources to be allocated more quickly, reducing data-transmission delay. Simulation results verify the superiority of the proposed scheme.
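As a rough illustration of the repetition-count rule sketched in the abstract (the per-attempt outage probability and target value below are placeholders of ours, not notation or figures from the chapter): if each of K repeated transmissions of an emergency packet fails independently with probability p, the overall outage probability is p^K, so the smallest K that meets a target outage probability follows directly.

```python
import math

def required_repetitions(p_attempt: float, p_target: float) -> int:
    """Smallest number of repetitions K with p_attempt**K <= p_target.

    Assumes each repeated transmission fails independently with the same
    probability p_attempt (already reflecting external interference);
    the chapter's actual outage model may differ from this sketch.
    """
    if not (0.0 < p_attempt < 1.0 and 0.0 < p_target < 1.0):
        raise ValueError("probabilities must lie in (0, 1)")
    return math.ceil(math.log(p_target) / math.log(p_attempt))

# e.g. a 5% per-attempt outage and a 1e-4 target need 4 repetitions,
# since 0.05**3 ≈ 1.25e-4 > 1e-4 >= 0.05**4.
print(required_repetitions(0.05, 1e-4))  # -> 4
```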
Bibliography: This work is supported in part by grants from the National Natural Science Foundation of China (52077049, 51877060), Anhui Provincial Natural Science Foundation (2008085UD04), Fundamental Research Funds for the Central Universities (PA2020GDJQ0027, JZ2019HGTB0089, PA2019GDQT0006), and the 111 Project (BP0719039).
ISBN: 9783030947620; 3030947629
ISSN: 1867-8211; 1867-822X
DOI: 10.1007/978-3-030-94763-7_4