Meta-Hierarchical Reinforcement Learning (MHRL)-Based Dynamic Resource Allocation for Dynamic Vehicular Networks


Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 71, No. 4, pp. 3495-3506
Main Authors: He, Ying; Wang, Yuhang; Lin, Qiuzhen; Li, Jianqiang
Format: Journal Article
Language: English
Published: New York: IEEE, 01.04.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

More Information
Summary: With the rapid development of vehicular networks, the demand for networking, computing, and caching resources is growing quickly, so allocating multiple resources effectively and efficiently in dynamic vehicular networks is critically important. Most existing work on resource management in vehicular networks assumes static network conditions. In this paper, we propose a general framework that enables fast-adaptive resource allocation for dynamic vehicular environments. Specifically, we model the dynamics of the vehicular environment as a series of related Markov Decision Processes (MDPs) and combine hierarchical reinforcement learning with meta-learning. This allows the proposed framework to adapt quickly to a new environment by fine-tuning only the top-level master network, while the low-level sub-networks continue to produce the correct resource allocation policy. Extensive simulation results show the effectiveness of the proposed framework: it adapts quickly to different scenarios and significantly improves resource management performance in dynamic vehicular networks.
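The core idea in the abstract, fine-tuning only a top-level master network while the low-level sub-networks stay frozen, can be illustrated with a toy sketch. This is not the paper's actual architecture: `MasterPolicy`, the three fixed sub-policies, and the scalar "allocation fraction" environment are all hypothetical stand-ins, and the update used here is a plain expected policy-gradient step on a one-state problem rather than the paper's MHRL training procedure.

```python
import math

# Three frozen "sub-networks": fixed policies that each map a state to
# a resource-allocation fraction. They are never updated below.
SUB_POLICIES = [lambda s: 0.2, lambda s: 0.5, lambda s: 0.9]

class MasterPolicy:
    """Softmax selector over sub-policies. Its weights are the ONLY
    parameters touched when adapting to a new environment."""
    def __init__(self, n):
        self.w = [0.0] * n

    def probs(self):
        m = max(self.w)
        e = [math.exp(x - m) for x in self.w]
        z = sum(e)
        return [x / z for x in e]

def reward(env_target, allocation):
    # Toy utility: peaks when the chosen allocation matches the new
    # environment's (unknown) optimal fraction.
    return 1.0 - abs(env_target - allocation)

def adapt(master, env_target, steps=200, lr=0.5):
    # Exact expected policy-gradient update with an expected-reward
    # baseline; only master.w changes, the sub-policies are frozen.
    rewards = [reward(env_target, sp(None)) for sp in SUB_POLICIES]
    for _ in range(steps):
        p = master.probs()
        b = sum(pi * ri for pi, ri in zip(p, rewards))  # baseline
        for j in range(len(master.w)):
            master.w[j] += lr * p[j] * (rewards[j] - b)
    return master

# "New environment" whose optimal allocation is 0.5: the master learns
# to route to the matching frozen sub-policy.
m = adapt(MasterPolicy(3), env_target=0.5)
print(max(range(3), key=lambda j: m.probs()[j]))  # prints 1
```

The point of the sketch is the parameter split: adaptation touches a handful of master weights instead of retraining every sub-policy, which is what makes the fine-tuning step fast when the environment changes.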
ISSN: 0018-9545
1939-9359
DOI: 10.1109/TVT.2022.3146439