RAVEN: Resource Allocation Using Reinforcement Learning for Vehicular Edge Computing Networks

Bibliographic Details
Published in: IEEE Communications Letters, Vol. 26, No. 11, pp. 2636-2640
Main Authors: Yanhao Zhang; Nalam Venkata Abhishek; Mohan Gurusamy
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.11.2022

Summary: Vehicular Edge Computing (VEC) enables vehicles to offload tasks to roadside units (RSUs) to improve task performance and user experience. However, blindly offloading a vehicle's tasks is not necessarily efficient: such a scheme may overload the resources available at the RSU, increase the number of rejected requests, and decrease system utility by engaging more servers than required. This letter proposes a Markov Decision Process (MDP)-based Reinforcement Learning (RL) method to allocate resources at the RSU. The RL algorithm trains the RSU to optimize its resource allocation by adapting the allocation scheme to the total task demand generated by the traffic. The results demonstrate the effectiveness of the proposed method.
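To make the abstract's idea concrete, the following is a minimal toy sketch of tabular Q-learning for RSU server allocation. Every specific here is an illustrative assumption, not the letter's actual formulation: the state (a discretized total task demand level), the action (number of servers to engage), the reward (tasks served minus server cost minus a rejection penalty), and all constants are invented for the example.

```python
import random

# Illustrative assumptions only -- not the letter's actual MDP formulation.
DEMAND_LEVELS = 5        # discretized total task demand (state space)
MAX_SERVERS = 4          # servers the RSU can engage (action space)
CAPACITY_PER_SERVER = 2  # tasks one engaged server can accept
SERVER_COST = 0.5        # utility cost of engaging one server
REJECT_PENALTY = 1.0     # penalty per rejected task

def reward(demand, servers):
    """Utility: tasks served, minus server cost, minus rejection penalty."""
    capacity = servers * CAPACITY_PER_SERVER
    served = min(demand, capacity)
    rejected = demand - served
    return served - SERVER_COST * servers - REJECT_PENALTY * rejected

def train(episodes=20000, alpha=0.05, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over a toy traffic model (i.i.d. demand levels)."""
    rng = random.Random(seed)
    q = [[0.0] * (MAX_SERVERS + 1) for _ in range(DEMAND_LEVELS)]
    for _ in range(episodes):
        s = rng.randrange(DEMAND_LEVELS)           # observe current demand
        if rng.random() < eps:                     # epsilon-greedy exploration
            a = rng.randrange(MAX_SERVERS + 1)
        else:
            a = max(range(MAX_SERVERS + 1), key=lambda x: q[s][x])
        r = reward(s, a)
        s2 = rng.randrange(DEMAND_LEVELS)          # demand evolves with traffic
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    return q

q = train()
# Greedy policy: servers to engage for each observed demand level.
policy = [max(range(MAX_SERVERS + 1), key=lambda a: q[s][a])
          for s in range(DEMAND_LEVELS)]
```

Under these toy assumptions, the learned policy engages just enough servers to serve the demand, matching the abstract's point that engaging more servers than required lowers system utility.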
ISSN: 1089-7798, 1558-2558
DOI: 10.1109/LCOMM.2022.3196711