Multi-agent deep reinforcement learning-based partial offloading and resource allocation in vehicular edge computing networks
Published in | Computer communications Vol. 234; p. 108081
Main Authors | , , ,
Format | Journal Article
Language | English
Published | Elsevier B.V., 15.03.2025
Summary: The advancement of intelligent transportation systems and the increase in vehicle density have led to a need for more efficient computation offloading in vehicular edge computing networks (VECNs). However, traditional approaches are unable to meet the service demand of each vehicle due to limited resources and overload. Therefore, in this paper, we aim to minimize the long-term computation overhead (including delay and energy consumption) of vehicles. First, we propose combining the computational resources of local vehicles, idle vehicles, and roadside units (RSUs) to formulate a computation offloading strategy and resource allocation scheme based on multi-agent deep reinforcement learning (MADRL), which optimizes the dual offloading decisions for both total and residual tasks as well as system resource allocation for each vehicle. Furthermore, due to the high mobility of vehicles, we propose a task migration strategy (TMS) algorithm based on communication distance and vehicle movement speed to avoid failure of computation result delivery when a vehicle moves out of the communication range of an RSU service node. Finally, we formulate the computation offloading problem for vehicles as a Markov game process and design a Partial Offloading and Resource Allocation algorithm based on the collaborative Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (PORA-MATD3). PORA-MATD3 optimizes the offloading decisions and resource allocation for each vehicle through a centralized training and distributed execution approach. Simulation results demonstrate that PORA-MATD3 significantly reduces the computational overhead of each vehicle compared to other baseline algorithms in VECN scenarios.
ISSN: 0140-3664
DOI: 10.1016/j.comcom.2025.108081
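As a rough illustration of the kind of distance- and speed-based migration check the abstract describes, the sketch below flags an offloaded task for migration when the requesting vehicle is expected to leave its serving RSU's coverage before the computation result can be returned. The class names, parameters, and the 1-D constant-speed coverage model (`Vehicle`, `RSU`, `residence_time`, `needs_migration`) are illustrative assumptions, not the authors' TMS algorithm.

```python
# Minimal sketch of a distance/speed-based migration check, in the spirit of
# the TMS idea in the abstract. All names, parameters, and the coverage model
# are illustrative assumptions, not the paper's actual algorithm.
from dataclasses import dataclass
import math


@dataclass
class Vehicle:
    x: float       # current position along the road (m)
    speed: float   # movement speed (m/s); positive means driving toward larger x


@dataclass
class RSU:
    x: float       # RSU position along the road (m)
    radius: float  # communication coverage radius (m)


def residence_time(vehicle: Vehicle, rsu: RSU) -> float:
    """Time (s) until the vehicle leaves the RSU's coverage, assuming a 1-D
    road and constant speed (an illustrative simplification)."""
    if vehicle.speed == 0:
        return math.inf
    if vehicle.speed > 0:
        distance_to_exit = max(rsu.x + rsu.radius - vehicle.x, 0.0)
    else:
        distance_to_exit = max(vehicle.x - (rsu.x - rsu.radius), 0.0)
    return distance_to_exit / abs(vehicle.speed)


def needs_migration(vehicle: Vehicle, serving_rsu: RSU,
                    estimated_completion_time: float) -> bool:
    """Flag the offloaded task for migration to the next RSU when the vehicle
    is expected to leave coverage before the result can be delivered."""
    return estimated_completion_time > residence_time(vehicle, serving_rsu)


if __name__ == "__main__":
    v = Vehicle(x=180.0, speed=20.0)             # 72 km/h
    rsu = RSU(x=0.0, radius=250.0)               # 250 m coverage
    task_time = 4.5                              # s: queueing + computing + result return
    print(needs_migration(v, rsu, task_time))    # True: 70 m left -> 3.5 s residence time
```

In the paper's setting, such a residence-time check would feed into each agent's offloading decision within the Markov game formulation; it is kept standalone here purely for clarity.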