Coded Distributed Computing for Vehicular Edge Computing With Dual-Function Radar Communication

Bibliographic Details
Published in: IEEE Transactions on Vehicular Technology, Vol. 73, No. 10, pp. 15318-15331
Main Authors: Nguyen, Tien Hoa; Thi, Hoai Linh Nguyen; Le Hoang, Hung; Tan, Junjie; Luong, Nguyen Cong; Xiao, Sa; Niyato, Dusit; Kim, Dong In
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2024

Summary: In this paper, we propose a coded distributed computing (CDC)-based vehicular edge computing (VEC) framework. Therein, a task vehicle equipped with a dual-function radar communication (DFRC) module uses its communication function to offload its computing tasks to nearby service vehicles and its radar function to detect targets. However, due to the high mobility of the vehicles, the relative distance between the task vehicle and each service vehicle varies frequently over time, which causes a straggler effect and results in high offloading latency and even offloading disruption. To address this issue, the CDC based on the $(m, k)$-maximum distance separable (MDS) code is used at the communication function of the task vehicle. We then formulate an optimization problem that aims to i) minimize the overall computing latency, ii) minimize the offloading cost, and iii) maximize the radar range, subject to the offloading latency requirement and the connection duration. To achieve these objectives, we optimize the fractions of power allocated to the radar and communication functions as well as the MDS parameters. However, the highly dynamic vehicular environment makes the problem intractable, particularly due to the uncertainty of computing resources and the stochasticity of networking resources. Thus, we propose to use deep reinforcement learning (DRL) algorithms with regularization to address this issue. To enhance the generalizability of the proposed DRL algorithms, we further develop a transfer learning algorithm that allows the task vehicle to quickly learn the optimal policy in new environments. Simulation results show the effectiveness of the proposed scheme in terms of radar range, computation latency, and offloading cost. Furthermore, transfer learning is demonstrated to greatly boost the convergence speed.
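The straggler mitigation described in the abstract rests on the (m, k)-MDS property: a task is split into k sub-tasks, encoded into m coded sub-tasks distributed to m service vehicles, and the overall result is recoverable from any k returned outputs, so up to m - k slow or disconnected vehicles can be tolerated. The sketch below is not taken from the paper; it illustrates the idea with the standard coded matrix-vector multiplication example using a Vandermonde encoder, and the helper names mds_encode and mds_decode are hypothetical.

```python
# Minimal sketch (assumed example, not the paper's implementation):
# (m, k)-MDS coded matrix-vector multiplication. The task A @ x is split
# into k row blocks, encoded into m coded blocks, and decoded from the
# first k workers that respond, ignoring up to m - k stragglers.
import numpy as np

def mds_encode(A, m, k):
    """Split A into k row blocks and produce m Vandermonde-coded blocks."""
    blocks = np.split(A, k, axis=0)  # assumes the row count is divisible by k
    G = np.vander(np.arange(1, m + 1), k, increasing=True)  # m x k generator
    return [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(m)]

def mds_decode(partial_results, worker_ids, m, k):
    """Recover the k uncoded results from any k coded worker outputs."""
    G = np.vander(np.arange(1, m + 1), k, increasing=True)
    G_sub = G[worker_ids, :]                  # k x k submatrix, invertible
    coded = np.stack(partial_results)         # stack the k returned vectors
    decoded = np.linalg.solve(G_sub, coded)   # undo the linear encoding
    return np.concatenate(decoded)

# Toy usage: m = 5 service vehicles, results from any k = 3 suffice.
m, k = 5, 3
A, x = np.random.randn(6, 4), np.random.randn(4)
coded_blocks = mds_encode(A, m, k)
# Suppose only vehicles 0, 2, and 4 return before the deadline (2 stragglers).
fast = [0, 2, 4]
partials = [coded_blocks[i] @ x for i in fast]
y = mds_decode(partials, fast, m, k)
assert np.allclose(y, A @ x)
```

The Vandermonde generator is used here because any k of its rows (with distinct nodes) form an invertible k x k matrix, which is exactly the MDS guarantee the framework exploits when choosing the parameters m and k against the offloading latency and connection-duration constraints.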
ISSN: 0018-9545, 1939-9359
DOI: 10.1109/TVT.2024.3409554