Relay-Assisted Edge Computing Framework for Dynamic Resource Allocation and Multiple-Access Task Processing in Digital Divide Regions
Published in: IEEE Internet of Things Journal, Vol. 11, no. 21, pp. 35724-35741
Main Authors: , , , , , ,
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2024
Summary: In digital divide regions, edge computing can improve the performance of application services for Internet of Things (IoT) devices. However, lagging information and communication technology (ICT) infrastructure results in a congested access spectrum and an imbalanced computational load. Moreover, the mobility of IoT devices further exacerbates fluctuations in communication link quality and frequent changes of access position. How to satisfy the reliable service requirements of devices in a heterogeneous environment with multiscale constraints must therefore be considered appropriately and comprehensively. In this article, we model a relay-assisted multiaccess edge computing (MEC) framework that employs multihop transmission to enable cross-domain service coverage. Under this framework, we formulate a quantitative model to characterize the communication and computation processes within task migration, and derive analytical results for service latency. To improve access resource efficiency, we adopt a joint nonorthogonal multiple access (NOMA) scheme to extend the transmission dimension, and employ proportional fairness to dynamically allocate resources. In addition, we propose a multiagent deep reinforcement learning (DRL) algorithm to optimize long-term task offloading scheduling, addressing the optimization problem of maximizing system throughput efficiency, and we improve the action exploration and output dimensions of the DRL to achieve convergence and performance gains. Simulation and analytical results show that the proposed algorithm outperforms the comparison algorithms on the key performance indicators.
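The summary mentions proportional fairness for dynamically allocating access resources. As a rough illustration only (this is the generic proportional-fair scheduler, not the paper's joint NOMA formulation; the function name, the `beta` smoothing factor, and the per-slot rate inputs are all assumptions), a minimal single-channel sketch might look like:

```python
def proportional_fair_schedule(rates_per_slot, beta=0.1):
    """Per slot, serve the device with the largest ratio of
    instantaneous achievable rate to smoothed average throughput."""
    n = len(rates_per_slot[0])
    avg = [1e-6] * n  # tiny initial averages avoid division by zero
    schedule = []
    for rates in rates_per_slot:
        # PF metric: instantaneous rate / exponentially averaged throughput
        k = max(range(n), key=lambda i: rates[i] / avg[i])
        schedule.append(k)
        # update every device's average; only the served one gets rate > 0
        for i in range(n):
            served = rates[i] if i == k else 0.0
            avg[i] = (1 - beta) * avg[i] + beta * served
    return schedule, avg

# Two devices over three slots: after device 0 is served once, its
# grown average throughput lets device 1 win the next two slots.
schedule, _ = proportional_fair_schedule([[2, 1], [2, 1], [1, 2]])
print(schedule)  # → [0, 1, 1]
```

The key design point of proportional fairness is the ratio metric: a device with a good channel but a long recent service history loses priority to a starved device, trading a little total throughput for fairness across devices.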
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3439332