Cooperative Computation Offloading for Multi-Access Edge Computing in 6G Mobile Networks via Soft Actor Critic

Bibliographic Details
Published in: IEEE Transactions on Network Science and Engineering, Vol. 11, No. 6, pp. 5601-5614
Main Authors: Sun, Chuan; Wu, Xiongwei; Li, Xiuhua; Fan, Qilin; Wen, Junhao; Leung, Victor C. M.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2024
Summary: Driven by numerous emerging mobile services and applications, multi-access edge computing (MEC) is regarded as a promising technique to alleviate core network congestion and reduce service latency for the massive Internet of Things (IoT) over 6G mobile networks. However, the infrastructure of conventional MEC suffers from the lack of a cloud server (CS) or of cooperation among multiple edge servers (ESs), rendering it less capable of handling large-scale computation tasks in ultra-dense smart environments. This paper investigates the issues of cooperative computation offloading for MEC in the 6G era. The proposed MEC system enables edge-cloud and edge-edge cooperation to address the limitations of a single ES and the nonuniform distribution of computation task arrivals among multiple ESs. To support low-latency services, we model the cooperative computation offloading problem as a Markov decision process and propose two intelligent computation offloading algorithms based on Soft Actor Critic (SAC), i.e., centralized SAC offloading and decentralized SAC offloading. Evaluation results show that the proposed algorithms outperform existing computation offloading algorithms in terms of service latency.
ISSN: 2327-4697; 2334-329X
DOI: 10.1109/TNSE.2021.3076795