A deep Q-network-based edge service offloading in cloud–edge–terminal environment

Bibliographic Details
Published in: The Journal of Supercomputing, Vol. 81, No. 6
Main Authors: Cao, Buqing; Yi, Yating; Zeng, Zilong; Ye, Hongfan; Tang, Bing
Format: Journal Article
Language: English
Published: New York: Springer US, 19.04.2025

Summary: The advancement of mobile edge computing enables edge devices to efficiently utilize resources through optimized scheduling, providing robust computational support for diverse service requests. However, mobile intelligent terminals have limited computational resources and often rely on edge servers for task processing. Each edge server, in turn, has finite resources, necessitating task offloading to other servers when resource demand exceeds availability; this offloading aims to improve service response efficiency. However, edge devices tend to prioritize their own performance, often neglecting load balancing across servers. To address this issue, this paper proposes a deep reinforcement learning-based method for offloading edge services in mobile edge environments. The method considers both the offloading demands of mobile terminals and the service reception capacities of edge servers to achieve efficient offloading, load balancing, and reduced communication delay. First, the offloading demands and the reception capacities of idle edge servers are mathematically modeled. The offloading scenario is then framed as a multi-objective optimization problem subject to various constraints. Finally, deep reinforcement learning is applied to construct a Markov decision process for iterative optimization, yielding low-delay, load-balanced offloading solutions. Experimental results show that, compared with the baseline methods Random, Top-K, K-means, PSO, and Q-learning, the proposed method improves the load standard deviation by 30.85%, 17.42%, 12.69%, 22.32%, and 11.64%, respectively, verifying its effectiveness.
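The summary describes the approach only at a high level; the paper's actual system model, reward function, and network architecture are not reproduced in this record. As a rough illustration of the general technique, the sketch below shows how a DQN-based offloading loop of this kind could look: the state combines the current server loads with the incoming task size, the action selects a target server, and the reward jointly penalizes a delay proxy and the load standard deviation (the paper's two objectives). Everything concrete here (N_SERVERS, CAPACITY, the offload environment, network sizes, and reward weights) is an assumption for illustration, not the authors' implementation.

```python
# Hedged sketch of a DQN offloading loop; all constants and the toy
# environment are assumptions, not taken from the paper.
import random
import numpy as np
import torch
import torch.nn as nn

N_SERVERS = 5        # hypothetical number of idle edge servers
CAPACITY = 10.0      # hypothetical per-server load capacity

class QNet(nn.Module):
    """Maps (server loads, incoming task size) to per-server Q-values."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

def offload(loads, task, action, alpha=0.5):
    """Place `task` on server `action`; reward penalizes delay and imbalance."""
    loads = loads.copy()
    loads[action] += task
    delay = loads[action] / CAPACITY   # crude proxy for response delay
    imbalance = loads.std()            # load standard deviation across servers
    return -(alpha * delay + (1.0 - alpha) * imbalance), loads

qnet = QNet(N_SERVERS + 1, N_SERVERS)
tgt = QNet(N_SERVERS + 1, N_SERVERS)
tgt.load_state_dict(qnet.state_dict())
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
buffer, gamma, eps = [], 0.95, 0.2

for episode in range(200):
    loads, task = np.zeros(N_SERVERS), random.uniform(0.5, 2.0)
    for _ in range(30):                               # 30 task arrivals
        state = np.append(loads, task).astype(np.float32)
        if random.random() < eps:                     # epsilon-greedy action
            action = random.randrange(N_SERVERS)
        else:
            with torch.no_grad():
                action = int(qnet(torch.from_numpy(state)).argmax())
        reward, loads = offload(loads, task, action)
        task = random.uniform(0.5, 2.0)               # next arriving task
        nxt = np.append(loads, task).astype(np.float32)
        buffer.append((state, action, reward, nxt))
        if len(buffer) >= 64:                         # experience-replay update
            s, a, r, s2 = map(np.array, zip(*random.sample(buffer, 64)))
            q = qnet(torch.from_numpy(s)).gather(
                1, torch.from_numpy(a).long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                y = torch.from_numpy(r).float() + \
                    gamma * tgt(torch.from_numpy(s2)).max(1).values
            loss = nn.functional.mse_loss(q, y)
            opt.zero_grad(); loss.backward(); opt.step()
    tgt.load_state_dict(qnet.state_dict())            # sync target network
```

Tracking `loads.std()` over training episodes is the natural way to check whether the agent learns balanced placements, which is the same load-standard-deviation metric the summary reports.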
ISSN: 1573-0484
DOI: 10.1007/s11227-025-07228-4