Design and Analysis of an Efficient Multiresource Allocation System for Cooperative Computing in Internet of Things

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 9, No. 16, pp. 14463-14477
Main Authors: Zhang, Xiaoqi; Cheng, Hongju; Yu, Zhiyong; Xiong, Neal N.
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 15.08.2022
Summary: By migrating tasks from end devices to the edge or cloud, cooperative computing in the Internet of Things can support time-sensitive, high-dimensional, and complex applications while utilizing existing resources such as network bandwidth, computing power, and storage capacity. How to design an efficient multiresource allocation system is a significant research problem. In this article, we design a multiresource allocation system for cooperative computing in the Internet of Things based on deep reinforcement learning, redefining the latency calculation models for communication, computation, and caching to account for practical interference factors such as Gaussian noise and data loss. The proposed system uses actor-critic as its base model to rapidly approximate the optimal policy by updating the parameters of the actor and the critic along their respective gradient directions. A balance control parameter is introduced to fit the actual learning rate to the desired learning rate. At the same time, a double experience pool is used to limit the exploration direction toward the optimal policy, which reduces the time and space complexity of the solution and improves the adaptability and reliability of the scheme. Experiments demonstrate that the multiresource allocation algorithm based on deep reinforcement learning (DRL-MRA) performs well in terms of average service latency under resource-constrained conditions, and the improvement grows more significant as the network size increases.
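The record does not reproduce the paper's redefined latency models. A common formulation in the mobile-edge-computing literature, consistent with the summary's mention of Gaussian noise and data loss, is (this exact form is an assumption, not taken from the paper):

T_{comm} = \frac{D}{(1-p)\, B \log_2\!\left(1 + \frac{P|h|^2}{N_0 B}\right)}, \qquad T_{comp} = \frac{C}{f}

where D is the task size in bits, B the channel bandwidth, P the transmit power, |h|^2 the channel gain, N_0 the power spectral density of the Gaussian noise, p the data-loss probability, C the required CPU cycles, and f the allocated CPU frequency.

The learning procedure described in the summary can likewise be made concrete with a short sketch. The Python code below is a minimal illustration under stated assumptions, not the authors' implementation: the class names, the reward-threshold split of the double experience pool, and the scalar beta scaling the policy update (standing in for the balance control parameter) are all inferred from the summary.

# Minimal actor-critic sketch with a double experience pool.
# All names and design choices here are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class Actor(nn.Module):
    """Policy network: maps a system state to a distribution over allocation actions."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim))
    def forward(self, state):
        return F.softmax(self.net(state), dim=-1)

class Critic(nn.Module):
    """Value network: estimates the expected return (e.g., negative service latency) of a state."""
    def __init__(self, state_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, state):
        return self.net(state)

class DoubleReplayBuffer:
    """Two pools: one for ordinary transitions and one for high-reward transitions
    that steer exploration toward the current best policy (an assumption about how
    the paper's double experience pool is organized)."""
    def __init__(self, capacity=10000, reward_threshold=0.0):
        self.normal = deque(maxlen=capacity)
        self.elite = deque(maxlen=capacity)
        self.reward_threshold = reward_threshold
    def push(self, transition):
        state, action, reward, next_state = transition
        pool = self.elite if reward > self.reward_threshold else self.normal
        pool.append(transition)
    def sample(self, batch_size, elite_frac=0.5):
        k = min(int(batch_size * elite_frac), len(self.elite))
        batch = random.sample(self.elite, k) if k else []
        batch += random.sample(self.normal, min(batch_size - k, len(self.normal)))
        return batch

def update(actor, critic, actor_opt, critic_opt, batch, gamma=0.99, beta=1.0):
    """One actor-critic step; beta plays the role of the balance control parameter,
    scaling the effective learning rate of the policy-gradient update."""
    states = torch.stack([t[0] for t in batch])
    actions = torch.tensor([t[1] for t in batch])
    rewards = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    next_states = torch.stack([t[3] for t in batch])

    # Critic update: regress state values toward one-step bootstrapped targets.
    values = critic(states).squeeze(-1)
    with torch.no_grad():
        targets = rewards + gamma * critic(next_states).squeeze(-1)
    advantage = targets - values
    critic_loss = F.mse_loss(values, targets)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor update: policy gradient weighted by the advantage, scaled by beta.
    log_probs = torch.log(actor(states).gather(1, actions.unsqueeze(1)).squeeze(1))
    actor_loss = -(beta * log_probs * advantage.detach()).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

Sampling a fixed fraction of each training batch from the high-reward pool is one plausible way to limit the exploration direction toward the optimal policy; the paper may organize its two pools differently.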
ISSN: 2327-4662
DOI: 10.1109/JIOT.2021.3094507