Delay and energy aware task scheduling mechanism for fog-enabled IoT applications: A reinforcement learning approach

Bibliographic Details
Published in: Computer Networks (Amsterdam, Netherlands: 1999), Vol. 224, p. 109603
Main Authors: Raju, Mekala Ratna; Mothku, Sai Krishna
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.04.2023

Summary: With the expansion of Internet of Things (IoT) devices and their applications, the demand for executing complex, deadline-aware tasks is growing rapidly. Fog-enabled IoT architectures have evolved to execute such tasks at the fog layer; however, fog computing devices have limited power supply and computation resources compared to cloud devices, so executing tasks with stringent deadlines while reducing service latency and the energy usage of fog resources is a difficult challenge in delay-sensitive applications. This paper presents an effective task scheduling strategy that allocates fog computing resources to IoT requests while respecting request deadlines and resource availability. The scheduling problem is first formulated as a mixed-integer nonlinear program (MINLP) that minimizes the energy consumption of the fog resources and the service time of the tasks subject to deadline and resource-availability constraints. To address the high dimensionality of the task space in a dynamic environment, a fuzzy-based reinforcement learning (FRL) mechanism is employed: tasks are first prioritized using fuzzy logic, and the prioritized tasks are then scheduled with an on-policy reinforcement learning technique, which achieves a higher long-term reward than the Q-learning approach. Evaluation results show that the proposed technique outperforms existing algorithms, with improvements of up to 23% in service latency and 18% in energy consumption.
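The abstract does not reproduce the paper's formulation, so the following is only an illustrative sketch of the kind of MINLP objective it describes: a weighted combination of fog-node energy and task service time, minimized over a binary assignment of tasks to fog nodes under deadline and capacity constraints. All symbols (the assignment variable x_ij, energy and time costs E_ij and T_ij, deadline d_i, demand r_i, and capacity C_j) are assumptions for illustration, not notation from the paper:

```latex
\min_{x}\; \sum_{i \in \mathcal{T}} \sum_{j \in \mathcal{F}}
    x_{ij}\,(\alpha E_{ij} + \beta T_{ij})
\quad \text{s.t.} \quad
\sum_{j \in \mathcal{F}} x_{ij} = 1 \;\; \forall i \in \mathcal{T}, \qquad
\sum_{j \in \mathcal{F}} x_{ij} T_{ij} \le d_i \;\; \forall i \in \mathcal{T}, \qquad
\sum_{i \in \mathcal{T}} x_{ij} r_i \le C_j \;\; \forall j \in \mathcal{F}, \qquad
x_{ij} \in \{0, 1\}.
```

With constant coefficients this would reduce to a linear program over binaries; in the paper the energy and delay terms presumably depend nonlinearly on the allocation, which is what makes the problem an MINLP.

Likewise, the abstract names only fuzzy prioritization followed by an on-policy reinforcement learning technique; SARSA is the canonical on-policy counterpart to Q-learning, so a minimal sketch under that assumption might look as follows. The membership functions, weights, state encoding, and reward below are all placeholders, not the paper's design:

```python
import random
from collections import defaultdict

def fuzzy_priority(slack, size, max_slack=10.0, max_size=100.0):
    """Toy fuzzy prioritization: tighter deadlines and smaller tasks
    score higher. The real rule base and memberships are in the paper."""
    urgency = max(0.0, 1.0 - slack / max_slack)    # less slack -> more urgent
    lightness = max(0.0, 1.0 - size / max_size)    # smaller task -> lighter
    return 0.7 * urgency + 0.3 * lightness         # assumed aggregation weights

class SarsaScheduler:
    """On-policy TD scheduler: state = coarse (priority, load) encoding,
    action = index of the fog node a task is dispatched to."""
    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)                # Q[(state, action)]
        self.n_nodes = n_nodes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:         # epsilon-greedy exploration
            return random.randrange(self.n_nodes)
        return max(range(self.n_nodes), key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next, a_next):
        # SARSA target uses the action actually taken next (on-policy),
        # unlike Q-learning's max over all next actions.
        target = reward + self.gamma * self.q[(s_next, a_next)]
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])
```

A reward of roughly -(alpha * observed_delay + beta * consumed_energy), with a large penalty for a missed deadline, would align the learned policy with the MINLP objective above; the paper's actual reward shaping and state representation are not given in the abstract.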
ISSN: 1389-1286
EISSN: 1872-7069
DOI: 10.1016/j.comnet.2023.109603