A green, secure, and deep intelligent method for dynamic IoT-edge-cloud offloading scenarios


Bibliographic Details
Published in: Sustainable Computing: Informatics and Systems, Vol. 38, p. 100859
Main Authors: Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.04.2023
Summary: To fulfill people's expectations for smart and user-friendly Internet of Things (IoT) applications, the amount of processing is expanding rapidly, and task latency constraints are becoming increasingly stringent. On the other hand, the limited battery capacity of IoT objects severely affects the user experience. Energy Harvesting (EH) technology enables green energy to offer a continuous energy supply for IoT objects. Combined with the maturation of edge platforms and the development of parallel computing, it provides a solid assurance for the proper functioning of resource-constrained IoT objects. The Markov Decision Process (MDP) and Deep Learning (DL) are used in this work to solve dynamic online/offline IoT-edge offloading scenarios. The suggested system can operate in both offline and online contexts and meets the user's quality-of-service expectations. We also investigate a blockchain scenario in which edge and cloud cooperate on task offloading to address the tradeoff between limited processing power and high latency while ensuring data integrity during the offloading process. We provide a double Q-learning solution to the MDP problem that selects the best admissible offline offloading policies. During exploration, Transfer Learning (TL) is employed to speed up convergence by reducing pointless exploration. Although the recently popularized Deep Q-Network (DQN) can address the space-complexity issue by replacing the huge Q-table of standard Q-learning with a Deep Neural Network (DNN), its learning speed may still be insufficient for IoT applications. In light of this, our work introduces a novel learning algorithm known as deep Post-Decision State (PDS)-learning, which combines the PDS-learning approach with the classic DQN. The system components in the proposed system can be dynamically chosen and modified to decrease object energy usage and delay.
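The tabular double Q-learning update mentioned in the summary can be sketched as follows. This is a minimal illustration only: the single-state "offload vs. local" toy environment, the reward values, and all hyperparameters are hypothetical assumptions for demonstration, not the paper's actual MDP, state space, or reward model.

```python
import random

def double_q_learning(states, actions, step, episodes=500,
                      alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular double Q-learning: two Q-tables (qa, qb) are updated
    against each other to reduce the overestimation bias of standard
    Q-learning."""
    qa = {(s, a): 0.0 for s in states for a in actions}
    qb = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = random.choice(states)
        done = False
        while not done:
            # epsilon-greedy action selection on the sum of both tables
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: qa[(s, x)] + qb[(s, x)])
            s2, r, done = step(s, a)
            # randomly pick one table to update, using the other
            # table's value estimate for the bootstrapped target
            if random.random() < 0.5:
                best = max(actions, key=lambda x: qa[(s2, x)])
                target = r + (0.0 if done else gamma * qb[(s2, best)])
                qa[(s, a)] += alpha * (target - qa[(s, a)])
            else:
                best = max(actions, key=lambda x: qb[(s2, x)])
                target = r + (0.0 if done else gamma * qa[(s2, best)])
                qb[(s, a)] += alpha * (target - qb[(s, a)])
            s = s2
    return qa, qb

# Hypothetical one-step offloading MDP: offloading a task yields a
# higher reward (lower delay/energy cost) than local execution.
def step(s, a):
    return s, (1.0 if a == "offload" else 0.2), True

random.seed(0)
qa, qb = double_q_learning([0], ["offload", "local"], step)
```

After training, the combined estimate `qa + qb` favors the "offload" action in this toy setting. The paper's deep PDS-learning replaces these tables with a DNN and exploits post-decision state structure, which this tabular sketch does not capture.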
On average, the proposed technique outperforms multiple benchmarks, reducing delay by 4.5%, job failure rate by 5.7%, cost by 4.6%, computational overhead by 6.1%, and energy consumption by 3.9%.
•Putting forward a new paradigm for incorporating edge and blockchain into IoT to assure safe offloading.
•Offering a novel dynamic method for offline/online offloading for different IoT-edge scenarios.
•Providing the EH component to extend battery life and enhance offloading effectiveness.
•Decreasing energy usage, decreasing computational delay, and minimizing task failure rates to improve system efficiency.
ISSN: 2210-5379
DOI: 10.1016/j.suscom.2023.100859