An Energy-Efficient Routing Protocol with Reinforcement Learning in Software-Defined Wireless Sensor Networks
Published in | Sensors (Basel, Switzerland) Vol. 23; no. 20; p. 8435 |
---|---|
Format | Journal Article |
Language | English |
Published | Basel: MDPI AG, 13.10.2023 |
Summary: | The enormous increase in heterogeneous wireless devices operating in real-time Internet of Things (IoT) applications presents new challenges, including heterogeneity, reliability, and scalability. To address these issues effectively, a novel architecture has been introduced that combines Software-Defined Wireless Sensor Networks (SDWSN) with the IoT, known as the SDWSN-IoT. However, wireless IoT devices deployed in such systems face limitations in energy supply, unpredicted network changes, and quality-of-service requirements. Such challenges necessitate careful design of the underlying routing protocol, as failure to address them often results in constantly disconnected networks with poor performance. In this paper, we present an intelligent, energy-efficient multi-objective routing protocol based on Reinforcement Learning with Dynamic Objective Selection (DOS-RL). The primary goals of the proposed DOS-RL routing scheme are to optimize energy consumption in IoT networks, a paramount concern given the limited energy reserves of wireless IoT devices, and to adapt seamlessly to sudden network changes, mitigating disruptions and optimizing overall network performance. The algorithm considers correlated objectives with informative shaped rewards to accelerate the learning process. Through diverse simulations, we demonstrate improved energy efficiency and fast adaptation to unexpected network changes, with an enhanced packet delivery ratio and reduced data delivery latency compared to traditional routing protocols such as Open Shortest Path First (OSPF) and multi-objective Q-routing for Software-Defined Networks (SDN-Q). (An illustrative sketch of the multi-objective Q-routing idea appears after this record.) |
---|---|
ISSN: | 1424-8220 |
DOI: | 10.3390/s23208435 |
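
The abstract does not give implementation details of DOS-RL, so the following is only a minimal illustrative sketch of the general idea it names: multi-objective Q-routing with a dynamic objective selection rule and informative reward shaping. The objectives (energy, delay), the residual-energy selection rule, the shaping weight, and all function names and thresholds are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch (assumed, not the paper's implementation):
# one Q-table per objective, a hypothetical energy-based rule that picks
# the active objective, and a shaped reward that blends in the correlated
# objective's cost to speed up learning.
import random
from collections import defaultdict

OBJECTIVES = ("energy", "delay")        # assumed correlated objectives
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration

# Q[objective][(node, destination)][next_hop] -> estimated cost-to-go
Q = {obj: defaultdict(lambda: defaultdict(float)) for obj in OBJECTIVES}

def select_objective(residual_energy, low_energy_threshold=0.3):
    """Hypothetical Dynamic Objective Selection rule: prioritise energy
    when the forwarding node's residual energy is low, otherwise delay."""
    return "energy" if residual_energy < low_energy_threshold else "delay"

def choose_next_hop(node, dst, neighbors, objective):
    """Epsilon-greedy next-hop choice under the currently active objective."""
    if random.random() < EPSILON:
        return random.choice(neighbors)
    q = Q[objective][(node, dst)]
    return min(neighbors, key=lambda nh: q[nh])   # lower estimated cost is better

def shaped_reward(base_cost, correlated_cost, weight=0.2):
    """Assumed form of informative reward shaping: mix a small fraction of
    the correlated objective's cost into the active objective's cost."""
    return base_cost + weight * correlated_cost

def update(node, dst, next_hop, neighbors_of_next_hop, costs):
    """Q-routing style update for every objective after forwarding a packet
    node -> next_hop toward dst; `costs` maps objective -> observed per-hop
    cost (e.g., energy spent, queueing plus transmission delay)."""
    for obj in OBJECTIVES:
        other = "delay" if obj == "energy" else "energy"
        r = shaped_reward(costs[obj], costs[other])
        # Estimated remaining cost from the next hop onward.
        future = (min(Q[obj][(next_hop, dst)][nh] for nh in neighbors_of_next_hop)
                  if neighbors_of_next_hop else 0.0)
        q_old = Q[obj][(node, dst)][next_hop]
        Q[obj][(node, dst)][next_hop] = q_old + ALPHA * (r + GAMMA * future - q_old)
```

In this sketch a node (or the SDN controller acting on its behalf) would call `select_objective` with its residual energy, pick a neighbor with `choose_next_hop`, and call `update` once the per-hop costs are observed; keeping separate Q-tables while sharing cost information through the shaped reward is one simple way to exploit correlated objectives, which is the property the abstract highlights.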