Q-learning and Simulated Annealing-based Routing for Software-defined Networks

Bibliographic Details
Published in: 2022 International Conference on Computer and Applications (ICCA), pp. 1-10
Main Authors: Kandil, Marwa; Awad, Mohamad Khattar; Alotaibi, Eiman Mohammed; Mohammadi, Reza
Format: Conference Proceeding
Language: English
Published: IEEE, 20.12.2022

Summary: With the increasing dependence on cloud services, the demand for high data rates has been growing exponentially. Therefore, power-hungry data centers have been expanding to accommodate this growth with the required network services. Many Internet Service Providers (ISPs) are targeting greener communication while balancing the trade-off between energy efficiency and satisfaction of quality-of-service (QoS) requirements. Software-defined networking (SDN) is a networking paradigm that separates the network control plane from the data plane, allowing the network controller to have a full overview of the network status and complete control of traffic routing. This paper investigates the application of recent developments in reinforcement learning (RL) techniques to optimize routing in software-defined networks. Specifically, we developed a simulated annealing Q-learning (SAQL) routing algorithm that provides an optimized balance between energy consumption and QoS-requirements satisfaction in real time for software-defined networks. The algorithm is implemented and tested on the Open Network Operating System (ONOS) controller, which facilitates evaluation of the algorithm's performance in real networks. A comparison study between the proposed SAQL algorithm, the classical Q-learning ε-greedy exploration algorithm, and traditional OSPF was carried out on two topologies. Results show that SAQL achieved around 60% less average control power than the standard OSPF and ε-greedy approaches while maintaining a relatively low latency of 0.280 ms in the NSFNET topology. Simulation results confirm that the SAQL routing algorithm managed to balance the trade-off between energy-aware and QoS-aware routing.
DOI: 10.1109/ICCA56443.2022.10039651
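
The combination described in the summary — Q-learning over next-hop choices with a simulated-annealing exploration schedule — can be sketched as follows. The paper's exact cost model, topology, and annealing schedule are not given in the abstract, so the toy graph, link weights, and parameter values below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

# Hypothetical toy topology as an adjacency list; the per-link weights stand
# in for a combined energy/QoS cost (lower is better).
GRAPH = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 1.0, "D": 5.0},
    "C": {"A": 4.0, "B": 1.0, "D": 1.0},
    "D": {},
}

def saql(graph, src, dst, episodes=3000, alpha=0.1, gamma=0.9,
         t0=1.0, cooling=0.999, t_min=1e-3, seed=7):
    """Q-learning with a simulated-annealing exploration schedule:
    next hops are sampled from a Boltzmann (softmax) distribution over
    Q-values whose temperature decays each episode, so early episodes
    explore broadly and later episodes exploit the learned routes."""
    rng = random.Random(seed)
    # Q(u, v) estimates the discounted cost-to-destination of forwarding
    # at node u via neighbor v (lower is better).
    q = {(u, v): 0.0 for u in graph for v in graph[u]}
    temp = t0
    for _ in range(episodes):
        node, steps = src, 0
        while node != dst and steps < 50:
            nbrs = list(graph[node])
            # Boltzmann exploration, shifted by the best Q-value so the
            # exponentials stay numerically well-behaved at low temperature.
            q_min = min(q[(node, v)] for v in nbrs)
            weights = [math.exp(-(q[(node, v)] - q_min) / temp) for v in nbrs]
            nxt = rng.choices(nbrs, weights=weights)[0]
            cost = graph[node][nxt]
            future = 0.0 if nxt == dst else min(q[(nxt, w)] for w in graph[nxt])
            q[(node, nxt)] += alpha * (cost + gamma * future - q[(node, nxt)])
            node, steps = nxt, steps + 1
        temp = max(temp * cooling, t_min)  # anneal the exploration temperature
    return q

def greedy_path(q, graph, src, dst):
    """Follow the learned Q-values greedily from src to dst."""
    path, node, seen = [src], src, {src}
    while node != dst:
        node = min(graph[node], key=lambda v: q[(node, v)])
        if node in seen:  # guard against loops from an unconverged table
            break
        path.append(node)
        seen.add(node)
    return path

q = saql(GRAPH, "A", "D")
print(greedy_path(q, GRAPH, "A", "D"))
```

The annealed temperature plays the role that a fixed ε plays in ε-greedy exploration: instead of exploring at a constant rate forever, the schedule gradually shifts probability mass toward the currently best-known next hop, which is the trade-off the paper evaluates against classical ε-greedy and OSPF.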