Multimodal cloud resources utilization forecasting using a Bidirectional Gated Recurrent Unit predictor based on a power efficient Stacked denoising Autoencoders

Bibliographic Details
Published in Alexandria Engineering Journal Vol. 61, no. 12, pp. 11565-11577
Main Authors Ikhlasse Hamzaoui; Benjamin Duthil; Vincent Courboulay; Hicham Medromi
Format Journal Article
Language English
Published Elsevier B.V. 01.12.2022

Summary: To reap the advantages of many continually growing cloud services, cloud industries should adopt smart and holistic resource scheduling strategies. By deploying efficient deep learning technologies, many potential issues of chaotic cloud traffic may be solved. Toward efficient cloud instance rightsizing and scheduling, we adopt in this paper a new Bidirectional Gated Recurrent Unit (BiGRU) predictor based on power-efficient Stacked Denoising Autoencoders (SDAE) to simultaneously forecast future hourly virtual CPU, memory, and storage utilizations. Using various data ranges of resources under three AWS instance families, the best forecasting results achieved so far were mean RMSE values of [1.83, 30.78, 331.11] and mean MAE values of [1.37, 21.63, 245.13] when predicting future vCPU, memory, and storage utilizations, respectively. The proposed model also proved its precision, stability, and superiority over the three benchmark models considered (SDAE-GRU, SDAE-LSTM, and BiGRU). Given that power consumption measurement is neglected in most related studies, we finally validated the proposed predictor's power efficiency by also measuring its real-time power consumption (in watts) and temperature throughout the training process. The proposed predictor decreased the average consumed power by 5% compared to a classical sparse AE-BiGRU.
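
The architecture named in the abstract (a stacked denoising autoencoder feeding a bidirectional GRU that jointly forecasts the next hour's vCPU, memory, and storage utilizations) can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the layer sizes, the Gaussian corruption noise, the 24-hour input window, and end-to-end (rather than layer-wise pre-trained) training are all illustrative assumptions.

import torch
import torch.nn as nn

class SDAE(nn.Module):
    """Denoising autoencoder applied to each hourly step of the 3 resource signals."""
    def __init__(self, n_features=3, hidden=16, code=8, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, code), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def forward(self, x):
        # Corrupt the input during training only (denoising objective).
        noisy = x + self.noise_std * torch.randn_like(x) if self.training else x
        code = self.encoder(noisy)
        return self.decoder(code), code

class SDAEBiGRU(nn.Module):
    """SDAE encoder followed by a bidirectional GRU and a linear forecasting head."""
    def __init__(self, n_features=3, code=8, gru_hidden=32):
        super().__init__()
        self.sdae = SDAE(n_features=n_features, code=code)
        self.bigru = nn.GRU(code, gru_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * gru_hidden, n_features)   # next-hour vCPU/mem/storage

    def forward(self, x):                    # x: (batch, window, n_features)
        _, z = self.sdae(x)                  # compressed, denoised representation
        out, _ = self.bigru(z)               # forward + backward temporal features
        return self.head(out[:, -1, :])      # forecast for the next hour

model = SDAEBiGRU()
window = torch.rand(4, 24, 3)                # 4 samples, 24 hourly steps, 3 resources
print(model(window).shape)                   # torch.Size([4, 3])

The sketch only reproduces the overall data flow; it does not capture the paper's training procedure, hyperparameters, or the specific role the SDAE plays in reducing power consumption.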
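
The abstract also reports real-time power (in watts) and temperature logged throughout training. One common way to collect such readings on an NVIDIA GPU is via the NVML bindings; the snippet below is only an assumed measurement setup (the record does not state which tool or hardware counters the authors used), with an arbitrary one-second polling interval.

import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU

def sample_power_and_temp():
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0               # mW -> W
    temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    return power_w, temp_c

# Poll once per second while the training loop runs in another process/thread.
for _ in range(5):
    p, t = sample_power_and_temp()
    print(f"power={p:.1f} W  temperature={t} C")
    time.sleep(1)

pynvml.nvmlShutdown()

Averaging such samples over the whole training duration yields the kind of mean consumed power the abstract compares (5% lower than a sparse AE-BiGRU baseline).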
ISSN:1110-0168
DOI:10.1016/j.aej.2022.05.017