AutoMLPoweredNetworks: Automated Machine Learning Service Provisioning for NexGen Networks

Bibliographic Details
Published in: GLOBECOM 2023 - 2023 IEEE Global Communications Conference, pp. 6432-6437
Main Authors: Singh, Sukhdeep; Jain, Ashish; Thaliath, Joseph; Hong, Moonki; Yoon, Seungil
Format: Conference Proceeding
Language: English
Published: IEEE, 04.12.2023
Summary: This paper presents a framework that automates the provisioning of Machine Learning (ML) services, tailoring the ML package to factors such as service profiles, regional resource usage patterns, operator-defined KPIs, and current ML resource utilization in the network. The framework dynamically and automatically groups cells across Base Stations (BS) using similarity-metric correlation algorithms. Within each group it trains only a representative cell, using the best available ML model selected automatically, and the representative cell's trained model is then applied to the remaining BS in the same group. The solution was evaluated extensively on real-world operator data from 5G networks covering a wide range of network KPIs. The results show substantial resource savings in ML server processing time, memory consumption, and server utilization, together with a significant reduction in the number of ML trainings required, while maintaining high ML prediction accuracy. On average, the solution achieves a 39.94% reduction in ML server processing time, a 60.46% reduction in ML server memory, a 75.11% reduction in server utilization, and 649 fewer ML trainings for 5G operator data across various network KPIs. These results demonstrate that the framework optimizes resource allocation without compromising the accuracy of ML predictions.
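The paper does not publish its implementation, but the grouping step it describes can be illustrated with a minimal sketch: cells are compared by correlating their KPI time series, cells whose correlation with a group's representative exceeds a threshold join that group, and only the first (representative) cell of each group would be trained. All names, the greedy strategy, and the Pearson-correlation choice here are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def group_cells(kpi_series: dict, threshold: float = 0.9):
    """Greedily group cells by Pearson correlation of their KPI traces.

    Illustrative stand-in for the paper's similarity-metric correlation
    step; the first cell in each group acts as the representative.
    """
    groups = []  # each group is a list of cell ids; groups[i][0] is the representative
    for cell, series in kpi_series.items():
        placed = False
        for group in groups:
            rep_series = kpi_series[group[0]]
            corr = np.corrcoef(series, rep_series)[0, 1]
            if corr >= threshold:
                group.append(cell)  # reuse the representative's model for this cell
                placed = True
                break
        if not placed:
            groups.append([cell])  # start a new group with this cell as representative
    return groups

# Toy KPI traces: cells A and B follow the same pattern, C does not.
t = np.linspace(0, 4 * np.pi, 100)
kpis = {
    "cell_A": np.sin(t),
    "cell_B": np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size),
    "cell_C": np.cos(t),
}
groups = group_cells(kpis)
# cell_A and cell_B land in one group; only cell_A (the representative)
# would be trained, and its model reused for cell_B.
```

Under this sketch, training cost scales with the number of groups rather than the number of cells, which is the mechanism behind the reported reduction in ML trainings.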
ISSN: 2576-6813
DOI: 10.1109/GLOBECOM54140.2023.10437119