Energy-efficient cloud-edge collaborative model integrating digital twins and machine learning for scalable and adaptive distributed networks
Published in: Sustainable Computing: Informatics and Systems, Vol. 47, p. 101157
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.09.2025
Summary: The exponential growth of distributed networks, as seen in smart grids, IoT, and industrial automation, has increased the demand for effective and adaptive optimization systems. Traditional cloud solutions, while successful in providing global insights and scalability, often suffer from high latency and limited responsiveness, whereas edge-based models excel at instant decision making but lack global synergy and scale. To overcome these constraints, this paper proposes a novel Cloud-Edge Collaborative Optimization Framework that leverages recent machine learning and digital twin techniques to scale up distributed networks. The model relies on Long Short-Term Memory (LSTM) networks at the edge layer to forecast traffic in real time and make local decisions, and on Multi-Agent Reinforcement Learning (MARL) at the cloud layer to coordinate resources globally. Digital twins provide real-time flexibility, dynamic simulation, and feedback for continuous improvement. The proposed model was extensively tested on real network datasets. Averaged over 10 simulation runs, latency fell by 50 % compared to cloud-only architectures, from a baseline of 35.34 ms to 17.67 ms, and resource utilization improved by 23 % compared to edge-only setups. Experiments on real-world IoT traffic data, with throughput of 50–100 Mbps and a packet delivery ratio (PDR) consistently above 90 %, demonstrate that the network operates robustly under changing conditions; results were averaged across runs for reliability and significance. This study provides a strong foundation for future work on digital-twin-enhanced cloud-edge architectures.
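The paper's implementation is not published with this record, but the edge-layer idea summarized above (an LSTM forecasting traffic so a node can make local decisions) can be illustrated compactly. The sketch below, in PyTorch, is a minimal, assumption-laden rendering: `EdgeTrafficForecaster`, `decide_locally`, the window shape, and the 80 Mbps capacity threshold are hypothetical stand-ins, not the authors' code.

```python
import torch
import torch.nn as nn

class EdgeTrafficForecaster(nn.Module):
    """One-step-ahead traffic forecaster for an edge node.

    The paper gives no architecture details; this is a minimal
    single-layer LSTM regressor over a sliding window of traffic
    samples (e.g. Mbps readings). All sizes are illustrative.
    """

    def __init__(self, n_features: int = 1, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # predict next traffic value

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (batch, timesteps, n_features)
        _, (h_n, _) = self.lstm(window)   # final hidden state per sequence
        return self.head(h_n[-1])         # (batch, 1) forecast

# Hypothetical local decision rule: offload to the cloud layer only when
# the forecast exceeds what this edge node can serve on its own.
def decide_locally(model: EdgeTrafficForecaster,
                   window: torch.Tensor,
                   edge_capacity_mbps: float = 80.0) -> str:
    with torch.no_grad():
        forecast = model(window).item()
    return "offload" if forecast > edge_capacity_mbps else "serve_at_edge"

# Example: a 20-step window of one traffic feature for a single node.
model = EdgeTrafficForecaster()
print(decide_locally(model, torch.randn(1, 20, 1)))
```

In this reading, a node would call `decide_locally` once per sampling interval with its most recent traffic window; any load the forecast says exceeds local capacity becomes a candidate for cloud-level coordination.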
Highlights:
• Integrates cloud and edge computing to optimize distributed network performance while reducing energy consumption.
• Uses LSTM at the edge for real-time traffic prediction and MARL at the cloud for global resource coordination (sketched below).
• Achieves a 50 % reduction in latency and a 23 % improvement in resource utilization compared to traditional models.
• Enables real-time simulation, feedback, and dynamic adaptation for continuous network optimization.
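The cloud side (MARL coordinating resources, with a digital twin supplying simulation and feedback) can be sketched the same way. The paper does not specify its MARL algorithm or twin model, so the example below assumes independent tabular Q-learning agents with a shared latency-based team reward and a toy cost-model twin; `DigitalTwinSim`, `CloudAgent`, the discrete action set, and the latency formula are invented for illustration.

```python
import random

class DigitalTwinSim:
    """Toy digital twin: scores a proposed allocation against demand.

    Stand-in for the paper's (unpublished) twin; here the 'twin' simply
    estimates latency as a function of how far allocation trails demand.
    """
    def __init__(self, demands):
        self.demands = demands  # per-region demand, e.g. Mbps

    def latency_ms(self, allocation):
        shortfall = sum(max(d - a, 0.0) for d, a in zip(self.demands, allocation))
        return 17.0 + 0.5 * shortfall  # illustrative cost model

class CloudAgent:
    """One agent per region; tabular, epsilon-greedy, stateless.

    Independent Q-learning is used purely for illustration; the paper
    does not name its MARL scheme.
    """
    ACTIONS = [25.0, 50.0, 75.0, 100.0]  # candidate allocations (Mbps)

    def __init__(self, lr=0.2, eps=0.1):
        self.q = [0.0] * len(self.ACTIONS)
        self.lr, self.eps = lr, eps

    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(self.ACTIONS))
        return max(range(len(self.ACTIONS)), key=self.q.__getitem__)

    def learn(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

def coordinate(twin, agents, rounds=500):
    """Cloud loop: agents propose allocations, the twin feeds back latency."""
    for _ in range(rounds):
        actions = [a.act() for a in agents]
        allocation = [CloudAgent.ACTIONS[i] for i in actions]
        reward = -twin.latency_ms(allocation)  # lower latency => higher reward
        for agent, action in zip(agents, actions):
            agent.learn(action, reward)
    return allocation

twin = DigitalTwinSim(demands=[60.0, 90.0, 40.0])
print("learned allocation:", coordinate(twin, [CloudAgent() for _ in range(3)]))
```

Sharing one team reward (negative simulated latency) is the simplest way to make independent learners pull toward a global objective; more elaborate MARL schemes, such as centralized training with decentralized execution, would slot into the same twin-in-the-loop structure.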
ISSN: 2210-5379
DOI: 10.1016/j.suscom.2025.101157