Distributed edge cooperation and data collection for digital twins of wide-areas

Bibliographic Details
Published in: China communications, Vol. 20, no. 8, pp. 177-197
Main Authors: Kang, Mancong; Li, Xi; Ji, Hong; Zhang, Heli
Format: Journal Article
Language: English
Published: China Institute of Communications, 01.08.2023
Affiliation: Key Laboratory of Universal Wireless Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China

More Information
Summary:Digital twins for wide-areas (DT-WA) can model and predict the physical world with high fidelity by incorporating an artificial intelligence (AI) model. However, the AI model requires an energy-consuming updating process to keep pace with the dynamic environment, an area where studies are still in their infancy. To reduce the updating energy, this paper proposes a distributed edge cooperation and data collection scheme. The AI model is partitioned into multiple sub-models deployed on different edge servers (ESs) co-located with access points across the wide area, so that it can be updated distributively using local sensor data. To further reduce the updating energy, ESs can choose to become either updating helpers or recipients of their neighboring ESs, based on their sensor quantities and baseline updating convergence. Helpers share their updated sub-model parameters with neighboring recipients, reducing the recipients' updating workload. To minimize system energy under updating-convergence and latency constraints, we further propose an algorithm that lets ESs distributively optimize their cooperation identities, collect sensor data, and allocate wireless and computing resources. The algorithm comprises several constraint-relaxation approaches, in which two child optimization problems are solved, and a large-scale multi-agent deep reinforcement learning algorithm. Simulations show that the proposed scheme can efficiently reduce updating energy compared with the baselines.
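The abstract only sketches how an ES picks its cooperation identity and how helpers share parameters; the exact decision rule and the multi-agent deep reinforcement learning formulation are given in the full text. As a rough illustration only, the minimal Python sketch below assumes a hypothetical threshold rule: an ES with enough local sensors and a small convergence gap acts as a helper and pushes its updated sub-model parameters to neighboring recipients. All names (EdgeServer, choose_identity, share_parameters) and thresholds are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative only, not the paper's algorithm): each edge server
# (ES) holds a sub-model and decides, per updating round, whether to act as a
# "helper" (updates locally and shares parameters) or a "recipient" (reuses a
# neighboring helper's shared parameters to cut its own updating workload).

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EdgeServer:
    es_id: int
    sensor_count: int               # number of local sensors providing fresh data
    convergence_gap: float          # estimated gap to the required updating convergence
    neighbors: List[int] = field(default_factory=list)
    identity: str = "undecided"     # becomes "helper" or "recipient"
    sub_model: Dict[str, float] = field(default_factory=dict)  # toy parameter vector

def choose_identity(es: EdgeServer, min_sensors: int = 10, max_gap: float = 0.05) -> str:
    """Hypothetical threshold rule: an ES with enough local data and a small
    convergence gap updates on its own and can help neighbors; otherwise it
    becomes a recipient."""
    es.identity = "helper" if (es.sensor_count >= min_sensors
                               and es.convergence_gap <= max_gap) else "recipient"
    return es.identity

def share_parameters(helper: EdgeServer, recipient: EdgeServer) -> None:
    """Helper pushes its updated sub-model parameters to a neighboring recipient,
    so the recipient skips part of its own energy-consuming update."""
    recipient.sub_model.update(helper.sub_model)

# Toy round over three ES/access-point sites.
servers = {
    1: EdgeServer(1, sensor_count=25, convergence_gap=0.02, neighbors=[2],
                  sub_model={"w0": 0.7, "w1": -0.1}),
    2: EdgeServer(2, sensor_count=4, convergence_gap=0.20, neighbors=[1, 3]),
    3: EdgeServer(3, sensor_count=15, convergence_gap=0.04, neighbors=[2],
                  sub_model={"w0": 0.6, "w1": 0.3}),
}

for es in servers.values():
    choose_identity(es)

for es in servers.values():
    if es.identity == "recipient":
        helpers = [servers[n] for n in es.neighbors if servers[n].identity == "helper"]
        if helpers:
            # take parameters from the neighboring helper closest to convergence
            share_parameters(min(helpers, key=lambda h: h.convergence_gap), es)

print({i: (s.identity, s.sub_model) for i, s in servers.items()})
```

In the paper, the identity choice, sensor-data collection, and wireless/computing resource allocation are instead optimized jointly by the proposed large-scale multi-agent deep reinforcement learning algorithm; the threshold rule above is only a stand-in for that decision.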
ISSN:1673-5447
DOI:10.23919/JCC.fa.2023-0202.202308