TrimCaching: Parameter-Sharing AI Model Caching in Wireless Edge Networks

Next-generation mobile networks are expected to facilitate fast AI model downloading to end users. By caching models on edge servers, mobile networks can deliver models to end users with low latency, resulting in a paradigm called edge model caching. In this paper, we develop a novel model placement...

Bibliographic Details
Published in: Proceedings of the International Conference on Distributed Computing Systems, pp. 36-46
Main Authors: Qu, Guanqiao; Lin, Zheng; Liu, Fangming; Chen, Xianhao; Huang, Kaibin
Format: Conference Proceeding
Language: English
Published: IEEE, 23.07.2024
