Queuing Models for Different Caching Schemes by Caching Partial Files
Published in | 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI) pp. 1234 - 1238 |
---|---|
Main Authors | , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.09.2018 |
Subjects | |
Summary: | We consider a server with a finite number of files, each of which can be partitioned, connected through a shared link to multiple users equipped with cache memory. We study the performance of the system using the average queuing delay at the server as the performance metric for different system models. In the uncoded system, users cache an equal fraction of each file instead of whole files. Requests for a particular file are merged until service of that request begins. We compare system models having a single request queue and multiple request queues, with different percentages of the files cached, under the uncoded caching scheme. We also propose a novel system model with a coded caching scheme that segregates file requests by grouping them according to the popularity profile of the files. When the server is busy, pending file requests are queued in their respective group queues instead of traditional per-user request queues. In this scheme, each user caches a different percentage of a file based on that file's popularity, such that a larger cache fraction is allocated to the most popular files. We compare the proposed system model against the multiple-user-request-queue model with coded multicasting, and draw conclusions about the average queuing delay at the server by comparing the uncoded and coded caching schemes. |
---|---|
DOI: | 10.1109/ICACCI.2018.8554431 |
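The request-merging idea from the abstract (duplicate requests for a file are coalesced until service of that file begins) can be illustrated with a toy single-queue simulation. Everything here is an illustrative assumption, not the paper's model: Poisson arrivals, a deterministic service time, a Zipf-like popularity profile, and the specific parameter values are all made up for the sketch.

```python
import random

def simulate(merge, n_files=5, n_arrivals=50000, lam=1.0, service=0.8,
             zipf_s=1.0, seed=1):
    """Toy single-queue server; optionally merge pending duplicate requests.

    Assumed model (not from the paper): Poisson arrivals at rate `lam`,
    fixed service time `service`, file popularity ~ 1/rank**zipf_s.
    Returns the average queuing delay (service start minus arrival).
    """
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** zipf_s for i in range(n_files)]
    t = 0.0                 # current arrival time
    next_free = 0.0         # time at which the server next becomes free
    pending_start = {}      # file -> scheduled service start of its queued request
    total_delay = 0.0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)
        f = rng.choices(range(n_files), weights=weights)[0]
        if merge and pending_start.get(f, -1.0) > t:
            # A request for this file is queued but not yet in service:
            # join it instead of occupying a new service slot.
            start = pending_start[f]
        else:
            start = max(t, next_free)
            next_free = start + service
            pending_start[f] = start
        total_delay += start - t
    return total_delay / n_arrivals
```

Because merged requests consume no extra service slots, running both variants on the same arrival sequence (same seed) shows merging reducing the average queuing delay, e.g. `simulate(True) <= simulate(False)`.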