Online Caching with Fetching Cost for Arbitrary Demand Pattern: A Drift-Plus-Penalty Approach

Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1-5
Main Authors: P, Shashank; Bharath, B. N.
Format: Conference Proceeding
Language: English
Published: IEEE, 04.06.2023
ISSN: 2379-190X
DOI: 10.1109/ICASSP49357.2023.10097152

Summary: In this paper, we investigate the problem of caching in a single-server setting from the stochastic optimization viewpoint. The goal is to optimize the time-average cache hit subject to a time-average constraint on the fetching cost and the cache capacity constraint when the demands are non-stationary and possibly correlated across time. We propose a modified Drift-Plus-Penalty (DPP) algorithm in which, at each time slot, we greedily minimize the difference between the fetching cost and an estimated cache hit multiplied by a factor $V > 0$. Since the problem does not admit an equilibrium optimal solution, we use a $T$-slot lookahead metric, benchmarking the performance of the proposed algorithm against a genie-aided cache hit that has access to the demands of the future $T$ slots. Dividing time into $R$ blocks of $T$ slots each, we show that with probability at least $1 - \delta$, the cache hit of the proposed algorithm relative to the genie scales as $\mathcal{O}\left( \frac{T^2 + T\log R}{V\sqrt{R}} + \frac{T}{V} \right) + \mathrm{mse}_{R,T}$, while a fetching cost of $\mathcal{O}\left( \frac{V\log(1/\delta)}{R} \right)$ is achievable, where $\mathrm{mse}_{R,T}$ is the mean squared error (MSE) of the predictor. These bounds suggest the following for better performance: (i) the MSE of the predictor should be small; (ii) $V$ should be chosen large to achieve a better cache hit, but this results in a higher fetching cost; (iii) $R$ should be larger to compensate for a larger $V$, i.e., more time is required to achieve a lower fetching cost. We corroborate these findings using a real-world dataset, and show that the proposed algorithm outperforms some well-known caching algorithms.
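The per-slot greedy rule described in the summary can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual algorithm: it assumes a virtual queue `Q` that tracks violation of the time-average fetching-cost budget (a standard DPP construction) and a per-file demand predictor supplying the estimated cache hit; all function and variable names here are hypothetical.

```python
import numpy as np

def dpp_caching_step(cache, predicted_demand, fetch_cost, Q, V, capacity):
    """One slot of a DPP-style greedy cache update (illustrative sketch).

    Each file is scored by V times its predicted demand (estimated cache
    hit) minus the queue-weighted fetching cost it would incur if it is
    not already cached; the top-`capacity` files are kept.
    """
    n = len(predicted_demand)
    in_cache = np.zeros(n)
    in_cache[list(cache)] = 1.0
    # Penalty term: fetching cost applies only to files not currently cached.
    score = V * predicted_demand - Q * fetch_cost * (1.0 - in_cache)
    new_cache = set(np.argsort(-score)[:capacity].tolist())
    slot_cost = sum(fetch_cost[f] for f in new_cache - cache)
    return new_cache, slot_cost

def update_queue(Q, slot_cost, budget):
    # Virtual queue enforcing the time-average fetching-cost constraint:
    # it grows when the slot's cost exceeds the budget and drains otherwise.
    return max(Q + slot_cost - budget, 0.0)
```

A larger `V` weights the predicted-hit term more heavily (better cache hit, more fetching), while the queue `Q` pushes back when the fetching-cost budget is being exceeded, mirroring the trade-off described in the summary.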