Decoupling NDN caches via CCndnS: Design, analysis, and application

Bibliographic Details
Published in: Computer Communications, Vol. 151, pp. 338-354
Main Authors: Rezazad, Mostafa; Tay, Y.C.
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.02.2020
Summary: In-network caching is considered a vital part of the Internet for future applications (e.g., the Internet of Things). One proposal that has attracted interest in recent years, Named Data Networking (NDN), aims to facilitate in-network caching by locating content by name. However, the efficiency of in-network caching has been questioned by experts. Data correlation among caches builds strong dependencies between caches at the edge and in the core, and that dependency makes analyzing network performance difficult.

This paper proposes CCndnS (Content Caching strategy for NDN with Skip), a caching policy that breaks the dependencies among caches and thus facilitates the design of an efficient data placement algorithm. Ideally, each cache, regardless of its location in the network, should receive an independent set of requests; otherwise, only misses from downstream caches make their way to upstream caches, a filtering effect that induces correlation among the caches. CCndnS breaks a file into smaller segments and spreads them along the path between requester and publisher, so that the head of the file (the first segment) is cached at the edge router close to the requester and the tail far from the requester, toward the content provider. A request for a segment skips the other caches on its path and searches only the cache holding the segment of interest. This reduces the number of futile checks on caches, and thus the delay from memory accesses. The mechanism also decouples the caches, yielding a simple analytical model for cache performance in the network.

We illustrate an application of the model: enforcing a Service Level Agreement (SLA) between a content provider and the caching system proposed in this paper. The model can be used for cache provisioning for two purposes: (1) to specify the cache size to be reserved for specific content in order to reach some desired performance; for instance, if the client of an SLA requires a 50% cache hit rate for its content at each router, the model can determine the cache size that must be reserved to reach that rate; and (2) to calculate the effect of such reservations on other content that uses the routers covered by the SLA. The design, analysis, and application are tested with extensive simulations.

•CCndnS reduces content redundancy to almost one copy per data chunk.
•CCndnS evens out traffic by balancing the popularity of cached content.
•CCndnS introduces the skipping technique to avoid futile searches on Content Stores (CSs).
•CCndnS breaks content dependencies among caches.
•CCndnS yields a straightforward mathematical model for analyzing cache behavior.
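The segment placement and skip-forwarding idea described in the summary can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function names, the one-segment-per-hop mapping, and the list-of-routers path representation are all assumptions made here for clarity.

```python
# Toy sketch of CCndnS-style placement and skipping (illustrative only).

def placement_hop(segment_index: int, path_length: int,
                  segments_per_hop: int = 1) -> int:
    """Map a file segment to the hop (0 = edge router nearest the requester)
    that should cache it. The head of the file lands at the edge; later
    segments are pushed progressively toward the publisher. Segments beyond
    the path length are assigned to the last router before the publisher."""
    hop = segment_index // segments_per_hop
    return min(hop, path_length - 1)

def forward_request(segment_index: int, path: list) -> list:
    """Return the routers whose Content Stores a request actually searches.
    Under the skipping idea, only the router designated for this segment is
    checked; caches earlier on the path are skipped, avoiding futile lookups."""
    hop = placement_hop(segment_index, len(path))
    return [path[hop]]
```

For example, on a three-hop path `["R0", "R1", "R2"]`, a request for segment 0 searches only the edge router `R0`, while a request for segment 5 goes straight to `R2`, the router nearest the publisher.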
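The paper develops its own analytical model for the decoupled caches; as a generic illustration of provisioning purpose (1) above (sizing a cache to hit a target hit rate, such as the 50% SLA example), here is a sketch using Che's approximation for a single LRU cache under Zipf-like popularity. Both the use of Che's approximation and the Zipf workload are assumptions of this sketch, not the model from the paper.

```python
import math

def che_hit_rate(cache_size: int, popularities: list) -> float:
    """Overall hit probability of an LRU cache under Che's approximation.
    Solves sum_i (1 - exp(-q_i * T)) = cache_size for the characteristic
    time T by bisection, then averages per-object hit probabilities."""
    lo, hi = 0.0, 1e12
    for _ in range(100):
        T = (lo + hi) / 2
        occupancy = sum(1 - math.exp(-q * T) for q in popularities)
        if occupancy < cache_size:
            lo = T
        else:
            hi = T
    T = (lo + hi) / 2
    total = sum(popularities)
    return sum(q * (1 - math.exp(-q * T)) for q in popularities) / total

def min_cache_for_target(target: float, popularities: list) -> int:
    """Smallest cache size (in objects) reaching the target hit rate,
    found by binary search (the hit rate is monotone in cache size).
    Returns the full catalog size if the target is unreachable."""
    lo, hi = 1, len(popularities)
    while lo < hi:
        mid = (lo + hi) // 2
        if che_hit_rate(mid, popularities) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

With a catalog of 1000 objects whose popularities follow a Zipf law with exponent 0.8, `min_cache_for_target(0.5, ...)` returns the number of cache slots an SLA client would need to reserve for a 50% hit rate; purpose (2) would then re-evaluate the hit rates of the remaining content with the residual cache space.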
ISSN: 0140-3664
1873-703X
DOI: 10.1016/j.comcom.2019.12.053