Wireless Multihop Device-to-Device Caching Networks
Published in | IEEE Transactions on Information Theory, Vol. 63, No. 3, pp. 1662–1676 |
Main Authors | , , , , |
Format | Journal Article |
Language | English |
Published | New York: IEEE, 01.03.2017 (The Institute of Electrical and Electronics Engineers, Inc.) |
Summary: | We consider a wireless device-to-device network, where n nodes are uniformly distributed at random over the network area. Each node caches M files from a library of size m ≥ M. Each node requests a file from the library independently at random, according to a popularity distribution, and is served by other nodes that have the requested file in their local cache via (possibly) multihop transmissions. Under the classical "protocol model" of wireless networks, we characterize the optimal per-node capacity scaling law for a broad class of heavy-tailed popularity distributions, including Zipf distributions with exponent less than one. In the parameter regime of interest, i.e., m = o(nM), we show that a decentralized random caching strategy with uniform probability over the library yields the optimal per-node capacity scaling of Θ(√(M/m)) for heavy-tailed popularity distributions. This scaling is constant in n, thus yielding throughput scalability with the network size. Furthermore, the multihop capacity scaling can be significantly better than in single-hop caching networks, for which the per-node capacity is Θ(M/m). The multihop capacity scaling law can be further improved for a Zipf distribution with exponent larger than some threshold greater than one, by caching uniformly at random across a subset of the most popular files in the library. Namely, ignoring a subset of less popular files (i.e., effectively reducing the size of the library) can significantly improve the throughput scaling while guaranteeing that all nodes are served with high probability as n increases. |
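The decentralized placement described in the abstract can be illustrated with a minimal simulation sketch (not from the paper; all function names and parameter values below are illustrative assumptions): each node independently fills its cache of size M uniformly at random from the m-file library, requests are drawn from a Zipf popularity distribution, and we check what fraction of requests can be served by at least one cache in the network, alongside the Θ(√(M/m)) versus Θ(M/m) scaling terms.

```python
import math
import random

def uniform_random_caching(n, m, M, seed=0):
    """Decentralized placement: each of the n nodes independently
    caches M distinct files drawn uniformly at random from a
    library of m files (no coordination between nodes)."""
    rng = random.Random(seed)
    return [set(rng.sample(range(m), M)) for _ in range(n)]

def zipf_request(m, alpha, rng):
    """Draw one file index from a Zipf popularity distribution
    with exponent alpha over files 0..m-1 (file 0 most popular)."""
    weights = [1.0 / (k + 1) ** alpha for k in range(m)]
    r = rng.random() * sum(weights)
    acc = 0.0
    for k, w in enumerate(weights):
        acc += w
        if r < acc:
            return k
    return m - 1

# Illustrative sizes: n = 200 nodes, library m = 100, cache size M = 10.
n, m, M = 200, 100, 10
caches = uniform_random_caching(n, m, M)
rng = random.Random(1)

# Fraction of Zipf requests (exponent < 1, i.e. heavy-tailed) that
# some node somewhere in the network can serve from its cache.
requests = [zipf_request(m, 0.6, rng) for _ in range(1000)]
hits = sum(any(f in c for c in caches) for f in requests)
print("servable fraction:", hits / 1000)

# Per-node capacity scaling terms: multihop achieves Theta(sqrt(M/m)),
# independent of n, while single-hop scales only as Theta(M/m).
print("multihop term sqrt(M/m):", math.sqrt(M / m))
print("single-hop term M/m:", M / m)
```

With uniform placement, the probability that a given file is cached nowhere is (1 − M/m)^n, which vanishes rapidly in n; this is the sense in which all requests are served with high probability as the network grows.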
ISSN: | 0018-9448 1557-9654 |
DOI: | 10.1109/TIT.2017.2654341 |