Abstract
In this paper, we study the asymptotic average delay of random content
requests in a general network of caches, where the delay of a request
is the number of hops needed to fetch the requested content. The key
factors in our study are the dynamic change of content popularity and
the consequent need to learn the content popularity distribution
online, where a complex interplay exists among (a) how long we should
learn the popularity, (b) how often we should replace cached contents,
and (c) how we should use the learned popularity to place contents
over the network. We model this interplay in a generalized framework,
called RLP (Repeated Learning and Placement), and aim to quantify the
asymptotic delay of content requests under various external
conditions. To this end, we first consider an oracle policy that
yields a lower bound on delay, and then propose a policy within our
RLP framework whose performance nearly matches that of the oracle. We
derive this scaling law in delay under different regimes of the speed
of popularity change, the average routing distance in the network of
caches, and the shape of the popularity distribution. We believe our
results are of broad interest for complex networks of caches such as
large-scale CDNs (Content Distribution Networks) and ICNs
(Information-Centric Networks).