Cloud-assisted wireless networks are emerging solutions that unite wireless networks and cloud computing to deliver cloud services directly from the network edges to support the foreseen massive demands from data- and computation-hungry mobile users. In this chapter, we first provide an overview of the two emerging cloud-assisted wireless network paradigms – namely, the cloud radio access network (C-RAN), which aims at centralization of base station (BS) functionalities, and mobile-edge computing (MEC), which aims at providing the RAN with computing and storage resources. We then leverage the C-RAN and MEC paradigms to design novel cooperative caching frameworks that explore the synergies of the in-network computing and storage resources. Specifically, a novel cooperative hierarchical caching framework is designed in C-RAN, where caching is performed both at the distributed BSs and at the cloud processing unit (CPU), which bridges the gap between the traditional edge-based and core-based caching schemes. Furthermore, a joint cooperative caching and processing framework is designed in a MEC network, where the MEC servers perform both cache storage and video transcoding to support adaptive bitrate (ABR) video streaming.
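The serving logic behind such a cooperative hierarchical scheme (local BS cache first, then cooperating BSs, then the CPU cache, and finally the core network) can be sketched as follows; the class and method names are illustrative, not taken from the chapter.

```python
# Minimal sketch of two-tier cooperative cache lookup in a C-RAN
# (illustrative only: names and data are hypothetical).

class HierarchicalCache:
    def __init__(self, bs_caches, cpu_cache):
        self.bs_caches = bs_caches   # dict: BS id -> set of cached file ids
        self.cpu_cache = cpu_cache   # set of file ids cached at the CPU

    def serve(self, bs_id, file_id):
        """Return where a request is served: 'edge', 'cloud', or 'core'."""
        if file_id in self.bs_caches[bs_id]:
            return "edge"            # hit at the serving BS
        # cooperative lookup: a neighbouring BS may also hold the file
        if any(file_id in c for b, c in self.bs_caches.items() if b != bs_id):
            return "edge"            # served via BS cooperation
        if file_id in self.cpu_cache:
            return "cloud"           # hit at the CPU cache
        return "core"                # fetched from the core network

caches = HierarchicalCache({1: {"a"}, 2: {"b"}}, {"c"})
print(caches.serve(1, "a"))  # edge
print(caches.serve(1, "b"))  # edge (cooperative hit at BS 2)
print(caches.serve(1, "c"))  # cloud
print(caches.serve(1, "d"))  # core
```

Caching at the CPU adds the middle tier that bridges edge-based and core-based schemes: a CPU hit avoids the core network even when no BS holds the file.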
This chapter focuses on the performance enhancement brought by the addition of caching capabilities to full-duplex (FD) radios in the context of ultra-dense networks (UDNs). In particular, we aim at showing that the interference footprint of such networks, i.e., the major bottleneck to overcome to observe the theoretical FD throughput doubling at the network level, can be significantly reduced thanks to edge caching. A caching model is designed to mimic a geographical caching policy based on the popularity of local files and to compute their associated cache-hit probability. Subsequently, we calculate the probability of successful transmission of a file requested by a user equipment, either directly by its serving small cell base station (SCBS) or by the corresponding backhaul node (BN): this quantity is then used to lower-bound the throughput of the considered network. Our approach makes use of tools from stochastic geometry to guarantee the generality of our results and the analytical tractability of the problem.
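A popularity-based geographical caching policy of the kind described above is often analyzed under a Zipf popularity law, with each cache storing the locally most popular files; the cache-hit probability is then the aggregate popularity of the cached files. A minimal sketch under these standard assumptions (the parameters are illustrative, not the chapter's):

```python
# Cache-hit probability of a most-popular-content policy under a
# Zipf popularity law (a common modeling assumption; values are toy).

def zipf_popularity(num_files, alpha):
    """Request probabilities of files ranked 1..num_files (Zipf law)."""
    weights = [r ** (-alpha) for r in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_probability(num_files, cache_size, alpha):
    """Probability that a random request hits a cache holding the
    cache_size most popular files."""
    popularity = zipf_popularity(num_files, alpha)
    return sum(popularity[:cache_size])

# e.g., 1000 files, a 50-file cache, Zipf exponent 0.8
p = hit_probability(1000, 50, alpha=0.8)
```

A cache hit lets the SCBS serve the file directly, removing the corresponding backhaul transmission and its interference contribution.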
We study the delivery of 360°-navigable videos to 5G virtual reality (VR) wireless clients in future cooperative multi-cellular systems. A collection of small cell base stations interconnected via backhaul links are sharing their caching and computing resources to maximize the aggregate reward they earn by serving 360° videos requested by the wireless clients. We design an efficient representation method to construct the 360° videos such that they deliver only the remote scene viewpoint content genuinely needed by the VR users, thereby overcoming the present highly inefficient approach of sending a bulky 360° video, whose major part is made up of scene information never navigated by a user. Moreover, we design an optimization framework that allows the base stations to select cooperative caching/rendering/streaming strategies that maximize the aggregate reward they earn when serving the users for the given caching/computational resources at each base station.
In this chapter, we study the techno-economic challenges for one of the most promising new caching paradigms, the elastic wireless edge caching solution, by which third parties dynamically lease storage resources in a wireless cloud. The main idea is the following: a mobile network operator (MNO) advertises storage prices for servers placed in proximity to the end users, and various content providers lease on-demand capacity to improve the quality of their services. We describe the main concepts and existing business models for the elastic CDN solution, provide an overview of the related work, and discuss the key differences between in-network and edge caching. We then present a detailed model for this system where the caches reside in cellular base stations. We formulate a problem where cache dimensioning, content caching, and request routing decisions are jointly optimized by a central processor (CP) to reduce content delivery delay, subject to a given leasing budget. We design a suite of dynamic solution algorithms based on the Lyapunov drift-minus-benefit technique, and present numerical experiments that quantify the benefits of elastic over typical static cache deployments.
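The flavor of a drift-minus-benefit algorithm can be sketched as follows, under simplifying assumptions that are ours, not the chapter's: a virtual queue Q tracks cumulative leasing spend against a per-slot budget, and each slot the CP greedily picks the option minimizing Q·cost − V·benefit, where V trades off budget drift against delay-reduction benefit.

```python
# Toy drift-minus-benefit decision rule with a virtual budget queue
# (an illustrative sketch, not the chapter's actual algorithm).

def drift_minus_benefit_step(Q, options, budget, V):
    """Pick the (cost, benefit) option minimizing Q*cost - V*benefit,
    then update the virtual budget queue."""
    cost, benefit = min(options, key=lambda o: Q * o[0] - V * o[1])
    Q_next = max(Q + cost - budget, 0.0)
    return (cost, benefit), Q_next

Q = 0.0
budget, V = 1.0, 10.0
# each option: (leasing cost, delay-reduction benefit) -- toy numbers
options = [(0.0, 0.0), (1.0, 0.6), (3.0, 1.0)]
for _ in range(100):
    choice, Q = drift_minus_benefit_step(Q, options, budget, V)
```

The virtual queue remains bounded whenever the time-average cost of the chosen options stays within the budget, which is exactly the guarantee Lyapunov-style arguments provide; larger V pushes decisions toward higher benefit at the price of larger transient budget overshoot.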
Wireless edge caching for mobile social networks (MSNs) has emerged as one of the prospective solutions to provide reliable and low-latency communication services for mobile users on social networking. In this chapter, we first give an overview of MSNs, including their development and challenges. We then discuss mobile edge caching (MEC) paradigms to address emerging issues for MSNs, e.g., service delay, users’ experience, and economic efficiency. In addition to the advantages, the development of MEC networks also poses some key challenges, such as the hierarchical architecture of MEC networks, proactive caching, and privacy and security issues. To address these issues, we present a framework that can authenticate MSN users based on public-key cryptography and predict their content demands utilizing a matrix factorization method. Based on the prediction, an optimal content caching policy for an MEC node is presented to minimize the average latency of all MSN users under the MEC nodes’ storage capacity constraints. Furthermore, this framework provides an optimal business model to maximize the revenue for MSN service providers based on the demands of the MSN users and the obtained optimal caching policy.
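Matrix-factorization-based demand prediction of the kind mentioned above learns low-rank user and content factors from observed demand scores, then predicts unobserved (user, content) demands from their inner product. A minimal SGD sketch on synthetic data (hyperparameters and data are illustrative, not the chapter's):

```python
import random

# Toy matrix factorization for content-demand prediction via SGD
# (illustrative sketch; the chapter's exact formulation may differ).

def factorize(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02, epochs=1000):
    random.seed(0)
    U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    C = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - sum(U[u][f] * C[i][f] for f in range(k))
            for f in range(k):
                u_f = U[u][f]                      # keep old value for C's update
                U[u][f] += lr * (err * C[i][f] - reg * U[u][f])
                C[i][f] += lr * (err * u_f - reg * C[i][f])
    return U, C

# observed (user, content, demand score) triples -- toy data
obs = [(0, 0, 5), (0, 1, 1), (1, 0, 4), (1, 1, 1), (2, 1, 5)]
U, C = factorize(obs, n_users=3, n_items=2)
# predicted demand of user 2 for content 0:
pred = sum(U[2][f] * C[0][f] for f in range(2))
```

The predicted demands then feed the caching policy: the MEC node caches the contents with the highest aggregate predicted demand, subject to its storage capacity.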
Large-scale data analysis is becoming an important source of information for mobile network operators (MNOs). MNOs can now investigate the feasibility of possible new technological advances such as storage/memory utilization, context awareness, and edge/cloud computing using analytic platforms designed for big data processing. Within this context, studying caching from a mobile data traffic analytical perspective can offer rich insights for evaluating the potential benefits and gains of proactive caching at base stations. In this chapter, we study how data collected by MNOs can be leveraged using machine learning tools in order to infer insights into the benefits of caching. Through our practical architecture, vast amounts of data can be harnessed for content popularity estimation and for strategically placing content at base stations (BSs). Our results, obtained on real-world data sets collected from a major MNO, demonstrate gains in terms of both content demand satisfaction and backhaul offloading rates.
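The basic pipeline described above (estimate popularity from a request trace, cache the top contents at a BS, measure how much backhaul traffic is offloaded) can be illustrated with a toy counting-based estimator; the trace and numbers below are synthetic, not from the MNO data set.

```python
import random
from collections import Counter

# Toy popularity-estimation / proactive-caching pipeline
# (synthetic trace; a stand-in for the chapter's ML-based estimators).

def offloading_ratio(trace, cache_size):
    """Fraction of (held-out) requests served from a BS cache holding
    the contents most popular in the first half of the trace."""
    train, test = trace[: len(trace) // 2], trace[len(trace) // 2 :]
    top = {c for c, _ in Counter(train).most_common(cache_size)}
    hits = sum(1 for req in test if req in top)
    return hits / len(test)

random.seed(1)
trace = ["a"] * 50 + ["b"] * 30 + ["c"] * 15 + ["d"] * 5
random.shuffle(trace)
ratio = offloading_ratio(trace, cache_size=2)
```

Every cache hit is a request satisfied locally instead of over the backhaul, so the hit ratio directly measures both demand satisfaction and backhaul offloading; the chapter's contribution is replacing the naive counting above with machine-learning estimators over large-scale MNO data.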
This chapter presents a content-centric framework for transmission optimization in cloud radio access networks (RANs) by leveraging wireless edge caching and physical-layer multicasting. We consider a cache-enabled cloud RAN, where each base station (BS) is equipped with a local cache and connected to a central processor (CP) via a backhaul link. The BSs acquire the requested contents either from their local caches or from the core network via the backhaul links. We first study the caching effects on multicast-enabled access downlink, where users requesting the same content are grouped together and served by the same BS or BS cluster using multicasting. We study the cache-aware joint design of the content-centric BS clustering and multicast beam-forming to minimize the system total power cost and backhaul cost subject to the quality-of-service (QoS) constraints for each multicast group.
This chapter investigates the impact of caching in interference networks. First, we briefly review the basics of some classic interference networks and the corresponding interference management techniques. Then we review an interference network with caches equipped at all transmitters and receivers, termed the cache-aided interference network. The information-theoretic metric of normalized delivery time (NDT) is introduced to characterize the system performance. The NDT in the cache-aided interference network is discussed for both single-antenna and multiple-antenna cases. It is shown that with different cache sizes, the network topology can be opportunistically changed to different classic interference networks, which leverages the local caching gain, the coded multicasting gain, and the transmitter cooperation gain (via interference alignment and interference neutralization). Finally, the NDT results are extended to the partially connected interference network.
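For reference, the NDT is commonly defined as the worst-case delivery time in the high signal-to-noise-ratio regime, normalized by the time needed to deliver a single file of L bits over an interference-free point-to-point link of capacity log P; the sketch below follows the standard definition in the literature, and the chapter's exact notation may differ:

```latex
\delta(\mu_T, \mu_R) \;=\; \lim_{P \to \infty} \; \limsup_{L \to \infty} \; \frac{T(\mu_T, \mu_R)}{L / \log P}
```

where $\mu_T$ and $\mu_R$ denote the normalized transmitter and receiver cache sizes, $P$ is the transmit power, and $T$ is the delivery time. Thus $\delta = 1$ corresponds to interference-free delivery, and a smaller NDT means faster delivery.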
In this chapter, a novel framework is proposed to address critical mobility management challenges, including frequent handovers (HOs), handover failure (HOF), and excessive energy consumption for seamless HO in emerging dense wireless cellular networks. In particular, we develop a model that exploits broadband millimeter-wave (mmW) connectivity, whenever available, to cache content that mobile user equipments (MUEs) are interested in. This enables the MUEs to use the cached content and avoid unnecessary HOs to small cell base stations (SCBSs) with relatively small cell sizes. First, we develop a geometric model to derive tractable, closed-form expressions for key performance metrics, such as the probability of caching, the cumulative distribution function of the caching duration, and the average data rate for content caching over an mmW link. In addition, we provide insights on the performance gains that caching in mmW–microwave networks can yield in terms of reducing the number of HOs and the average HOF rate.
We consider joint caching, routing, and channel assignment for video delivery over coordinated small-cell cellular systems of the future internet. We formulate the problem of maximizing the throughput of the system as a linear program in which the number of variables is very large. To address channel interference, our formulation incorporates the conflict graph that arises when wireless links interfere with each other due to simultaneous transmission. We utilize the column generation method to solve the problem by breaking it into a restricted master subproblem that involves a select subset of variables and a collection of pricing subproblems that select the new variable to be introduced into the restricted master problem, if that leads to a better objective function value.
Edge caching has received much attention as an efficient technique to reduce delivery latency and network congestion during peak-traffic times by bringing data closer to end users. Existing works usually design caching algorithms separately from the physical layer design. In this chapter, we analyze edge-caching wireless networks by taking the caching capability into account when designing the signal transmission. In particular, we investigate multi-layer caching, where both the base station (BS) and the users are capable of storing content data in their local caches, and analyze the performance of edge-caching wireless networks under two notable caching strategies, uncoded and coded caching. We first calculate the backhaul and access throughputs of the two caching strategies for arbitrary values of the cache size. The required backhaul and access throughputs are derived as functions of the BS and user cache sizes. Then closed-form expressions for the system energy efficiency (EE) corresponding to the two caching methods are derived. Based on the derived formulas, the system EE is maximized via precoding vector design and optimization while satisfying a predefined user request rate. Finally, two optimization problems are proposed to minimize the content delivery time for the two caching strategies.