The ongoing development of the fifth generation (5G) wireless technologies is taking place in a unique landscape of recent advances in information processing, marked by the growing prevalence of cloud-based computing and smart mobile devices. These two technologies complement each other by design, with cloud servers providing the engine for computing and smart mobile devices naturally serving as human interfaces and untethered sensory inputs. Together, they are transforming a wide array of important applications such as telecommunications, industrial production, education, e-commerce, mobile healthcare, and environmental monitoring. We are entering a world where computation is ubiquitously accessible on local devices, on global servers, and on processors everywhere in between. Future wireless networks will provide communication infrastructure support for this ubiquitous computing paradigm, but at the same time they can also exploit the new-found computing power to drastically improve communication efficiency, expand service variety, shorten service delay, and reduce operational expenses.
The previous generations of wireless networks are passive systems. Residing near the edge of the Internet, they serve only as communication access pathways for mobile devices to reach the Internet core and the public switched telephone network (PSTN). Improvements to these wireless networks have focused on the communication hardware and software, such as advanced electronics and signal processing in the transmitters and receivers. Even for 5G, substantial research effort has been devoted to densification techniques, such as small cells, device-to-device (D2D) communications, and massive multiple-input multiple-output (MIMO). The successes of this communication-only wireless evolution reflect the classical view of an information age centered on information consumption through the Internet.
Yet, in many emerging applications, communication and computation are no longer separate, but interactive and unified. Consider, for example, an augmented-reality application displayed on smart eyeglasses: the user's mobile device continuously records its current view, computes its own location, and streams the combined information to the cloud server. The cloud server, in turn, performs pattern recognition and information retrieval, and sends back to the mobile device contextual augmentation labels, to be seamlessly displayed overlaying the actual scenery. As can be seen from this example, there is a high level of interactivity between the communication and computing functions, and a low tolerance for the total delay due to information transmission and information processing.