The Road Ahead for Wireless Technology: Dreams and Challenges

By Andrea Goldsmith

What does the future of wireless look like?

The next generation of wireless networks – 5G and beyond – will support ubiquitous communications between people and devices, including devices we cannot even envision today.

These networks will have much better reach, reliability, and performance than today’s networks and will support new applications such as smart homes and cities, autonomous cars, drones, robots and virtual and augmented reality, as well as widespread sensing coupled with massive data computation in the cloud. Realizing this vision will require significant technology advances on both the device and the network side.

From a device perspective, smartphones today already have remarkable functionality. These devices are our phones, cameras, music and video players, computers, GPS devices, gaming machines, virtual-reality enablers and e-readers, while also providing a platform for a limitless number of targeted applications, or “apps,” that utilize the smartphone capabilities. These devices could provide much more functionality in the future, including applications that tap emerging machine learning capabilities.  Smartphones could also be much smaller, particularly if we could replace screens with a more intelligent input/output mechanism.

Future smartphones will also incorporate millimeter wave technology to exploit the tens of additional gigahertz of spectrum available in these bands.

On the network side, the focus of the future is on much better performance and reliability than we have today, while supporting a much broader range of devices and applications than current 4G systems.  In particular, 5G networks will need to provide speeds of multiple Gbps for some high-performance devices with 99% coverage indoors and out, while also supporting very small low-power low-rate devices that often won’t need continuous connectivity.

The Skinny on the Backbone

A big challenge in supporting Gigabit data rates for 5G is that there is very limited spectrum in the cellular bands.

According to the FCC’s analysis in its report Mobile Broadband: The Benefits of Additional Spectrum, shown in Fig. 1, by 2014 we were projected to run a spectrum deficit of nearly 300 MHz, even with many people using Wi-Fi for video and other bandwidth-intensive smartphone applications.  Coincidentally, there is approximately 300 MHz of spectrum in the 5 GHz band for Wi-Fi, which is why most people switch to Wi-Fi whenever it is available, particularly for high-speed, low-latency applications such as video and gaming.

Fig. 1: Annual growth of mobile data per cell site and spectrum surplus/deficit

The FCC’s analysis indicates that today’s cellular network cannot support the current data demand from users. Moreover, on the horizon is the so-called “Internet of Things (IoT),” whereby any electronic device with an on-off switch can also have a radio connecting it to the Internet.  It is currently estimated that there will be 20 to 30 billion connected IoT devices by 2020.

While the frenzy around IoT may be overblown, it seems likely that billions more devices will require Internet connectivity in the near future based on the commercial success that some IoT devices, such as connected doorbells and thermostats, are already enjoying.  Given this success, it is reasonable to assume that 5G networks will need to support IoT device connectivity requirements along with the connectivity requirements of future smartphones and other devices that for now live only in our imaginations.

Thus, a big challenge in designing 5G networks is the heterogeneous requirements of the devices and applications these networks will support.  In particular, while smartphones will always benefit from higher data rates and better coverage, many IoT applications do not need high rates or ubiquitous coverage.  In addition, some 5G applications will not have the strict latency requirements of real-time voice, video and gaming, whereas other applications such as autonomous driving have low data rates but stringent latency and reliability constraints.

We need to determine how to build a wireless network that can support the voice and data needs of smartphones along with the wide range of requirements that other 5G devices and applications will entail.

What Would Shannon Say?

Looking at the 300 MHz deficit identified in the FCC report, the following question arises: Do we have that deficit because there is not enough spectrum to meet demand and current networks are at capacity, or is it because there is room for much more innovation in cellular system design that would lead to orders of magnitude better spectrum efficiency?

In other words, are we near or far from the Shannon fundamental data rate limit in wireless network design?

In fact, the Shannon capacity of most wireless networks has been an open problem for decades, so we cannot say whether we are close to that limit or not in current wireless network designs.  In particular, we do not know the capacity of time-varying channels unless we know the channel state perfectly and instantaneously.  The capacity of channels with interference or relays is an open problem in information theory as well.  Hence, since cellular systems are interference limited and ad hoc and sensor networks entail both relaying and interference, we do not know the capacity limits of these networks either.
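For contrast, the one case that is fully solved, the point-to-point additive white Gaussian noise (AWGN) channel, has the famous closed form C = B log2(1 + SNR). A quick sketch, using an illustrative bandwidth and SNR (these particular numbers are not from the article):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of a point-to-point AWGN channel, in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative 20 MHz channel at 20 dB SNR
snr = 10 ** (20 / 10)            # convert 20 dB to a linear ratio
c = shannon_capacity_bps(20e6, snr)
print(f"{c / 1e6:.1f} Mbit/s")   # ~133 Mbit/s
```

Even this simple limit is asymptotic: it says nothing about the energy, delay, or complexity needed to approach it, which is exactly the gap the text describes for full networks.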

Shannon capacity is an asymptotic limit with no constraints on total energy, delay, or complexity. Hence the Shannon limit cannot help us determine whether these networks achieve the minimum energy per bit or minimum delay per bit that is possible, or how constraints on computational complexity at the device and in the infrastructure impact the performance limits of wireless networks.  Given the need for cost efficiency and energy efficiency in next generation networks and devices, we do not have a theory to tell us whether current technology is really close to the best we can do in these performance dimensions or if we are orders of magnitude away.

Despite decades of work by the smartest researchers in the world, we still have limited knowledge about the Shannon capacity limits of wireless networks. Hence we will likely never fully know these performance bounds.  However, information theory, and in particular upper and lower bounds on Shannon capacity, can provide insights, design guidelines, and performance targets for future wireless network designs. Moreover, regardless of the technology used, we can increase wireless network capacity by using more spectrum, which has ignited interest in millimeter wave communications.

mmWave + Massive MIMO:  The Dynamic Duo

Millimeter wave (mmWave) spectrum offers an opportunity to provide significantly higher data rates than are possible in current wireless systems.  As shown in Fig. 2, frequency bands at 60 GHz and above have tens of GHz of spectrum available.  All of our wireless systems today would fit into those mmWave bands.  Moreover, much of that spectrum is currently unlicensed or lightly licensed, meaning that service providers do not have to spend huge amounts of money on spectrum licenses.

Fig. 2: Licensed and unlicensed spectrum in the mmWave frequency bands.  Source: ZTE

One of the main challenges with mmWave systems is attenuation.  For omnidirectional antennas, received power falls off as the inverse of frequency squared, so the higher the frequency, the more path loss you have.  We also see significant attenuation from blocking objects, such as buildings or cars.  Rain is a big attenuator as well, because water droplets absorb and scatter signals at these frequencies. We need to overcome the attenuation challenges of mmWave propagation to ensure reasonable range and coverage for systems operating in these frequency bands.
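The frequency-squared falloff comes directly from the Friis free-space equation. A quick sketch comparing free-space loss only (blocking and rain add further loss on top of this; the 100 m distance is illustrative):

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space (Friis) path loss in dB between isotropic antennas."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 100.0  # metres
loss_24 = fspl_db(d, 2.4e9)   # a familiar Wi-Fi/cellular frequency
loss_60 = fspl_db(d, 60e9)    # a mmWave band
print(f"2.4 GHz: {loss_24:.1f} dB, 60 GHz: {loss_60:.1f} dB, "
      f"extra: {loss_60 - loss_24:.1f} dB")
```

Going from 2.4 GHz to 60 GHz costs about 28 dB of extra free-space loss at any fixed distance, which is the deficit that directional antennas must make up.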

Fortunately, massive MIMO, using tens to hundreds of antennas in an array, can help us reduce or eliminate this attenuation problem. With a large antenna array at any frequency, we can point the energy in a very narrow angular beamwidth.  The more antennas we have, the narrower the beam we can create, so we can compensate for attenuation by concentrating the energy into a finer and finer angular segment.
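This tradeoff can be sketched with two standard rules of thumb: the coherent-combining gain of an N-element array is about 10 log10(N) dB, and the half-power beamwidth of a half-wavelength-spaced uniform linear array is roughly 102/N degrees. Both are idealized approximations, not measured values:

```python
import math

def array_gain_db(n_antennas: int) -> float:
    """Ideal beamforming gain of an n-element array (coherent combining)."""
    return 10 * math.log10(n_antennas)

def beamwidth_deg(n_antennas: int) -> float:
    """Approximate half-power beamwidth of a half-wavelength-spaced
    uniform linear array at broadside: ~102/N degrees."""
    return 102.0 / n_antennas

for n in (16, 64, 256):
    print(f"{n} antennas: {array_gain_db(n):.1f} dB gain, "
          f"{beamwidth_deg(n):.1f} deg beamwidth")
```

A 256-element array thus offers roughly 24 dB of gain in an ever narrower beam, enough under these idealized assumptions to offset much of the extra mmWave path loss.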

In addition, fading goes away with massive MIMO.  Fading results from signals reflecting off buildings or other objects. These reflections arrive at the receiver with different time delays and hence with different phase shifts. The reflections are combined “in the air” at the receiver, which results in constructive (when the phases align) and destructive (when the phases do not align) interference.  This changing interference can cause 20-30 dB dips in received signal power, which is referred to as fading. When the beams of a massive MIMO array are pointed at a very narrow angle, there are almost no reflections from surrounding objects, and hence almost no change in the received signal power due to fading. There is also no interference because we are only looking at a very narrow angular slice of the incoming signals, and most interference falls outside this angular sector.  In short, massive MIMO sidesteps many of the wireless propagation challenges that we have spent the last few decades developing techniques to overcome.
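The constructive/destructive mechanism is easy to see with a two-path toy model, where a reflection arrives with a delay that either aligns with or opposes the carrier phase (the amplitudes and frequency here are illustrative):

```python
import cmath
import math

def received_power_db(amps, delays_s, freq_hz):
    """Relative power (dB) of the phasor sum of multipath components."""
    total = sum(a * cmath.exp(-2j * math.pi * freq_hz * d)
                for a, d in zip(amps, delays_s))
    return 20 * math.log10(abs(total))

f = 2.4e9              # carrier frequency (illustrative)
T = 1 / f              # one carrier period
# Direct path plus a slightly weaker reflection:
aligned = received_power_db([1.0, 0.9], [0.0, T], f)      # phases align
faded   = received_power_db([1.0, 0.9], [0.0, T / 2], f)  # phases oppose
print(f"aligned: {aligned:+.1f} dB, faded: {faded:+.1f} dB")
```

The swing between the two cases is roughly 25 dB, consistent with the 20-30 dB fading dips described above, and it comes from a delay difference of only half a carrier period.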

Massive MIMO brings challenges of its own. A large antenna array takes up significant space. However, mmWave antennas are relatively small as their size is on the order of a signal wavelength, which is inversely proportional to frequency.  Due to the small size of mmWave antennas,  we can pack more of them into a given area. Thus, massive MIMO and mmWave are a dynamic duo in complementing each other’s weaknesses: massive MIMO compensates for the large attenuation in mmWave systems, and the small wavelength of mmWave allows for a large number of antenna elements in a MIMO array of a given size.
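To see how many elements fit, note that arrays are typically built with half-wavelength element spacing. A rough count for a hypothetical 10 cm square aperture (the aperture size is an assumption for illustration):

```python
import math

C = 3e8  # speed of light, m/s

def elements_per_side(aperture_m: float, freq_hz: float) -> int:
    """Antenna elements that fit along one side of a square array
    at half-wavelength spacing."""
    spacing = (C / freq_hz) / 2  # half a wavelength
    return int(aperture_m // spacing) + 1

for f in (2.4e9, 28e9, 60e9):
    n = elements_per_side(0.10, f)   # 10 cm aperture
    print(f"{f / 1e9:g} GHz: {n} x {n} = {n * n} elements")
```

At 2.4 GHz only a handful of elements fit in such an aperture, while at 60 GHz (5 mm wavelength) the same area holds well over a thousand, which is what makes massive MIMO practical at mmWave frequencies.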

There are other challenges of massive MIMO in addition to size. In traditional MIMO architectures, each antenna has its own radio front end and A/D converter, whose cost, size and power consumption scale linearly with the number of antennas. For this reason, hybrid structures with analog as well as digital processing are being developed. The complexity and delay of the signal processing associated with large MIMO arrays are also high, hence there is currently much research into low-complexity, low-latency alternatives.

Rethinking Architecture:  Control in the Cloud

Another significant challenge in 5G design is the network architecture.  The underlying premise of cellular system design, that interference is treated as noise and hence systems are interference limited, has not changed since first generation systems.  Under this premise frequency reuse is designed so that the interference it introduces still allows for the desired communication performance.

As cellular systems have become more sophisticated, mechanisms have been introduced to reduce interference or operate well despite it, but the basic architecture of cellular systems is still based around treating interference as noise that cannot be mitigated. Current technology offers many mechanisms to reduce or exploit interference, from beamforming to multiuser detection to cooperative signal processing across base stations. These advances open the door to rethinking cellular system architectures around interference as something to exploit or ignore rather than overcome.

Another significant change coming in 5G networks is massive deployment of small cells. It is well known that small cells are a key to increased system capacity, as the data rate per square meter grows in inverse proportion to the cell area, since the same spectrum is reused over smaller and smaller areas as the cell size shrinks.
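Under the simple assumption that each cell delivers a fixed aggregate rate over its coverage area, the scaling is easy to quantify: shrinking the cell radius by a factor of ten multiplies the rate per unit area by a hundred (the per-cell rate and radii below are illustrative):

```python
import math

def area_capacity_mbps_per_km2(per_cell_rate_mbps: float,
                               cell_radius_km: float) -> float:
    """Aggregate data rate per km^2, assuming the same spectrum is
    reused in every (circular) cell with a fixed per-cell rate."""
    cell_area = math.pi * cell_radius_km ** 2
    return per_cell_rate_mbps / cell_area

macro = area_capacity_mbps_per_km2(100, 1.0)   # 1 km macro cell
small = area_capacity_mbps_per_km2(100, 0.1)   # 100 m small cell
print(f"macro: {macro:.0f} Mbps/km^2, small: {small:.0f} Mbps/km^2")
```

This simple model ignores inter-cell interference and uneven user density, but it captures why densification, not just more spectrum, drives area capacity.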

Small cells also provide better transmit energy efficiency than large cells at both the device and the base station, because the device and base station are closer to each other and hence require less transmit energy to communicate.  However, we cannot build a network with small cells alone.  We still need big cells for coverage.  In particular, a new cellular network will typically roll out by first focusing on widespread coverage via large cells and then filling in excess capacity with small cells.

Large and small cells are generally deployed today in the same way they have been since first generation networks: a base station is mounted on a tower or pole, measurements are taken, and the static base station parameters are adjusted based on these measurements.  Once this process is complete, the base station’s static parameters are never again adjusted, even when large or small cells are deployed nearby.  Now, however, we can do dynamic optimization of all base station parameters across all cells, both large and small.  In particular, base station measurements can be taken periodically, or whenever network conditions change, and the base station parameters then adjusted to current network conditions.

“Computation in the cloud” is the best way to dynamically optimize a network.  Under this premise, shown in Fig. 3, a self-organizing network (SON) server in the cloud collects measurements from all base stations, whether large or small, and calculates the optimal values of parameters such as the frequencies, power levels, antenna settings, and user data rates assigned to each base station in order to maximize system performance.
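As a toy illustration of one piece of this optimization, channel assignment can be treated as coloring an interference graph built from base station measurements. The base stations and interference map below are hypothetical, and a real SON server would jointly optimize power, antennas, and rates as well:

```python
def assign_channels(neighbors: dict, channels: list) -> dict:
    """Greedy SON-style channel assignment: give each base station the
    first channel not already used by any interfering neighbor.
    A toy sketch of one subproblem a SON server might solve."""
    assignment = {}
    for bs in sorted(neighbors):
        used = {assignment[n] for n in neighbors[bs] if n in assignment}
        assignment[bs] = next(c for c in channels if c not in used)
    return assignment

# Hypothetical interference map: which base stations overhear each other
interference = {
    "BS1": ["BS2", "BS3"],
    "BS2": ["BS1", "BS3"],
    "BS3": ["BS1", "BS2", "BS4"],
    "BS4": ["BS3"],
}
print(assign_channels(interference, [1, 2, 3]))
```

Even this greedy pass reuses channel 1 at BS4, which is far from BS1; the hard part at scale is that the joint problem over frequencies, powers and antennas is far larger and must track changing conditions.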

Fig. 3: Dynamic Optimization of Base Station (BS) Parameters Through a SON Server in the Cloud

This dynamic allocation of resources entails many challenges.  If the resources are optimized centrally, then the algorithmic complexity may be intractable, and there are also challenges of latency and communication costs in getting all the data to the cloud for the optimization.

The question of distributed versus centralized control of this resource allocation is also a key technical challenge.  Distributed optimization can be highly suboptimal while centralized control introduces complexity, latency and backhaul costs to and from the cloud.

An alternative to both distributed and centralized control is “fog” optimization, whereby clusters of neighboring cells form a neighborhood over which resources are optimized; the name contrasts it with centralized cloud optimization. The notion of fog optimization, also shown in Fig. 3, is a relatively new concept, hence many open questions remain: how should the neighborhoods be formed, what size should they be, what entity should perform the fog optimization, how do base stations within a neighborhood communicate, and what information should be exchanged between them?

Backhaul from small cells is another challenge, in particular whether it should be via a separate wireless system, cable, or fiber.  Beyond the technical considerations, there are economic and political challenges, including the cost of dense small cell deployment and getting municipalities to agree to it.  These technical, economic and political issues have prevented small cells from reaching large-scale deployment in current cellular systems, but these barriers seem to be falling for 5G deployments.

In fact, Wi-Fi is today’s small cell and is the primary access mode almost everywhere except in the car and outside metropolitan areas.  It offers lots of spectrum – more than 300 MHz in the 5 GHz band – and excellent physical layer design.  The big problem with Wi-Fi is interference.  The basic MAC design for Wi-Fi dates from the 1970s, although multiuser MIMO and coordination across access points have been introduced recently.  Under this MAC, carrier-sense multiple access with collision avoidance, access points and devices avoid interfering with each other by sensing each other’s signals and deferring their transmissions; when this sensing fails, their signals collide.  This decentralized approach to multiple access significantly degrades the efficiency of spectrum sharing relative to centralized control and scheduling, sometimes to near zero in very dense environments with many Wi-Fi users.
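The collapse under density can be illustrated with a simple slotted random-access model, a deliberate simplification of the actual Wi-Fi protocol: if each of n stations independently transmits in a slot with probability p, the slot is useful only when exactly one station transmits:

```python
def success_probability(n_stations: int, p_transmit: float) -> float:
    """Probability that a contention slot carries exactly one
    transmission in a simple slotted random-access model:
    n * p * (1 - p)^(n - 1)."""
    return n_stations * p_transmit * (1 - p_transmit) ** (n_stations - 1)

# With a fixed access probability, efficiency collapses as density grows
for n in (5, 20, 100):
    print(f"{n} stations: {success_probability(n, 0.1):.3f}")
```

With p fixed at 0.1, the useful-slot fraction falls from about a third at 5 stations to effectively zero at 100, mirroring the near-zero efficiency of dense uncoordinated Wi-Fi; a centralized scheduler avoids this by assigning transmissions rather than letting stations contend.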

SON for Wi-Fi, whereby Wi-Fi access points send their current network conditions to a centralized controller in the cloud and get back resource allocation parameters such as channel and transmit power, can lead to big efficiency gains. This premise is similar to the central control of cellular systems, and is the underlying premise behind enterprise Wi-Fi and some of the newer mesh Wi-Fi home networks. The performance gains over existing Wi-Fi access techniques can be significant because of the distributed and highly inefficient way Wi-Fi access points are controlled today.

If cloud control is a good idea for Wi-Fi, then perhaps we should control all wireless networks in the cloud.  This would enable resource allocation and application mapping to wireless resources in an optimal way through centralized control.

The notion of cloud control for all wireless networks is a premise called software-defined wireless networking (SDWN), illustrated in Fig. 4.  In this architecture, the different types of radios associated with different networks and frequency bands in the overall system are an inexpensive commodity running intelligent software.  This software controls the radio parameters such as frequency allocation, power allocation and antenna parameters. The centralized controller not only optimizes the radio parameters, but it also maps the applications to the different radios and their networks that are best suited for that application’s requirements.

Fig. 4: Software Defined Wireless Network Architecture

For example, for an IoT application, the SDWN maps the device to the most energy-efficient available network. On the other hand, for a high-definition video application, the SDWN might use its mmWave network, which has plentiful spectrum. For an autonomous driving application, the available network with the highest reliability and lowest latency might be used. In general, the SDWN will match the application and the device to the network best suited to them at a given point in time, which will depend on the available bandwidth, proximity, congestion and propagation conditions of each network.
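This matching step can be sketched as a feasibility-then-efficiency search over the available networks. The network profiles and numbers below are purely illustrative, not drawn from any real deployment:

```python
# Hypothetical network profiles for illustration only
networks = {
    "mmWave": {"rate_mbps": 2000, "latency_ms": 5,   "energy_cost": 3},
    "LTE":    {"rate_mbps": 50,   "latency_ms": 30,  "energy_cost": 2},
    "LPWAN":  {"rate_mbps": 0.05, "latency_ms": 500, "energy_cost": 1},
}

def pick_network(min_rate_mbps: float, max_latency_ms: float):
    """SDWN-style matching sketch: among networks meeting an app's
    rate and latency needs, pick the most energy-efficient one."""
    feasible = [(name, prof) for name, prof in networks.items()
                if prof["rate_mbps"] >= min_rate_mbps
                and prof["latency_ms"] <= max_latency_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda item: item[1]["energy_cost"])[0]

print(pick_network(0.01, 1000))   # low-rate IoT sensor
print(pick_network(100, 50))      # high-definition video
```

An IoT sensor lands on the low-power network while video lands on mmWave; a real SDWN controller would also fold in congestion, proximity and propagation conditions, which change over time.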

There are a lot of technical and non-technical challenges for SDWNs, but this is a promising vision for a seamless cloud of connectivity.  Users do not care which network they are using – they just want it to work – and an SDWN architecture could ensure connectivity that meets the requirements of each application whenever that is possible with all the available network resources optimally allocated. Challenges of this vision include the complexity of optimizing network resources, seamless handover between different networks, and controlling heterogeneous wireless hardware across different networks and frequency bands from the cloud.

Building for Tomorrow’s Users

Tomorrow’s wireless networks will need to support high performance applications that require coverage and capacity, IoT applications seeking extreme energy efficiency, and a range of other applications with varying data rate, latency, reliability, and energy constraints.  Advances in the physical layer offer a number of options that we can incorporate into next-generation cellular system design to improve performance.  We also need to ask what performance metrics cellular systems should be designed to optimize.  Rather than focusing only on capacity, the driving constraints in tomorrow’s systems may also include energy, coverage and cost.

In summary, this is a critical time to re-think the technology to incorporate into wireless networks as well as the overall design of such networks. If significant innovation in devices, radios and networks can be realized, we can build 5G networks that will support the exciting applications and devices of the future.