
From Here to 5G: A Roadmap of Challenges

Deploy 5G wireless networks by 2020! From research labs to service providers, these words have come to define a shared goal, almost a mantra. As a goal, the words justify all manner of R&D activities. But as a mantra, they betray a fundamental lack of definition. Just what is 5G? And what problems must we solve in order to deploy it?

On the surface, definition seems simple: 5G is the next generation of cellular technology after 4G. But peel back the surface, and you find something less appealing: a plethora of interests wrestling and squirming to promote their own product plans to ensure network support for their services, or merely to stave off obsolescence. Among their voices, you can hear demands for a diverse list of wishes.

Mobile users, if they’ve even thought about it, imagine 5G as meaning faultless provision of HD movies, games, and augmented reality experiences, moving with them through different environments and devices. Such service implies:

  • Over 10 Gbps bandwidth for an endpoint
  • A thousand times more bandwidth delivered into a given area compared to 4G
  • Ten to 100 times more devices in a given area
  • Perception of five-nines availability and ubiquitous geographic coverage

These are from the perspective of the mobile user. A network architect might look at this list and write the following imperatives:

  • A 90 percent reduction in network energy consumption per bit
  • Bandwidth aggregation across any available spectrum and access networks
  • Under 10 ms switching time between different radio access networks
  • Dynamic spectrum allocation, peer-to-peer connections, and self-backhauling capabilities

With her attention drawn to all this discussion, an Internet-of-Things (IoT) developer might add some needs of her own:

  • Under 1 ms round-trip latency
  • Up to ten-year battery life for simple devices on the network
  • Industrial-grade functional safety and security

Perhaps the best way to sort out this oversupply of expectations is to recognize that there are, in fact, at least three categories of engineers already thinking about 5G. First, there are those who envision 5G as an extension: 4G services done well, delivered with more bandwidth to more users. Second, some see 5G not as a single wireless network, but as the unification of virtually all wireless connections, including short-range, tiny-cell, WiFi, and cellular. Third, there are IoT developers who want this unified network to meet the requirements of IoT systems, including energy savings, low latency, reliability, and security.

Each of these groups has its own list of expectations. For each list we can begin to identify the specific technology milestones 5G must achieve.

Show Me the Movie

There are many users who would be delighted if 5G could just deliver what their service provider has already promised for 4G: Wi-Fi-like bandwidth while they are between hot spots. These folks just want to see the end of the football game, or to get their kids in the car before the credits roll on the 2,314th viewing of Frozen. Some apps vendors salivate over this bandwidth as well, imagining the location-based video or augmented reality they could serve up to greedy consumers. Service providers can imagine getting in on some of that revenue they lose to WiFi, or even weaning homes off of cable or DSL connections.

But delivering this kind of bandwidth, especially in dense urban markets, will require a break from many of the assumptions of legacy wireless. If you need to deliver more continuous Mbps to an endpoint, you have two choices—a fatter pipe or more pipes. Both options figure into 5G.

Fat Pipes

To make a fatter pipe, you again have basically two choices. You can pack the bits closer together with more clever and demanding coding schemes, or you can give each channel more bandwidth. You can also, in principle at least, gain some efficiency on streaming data by making frames longer and simplifying overhead. Each of these choices has implications for the whole network.
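
As a rough illustration of these two levers (a wider channel versus denser coding), the Shannon limit C = B·log2(1 + SNR) caps what any waveform can deliver over a given channel. The sketch below uses purely illustrative bandwidth and SNR figures, not 5G targets.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR), the ceiling for any coding scheme."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers only: a 20 MHz LTE-like channel versus a hypothetical
# 200 MHz channel in lightly licensed spectrum, both at 20 dB SNR.
for bw_hz in (20e6, 200e6):
    capacity = shannon_capacity_bps(bw_hz, snr_db=20)
    print(f"{bw_hz / 1e6:.0f} MHz at 20 dB SNR -> ceiling of {capacity / 1e6:.0f} Mbps")
```

The tenfold gain from widening the channel comes essentially for free in the math; squeezing the same gain out of coding alone would require an enormous improvement in effective SNR.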

More powerful waveforms and coding schemes are a possibility, but they may not be used on existing frequency bands, suggests Altera® wireless applications expert Richard Maiden. “We have a hazy crystal ball for looking at baseband algorithms,” he warns. “Right now it appears there will be no change in the waveforms in the bands up to 2.5 GHz. Between 2.5 and 10 GHz we may see some waveform changes to improve spectral efficiency. As for wider pipes, this is unlikely to occur below 2.5 GHz due to pre-allocated spectrum. Between 2.5 and 6 GHz there is only light licensing, and therefore channels of the order of 100-200 MHz could be deployed. At this point the new bands between 28 and 60 GHz are a blank sheet of paper; very wide channels could be deployed here.”

These new, higher-frequency bands, as Maiden suggested, offer another opportunity—more bandwidth per channel (Figure 1). But this offer also comes at a cost. Additional RF circuits and antennas will be needed for the new bands. Baseband data converters will need higher sampling rates and perhaps more resolution, especially if transmitters need to employ digital pre-distortion. The Fast Fourier transforms in the baseband units will need to be wider and/or lower latency.

Figure 1. There are many potentially available bands in the land above 6 GHz, but each has its own circuit, antenna, and propagation situation.

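To make the data-converter and FFT implications concrete, here is a back-of-envelope sketch of OFDM dimensioning. It assumes an LTE-like numerology (15 kHz subcarrier spacing, roughly 90 percent occupancy, FFT size rounded up to a power of two); these assumptions are carried over from 4G for illustration, not taken from any 5G specification.

```python
# Back-of-envelope OFDM dimensioning, assuming LTE-like 15 kHz subcarrier
# spacing. Wider channels force larger FFTs and faster data converters.
SUBCARRIER_SPACING_HZ = 15e3   # assumption carried over from LTE

def ofdm_dimensions(channel_bw_hz: float, occupancy: float = 0.9):
    """Return (fft_size, sample_rate_hz) for a given channel bandwidth."""
    used_subcarriers = occupancy * channel_bw_hz / SUBCARRIER_SPACING_HZ
    fft_size = 1
    while fft_size < used_subcarriers:
        fft_size *= 2                       # round up to the next power of two
    return fft_size, fft_size * SUBCARRIER_SPACING_HZ

for bw_hz in (20e6, 100e6, 400e6):
    n_fft, fs = ofdm_dimensions(bw_hz)
    print(f"{bw_hz / 1e6:.0f} MHz channel -> {n_fft}-point FFT at {fs / 1e6:.2f} Msps")
```

The familiar 20 MHz case lands on the 2048-point FFT and 30.72 Msps that LTE uses; a 400 MHz millimeter-wave channel would need a 32K-point transform at roughly half a gigasample per second unless the subcarrier spacing were widened.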

And there are physical limitations. “RF propagation characteristics get poorer as the frequency increases,” Maiden observes. At 60 GHz RF doesn’t even penetrate moist air very well, let alone walls. “Fat pipes at high frequencies will be short pipes,” Maiden summarizes. “To fully exploit 60 GHz you will need lots of short pipes.”

More Pipes

An alternative approach to delivering more bandwidth to an endpoint is to aggregate multiple channels, using several in parallel to deliver packets to the user. In fact LTE-A already supports carrier aggregation. But 5G may take the idea much further, allowing aggregation of carriers from different bands or even different kinds of devices—for example combining a few channels from small cells with one channel from a cell tower and one WiFi connection. All of these ideas are latent in LTE-A today, but they are not widely deployed.

Aggregation presents two sorts of challenges. First and most obvious, both the network and the endpoints will have to be able to handle the data rates involved. Today’s handset architectures are not intended to deal with 10 Gbps in either the baseband or the application processor. Consequently, handset providers will need to develop next-generation devices that can meet the needs of the network.

For the base station, the situation is even more challenging. Larger volumes of data are moving through, getting split up among multiple pipes, and must arrive at the handset while maintaining some semblance of correct packet sequence. And there is a larger network-management issue. A single stream of data for a handset may be split up among one or two macrocell towers and a handful of small cells. With the small cells likely working in the millimeter-wave bands, connections to a moving handset—not to mention a vehicle—will be quite volatile, requiring frequent restructuring of the connections in response to rapidly shifting channel characteristics.
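
A minimal sketch of that resequencing problem, purely illustrative: packets split across several aggregated carriers arrive interleaved and must be merged back into order at the receiver (in LTE-A this reordering actually happens in the PDCP/RLC layers).

```python
import heapq

def resequence(carrier_streams):
    """Merge packets delivered over several aggregated carriers back into
    sequence order. Each stream is a list of (sequence_number, payload)
    tuples, in order within its own carrier. Illustrative only."""
    expected = 0
    for seq, payload in heapq.merge(*carrier_streams):   # merge by sequence number
        if seq != expected:
            raise ValueError(f"gap in sequence: expected {expected}, got {seq}")
        yield payload
        expected += 1

# Example: a macrocell carrier and two small-cell carriers each deliver
# a slice of the same downlink stream.
macro   = [(0, "p0"), (3, "p3"), (6, "p6")]
small_a = [(1, "p1"), (4, "p4"), (7, "p7")]
small_b = [(2, "p2"), (5, "p5")]
print(list(resequence([macro, small_a, small_b])))   # ['p0', 'p1', ..., 'p7']
```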

MIMO

As the picture becomes more complex, with multiple cell sites pumping signals through multiple bands to one device, higher-order multiple-in/multiple-out (MIMO) antennas become increasingly attractive. The base stations can use massive MIMO antenna arrays to direct narrow beams at individual endpoints. The endpoint devices can use much more modest arrays to beam-shape their own receiver sensitivity and transmit energy (Figure 2). With MIMO beamforming, it is possible to reduce transmit power and still get better channel characteristics. But beamforming needs a considerable increase in baseband processing capacity to do the matrix arithmetic used to compute the phase differences among the antenna elements.
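
To give a feel for that arithmetic, here is a minimal sketch of the per-element phase weights for a uniform linear array steered toward one user. The 64-element array, half-wavelength spacing, and 28 GHz carrier are illustrative assumptions; a real massive-MIMO baseband computes weights per subcarrier from measured channel matrices.

```python
import numpy as np

def steering_weights(n_elements: int, spacing_m: float,
                     freq_hz: float, angle_deg: float) -> np.ndarray:
    """Complex weights that point a uniform linear array angle_deg from
    broadside. Illustration only; real beamforming derives its weights
    from estimated channel matrices, per subcarrier and per user."""
    wavelength = 3e8 / freq_hz
    theta = np.deg2rad(angle_deg)
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing_m * n * np.sin(theta) / wavelength
    return np.exp(-1j * phase) / np.sqrt(n_elements)

# A hypothetical 64-element array at 28 GHz, half-wavelength spacing, steered 30 degrees.
wavelength_28ghz = 3e8 / 28e9
w = steering_weights(64, spacing_m=wavelength_28ghz / 2, freq_hz=28e9, angle_deg=30)
print(w[:4])   # first few element weights
```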

Figure 2. Massive MIMO cell sites can aim narrow beams at specific clients, while simpler MIMO arrays in handsets can provide some directional selectivity.

Spatial Density

Aggregation and MIMO address not only delivered bandwidth, but another major wishlist item—the ability to get far more Gbps into a given space—a suburban neighborhood, for instance, or a floor in an office building. But as density increases, another issue arises. “It looks easy to just crowd an area with small cells,” Maiden observes. “But it is not always an improvement. Most networks are interference-limited.” As the density of small cells goes up, keeping them from stepping on each other and the macro network becomes a critical network-management problem. Channel allocation and beamforming become vital tools.

“But beamforming requires tracking,” Maiden cautions. Somehow the network has to track the physical location of the user and coordinate its cells. Otherwise there will be too many handover events and the new base station may lose its new client before narrowing and aiming a beam.

Clearly there is a lot of tracking, monitoring, allocating, and optimizing going on in the network. But where? One school argues for total centralization (Figure 3)—virtualize all the baseband processing and move it into a metro data center along with the control-plane code and apps, then run digitized waveforms back and forth between the radio heads and the data center over dedicated fiber. This is called a centralized radio access network (C-RAN).

Figure 3. C-RAN pulls the base station electronics out of cell sites and centralizes it in a metro data center, sending digitized baseband waveforms back and forth between data center and towers.

At the other extreme, some envision a peer network in which smart basebands located close to the antenna negotiate directly with each other to manage the network, only using the central facility for back-office services like accounting. The concept could extend all the way to allowing the endpoints to make direct peer-to-peer connections, in effect turning everyone’s handset into a tiny cell site. And of course there are many options between the extremes.

One factor in choosing a topology is fronthaul—getting data between the baseband and the radio. Ideally that would be done with a metro Ethernet, using a standard like IEEE 1904 for digitized radio waveforms over Ethernet. But the data rates can get huge.

“Today a 4×4 MIMO radio head at 20 MHz baseband requires 5 Gbps on a Common Public Radio Interface (CPRI),” Maiden says. “If you have 128 transmit or receive antennas, that jumps to 160 Gbps.” With massive MIMO and today’s metro networks, pure C-RAN may be simply infeasible.
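
Maiden’s figures can be reproduced with a simple fronthaul calculation. It assumes LTE’s 30.72 Msps baseband sampling for a 20 MHz carrier, 15-bit I and Q samples, CPRI’s 16/15 control-word overhead, and 8b/10b line coding; the rate then scales linearly with the antenna count.

```python
def cpri_rate_bps(n_antennas: int,
                  sample_rate_hz: float = 30.72e6,      # 20 MHz LTE baseband
                  bits_per_sample: int = 15,            # per I and per Q
                  control_overhead: float = 16 / 15,    # CPRI control words
                  line_coding: float = 10 / 8) -> float:  # 8b/10b
    """Approximate CPRI line rate for one carrier across n antennas."""
    per_antenna = sample_rate_hz * 2 * bits_per_sample   # I + Q samples
    return n_antennas * per_antenna * control_overhead * line_coding

print(f"4 antennas:   {cpri_rate_bps(4) / 1e9:.1f} Gbps")    # ~4.9 Gbps
print(f"128 antennas: {cpri_rate_bps(128) / 1e9:.0f} Gbps")  # ~157 Gbps
```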

Smaller cells have an easier problem. With integrated baseband hardware and relatively low throughput, they can make their own transport arrangements though whatever media happen to be available, even grabbing a local WiFi or cell-tower connection. In a pure peer network the question is moot, as every node in effect uses its neighbors for connection back into the network.

The technology for such networks seems tantalizingly plausible. At the endpoints there is nothing new. Even massive MIMO for macrocell towers is old stuff to a military radar designer. The big unknown is how to do all that baseband computing within tight power budgets—and within that call for a 90 percent cut in network energy at the base stations. Semiconductor process improvements alone can’t get there, even with the most careful ASIC design.

Network Management

Another big unknown is network management. As the network becomes more complex, with a mixture of macrocell towers, small cells, possibly customer-installed tiny cells, WiFi hubs, and perhaps peer-to-peer connections, management becomes an interesting problem. The goal is that all users should always have up to their full 10 Gbps available on demand, with imperceptible latency. The reality is a network fundamentally different from today’s 4G topologies. In 5G, nodes may be arranged almost at random rather than in geographic patterns. Different nodes will support different combinations of bands, and may have different levels of beamforming capability. Propagation within a channel will change with the season, weather, time of day, and unpredictable events like someone opening a door or stepping in front of an antenna. And of course many of the users will be moving; some of the most critical, like automated vehicles, will move quite rapidly. Successful connection management and optimization algorithms remain to be demonstrated, whether for centralized or distributed control.

Enter the IoT

So far we have assumed that 5G users are much like the 4G users—they run apps on their devices, stream video ranging from cute cat clips to feature films, exchange photos and text, and may even try the odd phone call. But another, very different category of prospective users is casting lustful eyes at 5G’s planned ubiquity and bandwidth— developers of the IoT.

In many cases the last few links in the IoT—between the Internet connection and aggregating hubs, and between hubs and sensors or actuators—will from practical necessity be wireless. So the motivation to use a wireless network that is already in place is obvious. But to meet their needs, IoT developers want to overlay telecom providers’ plans for 5G with a new set of very different requirements.

First, IoT developers are much more concerned about latency than about bandwidth. If you are designing a control system in which dumb sensors and actuators couple to control algorithms in the cloud, round-trip network latency matters—it is a parameter in system performance. Some cite 1 ms as the upper bound. Note in passing that latency is also a concern for some important non-IoT applications, including connected vehicles and augmented-reality displays.
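
One way to see why the round trip is a system parameter: in a cloud-hosted control loop the network delay adds directly to the loop’s dead time and caps how fast the controller can react. The sensing and compute figures below are illustrative assumptions, not measurements.

```python
def max_loop_rate_hz(network_rtt_s: float, sense_s: float, compute_s: float) -> float:
    """Rough upper bound on closed-loop update rate when the controller is
    remote: each cycle must fit sensing, one network round trip, and compute.
    A budget sketch, not a formal control-theory result."""
    return 1.0 / (sense_s + network_rtt_s + compute_s)

# Illustrative: a 1 ms round trip versus the tens of milliseconds typical of 4G.
for rtt_s in (1e-3, 20e-3):
    rate = max_loop_rate_hz(rtt_s, sense_s=0.2e-3, compute_s=0.5e-3)
    print(f"RTT {rtt_s * 1e3:.0f} ms -> loop rate limited to roughly {rate:.0f} Hz")
```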

Second, IoT developers are often obsessive about energy—not for the network, but for the endpoints. They want a low-speed, very-low duty-cycle mode that would allow a device to use the 5G network but still have a ten-year battery life, or that would allow a device to subsist by energy harvesting.
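
The ten-year figure translates directly into an average-current budget. The sketch below assumes, purely for illustration, a 2000 mAh battery, a few microamps of sleep current, and one short transmit burst every ten minutes; with those numbers the average draw stays near ten microamps and the target is reachable.

```python
BATTERY_MAH = 2000.0            # assumed AA/coin-class cell
HOURS_PER_YEAR = 24 * 365

def average_current_ma(sleep_ua: float, burst_ma: float,
                       burst_s: float, interval_s: float) -> float:
    """Duty-cycled average current: asleep most of the time, brief radio bursts."""
    duty = burst_s / interval_s
    return burst_ma * duty + (sleep_ua / 1000.0) * (1 - duty)

def battery_life_years(avg_ma: float) -> float:
    return BATTERY_MAH / avg_ma / HOURS_PER_YEAR

# Illustrative: 3 uA sleep, 100 mA for a 50 ms burst once every 10 minutes.
avg_ma = average_current_ma(sleep_ua=3, burst_ma=100, burst_s=0.05, interval_s=600)
print(f"average {avg_ma * 1000:.1f} uA -> about {battery_life_years(avg_ma):.0f} years")
```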

Third, the IoT will—after a few disasters—require levels of connection reliability and data security completely foreign to today’s cellular networks. Nobody really cares today if a governmental entity steals a copy of your cat video. Most users don’t even value their privacy enough to install an encryption app, despite the levels of intimacy they sometimes commit to their devices. But many people would care if an IoT attack shut off electricity in New York.

All three categories of IoT-specific needs are challenges, not in themselves, but because they are new to the cellular network. Today 4G prioritizes maintaining the connection and delivering some token bandwidth over minimizing latency. Service providers will compromise latency to gain utilization, even to the point where users are ready to switch vendors. But for IoT users, the networks will have to manage to a wide variety of quality-of-service requirements, even as they are coping with switching among more, smaller, heterogeneous cells.

Energy is equally an issue. Today’s networks are based on continuous side-channel signaling between cell sites and endpoints, monitoring state, tracking approximate location, and measuring channel parameters. There is no room in this protocol for an endpoint that suddenly appears out of nowhere, sends a short burst of data with a minimalist header, and then vanishes again, expecting 1 ms latency from burst to response. And as for reliability and security, these needs will take cellular networks into completely unfamiliar territory.

The challenges are clear. Telecom consortia, operating companies, and equipment providers are rushing to define the right questions, conduct research, and begin product development. Expect these efforts to gradually converge on a phased deployment that begins with realization of functions, like carrier aggregation, already latent in LTE-A. Later, the industry will move on to more features. The schedule is highly uncertain and fraught with unquantified risks. But you can bet that something—some service that can plausibly be called 5G—will be on the air by the end of 2020. Mission accomplished.


CATEGORIES: All, IoT, Wireless / AUTHOR: Ron Wilson

10 comments to “From Here to 5G: A Roadmap of Challenges”

  1. The big problem with 5G is that it wants to serve everything and replace any other form of connectivity, wired and wireless. Folks are also trying to keep complexity high, to keep hardware prices high and new entrants away.

    5G today is aimed at serving carriers and the current leaders in 4G, expanding the SAM and keeping consumers hungry for data (when you aren’t starving for data, the carriers hit a wall; it’s against their interests to offer enough, it always has to be less than the consumer needs) while also keeping licensing payments at very high levels.

    What consumers need is a very, very low power connection from a modem that can fit in glasses (volume is a problem too, not just power and thermals) and data they can afford. Simple, cheap (hardware and data), very low power and thermals in a very small volume, and reasonable speeds is what is needed. What we’ll get won’t be that; the world doesn’t work like that.
    If 5G networks were built and operated more like roads, carriers disappeared, and standard-essential patents were free, the world could get the 5G it needs instead of the 5G aimed at forcing the consumer to just pay more and more.

    On the other hand, 5G is an opportunity for a new standard to take over the world. If 5G doesn’t serve the market, someone could try to offer what the consumer actually needs.

  2. “In a pure peer network the question is moot, as every node in effect uses its neighbors for connection back into the network.”

    In my opinion, this is the direction things must … and will … go. But to make it work we need to rediscover ATM and UltraWideBand.

    IP just can’t hop fast enough. IP is lucky to make 20 hops in 1/8th second (minimal responsiveness for voice) with no QoS. ATM can make thousands of hops in this time because it is hardware and connection oriented. Since these hops will be shorter, UWB can be employed as the low power high bandwidth alternative of choice.

    This will be resisted aggressively by the carriers … because when it finally deploys, the backbone is no longer needed. And we will have untappable links, so the NSA won’t like it.

  3. “If you are designing a control system in which dumb sensors and actuators couple to control algorithms in the cloud, round-trip network latency matters—it is a parameter in system performance.”

    I think that would put you in the “dumb sensor” category yourself.

  4. “There is no room in this protocol for an endpoint that suddenly appears out of nowhere, sends a short burst of data with a minimalist header, and then vanishes again, expecting 1ms latency from burst to response. ”

    But ATM, through its designed-in SVC and various AAL mechanisms, has such provisions. Rediscovery of ATM is imminent.

  5. ” Telecom consortia, operating companies, and equipment providers are rushing to define the right questions, conduct research, and begin product development. ”

    The industry successfully squashed superior technologies like ATM and UWB more than a decade ago. But I don’t think they can suppress them forever. Physics wins out in the end.

  6. P.S. I really appreciate your articles Ron.

  7. Challenges mean opportunity, because we must meet the fast-growing demand.

  8. Ron,

    Great summary. Appreciate your vision and clarity on these complex issues.

    One of the major issues in the 4G to 5G migration is the total lack of interest and awareness of the impact and requirements for next generation video architecture and applications.

    The business and technical oligarchs are still assuming that the same 1970s dumb, static analog CODEC-based video architecture will simply continue to grow in 5G.

    Yet, everyone knows that the exponential growth in inefficient analog CODEC video data transfer is rapidly eating the Internet, and is artificially raising demand to build 5G networks.

    Fortunately, legacy analog CODEC video architecture and apps have already been replaced in certain high performance/high security US military and intelligence programs.

    The DVO (Digital Video Object) architecture was developed for DoD/DARPA (Defense Advanced Research Projects Agency) in 2008 to replace the analog CODEC video architecture.

    DVO beta apps are now running on Apple/iOS and Android mobile platforms, and run live streaming video at much smaller file sizes and on much lower and less stable bandwidth.

    DVO also provides much stronger video app and file security, integrated with secure OS and secure cell network technologies that eliminate the FLASH and HTML5 vulnerabilities.

    Next Steps:

    1. Before anyone proposes another 5G strategy, please define how much of this network will run video data. And then please define the impact of reducing video data transfers by 80%.

    2. Contact the Altera/Intel Federal office to learn about the government and commercial release of DVO video architecture and apps for secure mobile video communications.

  9. Interesting article from the perspective of a user. Looking forward to the genius solutions we will come up with!

  10. Turbo coding, LDPC, and digital fountain codes will make 5G happen…
