Hubs Become Central to the IoT

Even before real systems are widely deployed, the Internet of Things (IoT) is rushing into a period of rapid evolution. Early—and, frankly, simplistic—ideas about IoT architecture are giving way to more nuanced views, often based on analysis of data flows and on hard questions of why the IoT really matters. The result will be new architectures, leading to new silicon. We will illustrate this trend with snapshots of three new IC designs described at this year’s Hot Chips conference.

Let’s begin with today’s concepts. Many systems designers’ first impressions of the IoT fit into one of two camps: conservatives or idealists (Figure 1). The conservatives remain focused on conventional embedded design and see the IoT as an additional layer of requirements to be slathered over their existing designs. The idealists see the IoT as an opportunity to virtualize nearly everything, drawing all tasks except physical sensing and actuating back into the cloud. Often the best solutions turn out to be linear combinations of the extremes. But these compromises will bring about the emergence of whole new categories of computing near the network edge.

Figure 1. Conservatives and idealists see the IoT very differently.


Two Simple Ideas

Perhaps the most common perception of the IoT among designers of industrial, infrastructure, and aerospace systems—the heartland of embedded computing—is of just more requirements. They see the IoT in terms of new functions, such as passive data-logging, remote update, or perhaps remote command capability, that require Internet connectivity.

So the first question is, obviously, how to physically connect to the Internet. If the embedded controller is at least a modest-sized board already connected to an industrial network or Ethernet, this isn’t much of a problem. But if the controller is either small—only a microcontroller unit (MCU), for example—or physically isolated, getting to the Internet can mean additional hardware: a WiFi port, a Bluetooth interface, or some combination of the myriad short-range wireless links the IoT has spawned in recent years. And of course any new connection will require a wireless hub to connect to, and a protocol stack on your system.
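
To make the device-side requirements concrete, here is a minimal sketch of publishing one sensor reading over MQTT with TLS, written against the open-source paho-mqtt 1.x client API. The broker address, topic, and certificate paths are placeholders, not references to any particular product.

```python
# Minimal sketch: publishing one sensor reading over MQTT with TLS,
# using the paho-mqtt 1.x client API. The broker address, topic, and
# certificate paths are placeholders.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="controller-01")
client.tls_set(ca_certs="ca.pem",           # CA that signed the broker cert
               certfile="device-cert.pem",  # this device's certificate
               keyfile="device-key.pem")    # ...and its private key
client.connect("broker.example.com", 8883)  # 8883: MQTT over TLS
client.loop_start()                         # run the network loop in background

reading = {"sensor": "spindle-temp", "celsius": 71.4}
info = client.publish("factory/line3/telemetry", json.dumps(reading), qos=1)
info.wait_for_publish()                     # block until broker acknowledges

client.loop_stop()
client.disconnect()
```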

But there is another critical—and often underappreciated—layer in this incremental approach to the IoT: security. Connecting an embedded controller to the Internet, however indirectly, connects the controller to every hacker in the world, and raises a bright banner announcing, “I’m here; come probe me!” If the controller has any conceivable ability to harm persons or property, it must take responsibility for authentication, data protection, and functional safety. Even if the controller is doing nothing of importance, it still must be guarded against malware. A recent massive denial-of-service attack appears to have been launched from an enormous botnet composed at least partly of IoT-connected devices.
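
To make one slice of that responsibility concrete, here is a minimal sketch of message authentication using Python’s standard hmac module. The pre-shared key and wire format are invented for illustration; a real design would also need secure key provisioning and replay protection.

```python
# Minimal sketch: authenticating a command message with HMAC-SHA-256.
# The pre-shared key and message format are placeholders; a real system
# would provision keys securely and also protect against replay.
import hmac
import hashlib

KEY = b"provisioned-device-key"   # hypothetical pre-shared key

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(message), tag)

cmd = b"valve7:open"
tag = sign(cmd)
assert verify(cmd, tag)                  # authentic command accepted
assert not verify(b"valve7:shut", tag)   # tampered command rejected
```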

This protection is more easily prescribed than accomplished. As the international news relates nearly every week, even government agencies and global enterprises have failed to secure their systems. IoT developers are impaled on a dilemma: they must do better, but with far fewer physical resources. A hardware security module (HSM) inside an MCU would be barely adequate, yet today it is physically unattainable.

Difficulties notwithstanding, the great advantage of this conservative view is what it conserves. The latencies and bandwidths of data flows in the embedded system remain intact—or at least they should, if connectivity and security tasks don’t introduce new uncertainties into the system. So real-time tasks continue to meet deadlines and the transfer functions of control loops remain the same. This is an obvious benefit for a multi-axis motor controller. But it can even be valuable in a system as apparently plodding as a building’s lighting management system.

An Ideal, Lost

The idealist’s approach to the IoT is entirely different. Start with a clean sheet of paper. Draw in all the necessary sensors and actuators. Now put an Internet connection on each one, and create a cloud application to read the sensors and command the actuators. In effect, this is a completely virtual system. You can change not only operating parameters, but algorithms and even the purpose of the system simply by changing software. For industrial applications the phrase “software-defined machine” has been suggested.

But those devils in the details are legion. And most of them relate to the presence of the Internet at the heart of the system. Internet messages are subject to unpredictable delays over a wide range—including, at the extreme, forever. So a system using this ideal architecture must tolerate delayed or lost messages. This requirement is so constraining that it leads many experienced designers to reject the idealized architecture out of hand, no matter how theoretically flexible it might be. And there is another issue.
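
As a minimal sketch of what that tolerance can look like, consider an actuator loop that applies the latest cloud command only while it is fresh and reverts to a safe local default otherwise. The staleness threshold and fail-safe value here are invented for illustration.

```python
# Minimal sketch: an actuator loop that tolerates a delayed or lost
# cloud connection by falling back to a safe local default. The
# staleness threshold and safe value are illustrative choices.
import time

STALE_AFTER_S = 2.0    # how old a cloud command may be before we ignore it
SAFE_SETPOINT = 0.0    # fail-safe output when the cloud goes quiet

last_cmd = None        # (timestamp, setpoint) of the newest cloud command

def on_cloud_command(setpoint: float) -> None:
    global last_cmd
    last_cmd = (time.monotonic(), setpoint)

def actuator_output() -> float:
    if last_cmd is None:
        return SAFE_SETPOINT
    age = time.monotonic() - last_cmd[0]
    return last_cmd[1] if age < STALE_AFTER_S else SAFE_SETPOINT
```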

The same connectivity and security requirements that descended on our conservative embedded system still apply. The sensors and actuators still must talk with the Internet, and must still defend themselves against it. But now we are adding these demands not to a board-level computer, but to tiny sensors, solid-state relays, or motor-controller MCUs. The relative overhead is huge, and the likelihood is high that an attack will overwhelm the simple security measures a tiny, battery-powered or energy-scavenging device can mount. So what to do?

These questions have led many architects to seek a middle path, neither conservative nor idealistic. They are moving critical computing functions to an intermediate location, between the sensors and the Internet. Often this intermediate site also acts as a wireless hub.

Intermediators

The idea of moving computing to an intermediate point, often between a short-range wireless network and an Internet connection, raises many new questions. Which tasks should go where? Just how much computing power and adaptability does this smart hub require? And does this arrangement require new algorithms, or is it really just a repartitioning of a conventional embedded system?

The answers to these questions come from first finding the weakest link in the system. In this case, that link would be the public, non-deterministic, occasionally absent Internet. The objective, then, is to distribute tasks among local sites, the hub, and the cloud so that no latency-sensitive data flow has to traverse the Internet and, secondarily, so that computations sit as close as possible to the data they consume.
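
Stated as code, the guideline is almost mechanical. The toy sketch below assigns each task to the outermost tier whose worst-case latency still meets the task’s deadline; the latency figures are illustrative assumptions, not measurements.

```python
# Toy sketch of the partitioning guideline: route each task to the
# outermost tier whose worst-case round-trip latency still meets the
# task's deadline, so heavy computation lands where resources are
# cheapest and latency-critical flows never traverse the Internet.
# The latency figures are illustrative assumptions.
TIER_LATENCY_S = {"local": 0.001, "hub": 0.010, "cloud": 1.0}  # worst case

def place(task_deadline_s: float) -> str:
    for tier in ("cloud", "hub", "local"):   # outermost tier first
        if TIER_LATENCY_S[tier] <= task_deadline_s:
            return tier
    raise ValueError("no tier can meet this deadline")

print(place(10.0))    # -> 'cloud'  (e.g., maintenance scheduling)
print(place(0.05))    # -> 'hub'    (e.g., multi-axis coordination)
print(place(0.002))   # -> 'local'  (e.g., current loop in a motor drive)
```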

If we try to follow these guidelines in practice, we will see that in some applications the conservatives are exactly right: the best solution is to keep the computing resources local and simply layer on connectivity and a degree of security. But we can identify at least three other interesting cases.

Enter the Smart Phone

One interesting case arises when there is a functional advantage to combining the operations of several nearby controllers. This situation might come up, for example, when several controllers are working on different parts of the same process, but all of them would benefit from the sensor data their neighbors are collecting. Moving the control algorithms to a wireless hub that gathers all the sensor data and controls all the actuators can allow superior control optimization.
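
A toy sketch of why aggregation helps, using an invented three-zone heating example: a hub-level controller can weigh each zone’s own error against the building-wide average, something no purely local controller can see.

```python
# Toy sketch of the aggregation argument: a hub-level controller that
# sees every zone's sensor can do better than isolated per-zone
# controllers. Gains, topology, and readings are invented values.
TEMPS = {"zone1": 19.5, "zone2": 21.0, "zone3": 22.5}   # latest readings
SETPOINT = 21.0

def command(zone: str) -> float:
    own_error = SETPOINT - TEMPS[zone]
    # A purely local controller would stop here; the hub can also lean
    # against the building-wide average so zones don't fight each other.
    avg_error = SETPOINT - sum(TEMPS.values()) / len(TEMPS)
    return 1.0 * own_error + 0.3 * avg_error

for z in TEMPS:
    print(z, round(command(z), 2))
```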

Today such systems will typically be implemented using short-range wireless links, from local wireless MCUs on the sensors and actuators to a proprietary wireless hub. If the area to be covered gets too large for a low-power wireless link, the system can escalate to an industrial-strength wireless network, WiFi, or even a cellular connection to bridge longer distances.

The eventual deployment of 5G service—sometime after 2020, probably—could simplify this picture further, offering a single medium for local links, longer-range connections, and the pipe back to the Internet. But mentioning cellular service brings up an interesting point that may prove valuable well before 5G is in place.

If we look at the implementation of the hub, we see an increasingly complex system. There are provisions for connectivity, both upward to the Internet and outward to sensors and actuators. The latter wireless connections must be flexible in RF front end, baseband, and protocol to cope with the mass confusion of wireless-network quasi-standards. Software-defined radio would be a reasonable response to the current mess.

Then there is the actual controller, where the algorithms are executed. This too must provide considerable headroom, as access to all that sensor data will probably lead to a call for more elaborate and demanding algorithms, perhaps requiring hardware acceleration for real-time tasks. And there are security needs, since the hub will bear most of the authentication and encryption responsibility for the system. These needs may dictate a hardware crypto accelerator and a secure key store.

From a distance, this could sound like a description of a very different kind of device: a smart phone (Figure 2). And in fact there is considerable interest in using smart phones, or even a subset of a smart-phone chip set, as a hub in control applications. The Internet connectivity is already in place via either WiFi or the cellular network, at least some of the needed local wireless link support is there, and Android provides an open platform that is relatively easy to extend. But what about processing power?

Figure 2. The block diagram for a smart phone SoC can look very similar to what we’d want for a smart hub.


This year’s Hot Chips conference gave a look at a mobile application processor design that hinted at an answer. The chip, from MediaTek, aimed at a compromise between performance and energy efficiency. Starting from ARM’s big.LITTLE concept, the MediaTek designers came up with a ten-core processing subsystem arranged in three clusters: four low-power, 1.4 GHz ARM® Cortex®-A53 cores in one cluster, four 2.0 GHz A53 cores in a second, and two speed-optimized 2.5 GHz A72 cores in a third. All share a hierarchical coherent interconnect and a dynamic task scheduler. The ten-CPU subsystem should be able to move gracefully and on the fly from MCU-like power efficiency to, given enough threads, near server-class compute performance, while executing a mix of real-time and background tasks. That is, after all, what an advanced smart phone requires, and it sounds a great deal like what we want from our hub.
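
To make the scheduling idea concrete, here is a toy sketch of routing tasks to the least power-hungry cluster that can meet their compute demand. The capacity numbers and the policy are invented for illustration; they are not MediaTek’s actual scheduler.

```python
# Toy sketch of a little-medium-big dispatch policy: route each task to
# the least power-hungry cluster whose per-core throughput covers the
# task's demand. Throughput figures are invented illustrative units,
# not MediaTek's real scheduler or measured numbers.
CLUSTERS = [                      # (name, per-core throughput), cheapest first
    ("A53 @ 1.4 GHz", 1.0),
    ("A53 @ 2.0 GHz", 1.4),
    ("A72 @ 2.5 GHz", 3.0),
]

def choose_cluster(demand: float) -> str:
    """demand is in units of one slow-A53 core's throughput."""
    for name, per_core in CLUSTERS:
        if per_core >= demand:    # first (cheapest) cluster that suffices
            return name
    return CLUSTERS[-1][0]        # too big for any one core: use big cores

print(choose_cluster(0.3))   # background logging   -> A53 @ 1.4 GHz
print(choose_cluster(1.2))   # protocol processing  -> A53 @ 2.0 GHz
print(choose_cluster(2.5))   # control algorithm    -> A72 @ 2.5 GHz
```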

Since this cluster is embedded in a smart-phone SoC, it will be accompanied by a GPU, a cellular modem, WiFi and Bluetooth support, near-field radio, and security hardware. Given the volumes cell-phone SoCs reach in production, pricing should be aggressive. So these SoCs can be very attractive bases for IoT hubs.

Power to the Edge

Ten CPU cores might seem massive overkill for a hub that is basically just reading sensors, executing a control algorithm, sending commands to actuators, and serving as a firewall. Granted, the number of CPUs is inflated by MediaTek’s little-medium-big strategy, and the plethora allows you to lock a time-critical task to a dedicated CPU if you need to. But more than that, the abundance of processing power serves a trend: algorithms are getting more complex.
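
On a Linux-based hub, pinning a task to a core is nearly a one-liner. A minimal sketch, assuming a Linux host, since os.sched_setaffinity is Linux-specific:

```python
# Minimal sketch: pinning the current process to a single CPU on a
# Linux hub, so a time-critical control task is never migrated between
# cores. os.sched_setaffinity is Linux-specific; CPU 9 is a hypothetical
# core index (say, one of the big cores in a ten-core part).
import os

os.sched_setaffinity(0, {9})    # pid 0 means "this process"; CPU 9 only
print(os.sched_getaffinity(0))  # -> {9}
```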

You can see the trend in more sophisticated control functions and in, for example, use of Kalman filters, with their intense matrix arithmetic, in sensorless motor control and battery management. But with the resurgence of machine learning, the trend is about to blossom.
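
To see where that matrix arithmetic comes from, here is a minimal sketch of one predict/update step of a linear Kalman filter in NumPy. The constant-velocity model with a position-only measurement is a generic textbook example, not any particular motor-control formulation.

```python
# Minimal sketch: one predict/update step of a linear Kalman filter,
# showing the matrix arithmetic such estimators impose on a hub CPU.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[1e-2]])                  # measurement noise covariance

x = np.zeros((2, 1))                    # state estimate
P = np.eye(2)                           # estimate covariance

def step(z: float):
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

step(0.12)                              # one noisy position sample
print(x.ravel())
```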

Among the earliest manifestations of this resurgence was vision processing. Designers recognized that—quite apart from its obvious uses in surveillance and automotive driver assistance—image classification could often be the most effective way to estimate the state of a system. One camera can see what it might take hundreds of sensors to measure. An early application used fixed cameras to observe a street, and image processing to determine which parking spaces were occupied, replacing dozens of buried sensors and hundreds of meters of underground cable.

The ascendancy of convolutional neural networks (CNNs) as the most successful image-classification algorithm has led to the use of CNNs in IoT hubs, and to the exploration of other forms of deep-learning networks there, such as recurrent neural networks. The evaluation of such models quickly uses up CPU cores. That leads to interest in many-core processors and in hardware accelerators, such as GPUs or FPGAs, for the hubs. And that brings us to a second Hot Chips paper.
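
A back-of-envelope sketch shows why. Counting the multiply-accumulate operations in a single convolutional layer, with dimensions invented but loosely typical of image-classification networks:

```python
# Back-of-envelope sketch: multiply-accumulate count for one
# convolutional layer, to show why CNN inference eats CPU cores.
# Layer dimensions are illustrative, not from any specific network.
def conv_macs(h, w, c_in, c_out, k):
    # one k x k x c_in dot product per output pixel per output channel
    return h * w * c_in * c_out * k * k

macs = conv_macs(h=112, w=112, c_in=64, c_out=128, k=3)
print(f"{macs/1e9:.2f} GMACs for a single layer")   # ~0.92 GMACs
```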

At the conference, Movidius described a deep-learning SoC—essentially, a collection of fixed-function image processors, RISC CPU cores, vector processors for matrix arithmetic, and memory blocks, all optimized for evaluating deep-learning networks. The company claimed performance superior to that of two unidentified GPUs, but at power low enough to need neither a fan nor even a heat sink.

Shifting Concepts

We’ve watched an evolution from connected local controllers to smart hubs to hubs hosting deep-learning networks. This may prove a long-term solution for systems that can be satisfactorily managed using only their current observable state as input. But there is growing interest in going beyond this concept, to systems that can call not only upon their own state, but upon history, and even upon unstructured pools of seemingly unrelated data. This is the realm of big-data analysis.

Examples of the use of big-data techniques in system management predate the current popularity of deep learning. Machine maintenance systems have used big-data analyses of operating history to identify predictors of impending failure, for example, or to track down the location of parts from a suspect lot. The gradual blending of traditional big-data techniques, such as statistical analyses and relevance ranking, with deep-learning algorithms will only promote the importance of cloud-based analyses to embedded systems.

That does leave us with several questions. First, how much does the big-data algorithm need to know about the state of the system, and in how timely a manner? The presumption in most marketing presentations seems to be that the system will continually log all of its state to the cloud. That is how you get a PowerPoint slide saying that a smart car generates 25 GB per hour of new data. But it seems far more likely that the IoT hub will filter, abstract, represent, and prioritize the state information, reducing the flow significantly.
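
As a minimal sketch of one such reduction, consider deadband reporting with an invented threshold: the hub forwards a sample to the cloud only when it differs meaningfully from the last value sent.

```python
# Minimal sketch: deadband reporting at the hub. Only samples that move
# meaningfully from the last reported value go to the cloud. The
# threshold and sample values are illustrative.
DEADBAND = 0.5   # report only changes larger than this

def filter_stream(samples):
    last_sent = None
    sent = []
    for s in samples:
        if last_sent is None or abs(s - last_sent) > DEADBAND:
            sent.append(s)       # forward to the cloud
            last_sent = s
    return sent

raw = [20.0, 20.1, 20.2, 20.1, 23.7, 23.8, 23.7, 20.2]
print(filter_stream(raw))        # -> [20.0, 23.7, 20.2]
print(f"{len(filter_stream(raw))}/{len(raw)} samples forwarded")
```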

Another question involves performance in the cloud. If the cloud-based analysis is being done to predict next month’s energy consumption or to schedule annual maintenance, there is no great hurry. If it is part of a low-frequency control loop or a functional-safety system, there are hard deadlines. And that is where the third Hot Chips paper comes in.

Baidu presented a software-defined, FPGA-based accelerator for cloud data centers, intended to slash execution time for a wide variety of big-data analyses. Baidu’s specific example was an SQL query accelerator—which would be useful in about 40 percent of big-data analyses, the presenter said. But the reconfigurable architecture would be applicable across a wider range of tasks. Such acceleration, particularly when paired with a deterministic network connection, could extend the usefulness of big-data algorithms in control systems, working hand-in-hand with smart hubs.

We have seen how practical issues, such as bandwidth, latency, or security, argue in favor of smart IoT hubs. Once the hub is smart, at least three quite different architectures become interesting: the hub as connected system controller, the hub as deep-learning controller, and the hub as agent for a cloud-based big-data system (Figure 3). Some combination of the three should be right for just about any connected embedded system.

Figure 3. Three very different architectures can serve different needs for IoT systems.




7 comments to “Hubs Become Central to the IoT”

  1. A much more realistic view of IoT, cloud computing, and fog computing. Besides managing the overwhelming data a direct connection would require, the local intelligence could also provide a much-needed low-latency decision-making capability, such as that of a Safety Instrumented System, but for vital IoT applications. It also provides a greater opportunity for a layer of local HMI that could be vital to real-world applications and commercialization.

  2. Will lead? Will Reorganize the IoT around Smart Hubs. Hmmm. Seems to me that SmartThings (now owned by Samsung) and Wink figured that would be required *years* ago.

  3. Interesting that you said, “those devils in the details are legion.” in the part about the idealists’ view of the IoT. I am reminded of Mark 5:9 and Luke 8:30, in which Jesus Christ asks a demon his name, and he replies “Legion” because there are many demons in the possessed man. I don’t know if that was intentional.

    Good point about using a single camera rather than a large number of sensors to, for example, determine occupied parking spaces. I’d like to point out that beyond the capex and opex to install and maintain such sensors, the camera(s) have the advantage of reliability. Crapped out in-pavement left turn lane sensors are a continual pain at intersections near my house. And in the parking garage at the Cosmopolitan hotel in Las Vegas, they have red and green lights indicating occupied or not parking spaces; great idea, but if somebody parks his SmartCar in or out a ways from the sensor, you think the space is free when it’s not. Not to mention sensors or indicator lights that fail. Several redundant cameras would be much cheaper to install and maintain and more reliable than hundreds of sensors. I guess it wouldn’t help the indicator part, though. In the case of parking spaces, the lines are already on the pavement for humans to see, so it should be relatively easy for image processing systems to determine occupancy.

    I’m trying not to be long-winded here, but I just wanted to say that I really appreciate the articles you write, Ron. They give me a good insight into some areas that I might not otherwise know about, my tendency leaning toward analog, RF, and microwave circuits and systems. I find the level of detail just right. And I suspect that you really enjoy researching and writing about embedded systems and processing. Your dream job?

    • Jim:
      Thank you for the great thoughts and the kind words. Yes, it’s a dream job. As I once heard someone describe journalism, the perfect job for a nosy person with a short attention span. And thanks for catching the reference on Legion.
      ron

  4. Intelligent Internet of Trust: IIoT is better than IoT. Each Thing publishes its public key to local key servers. The gateway then uses this identity to add the Thing to a set of rules defining interaction on the local net and access to external (cloud) resources. I do not want my coffee pot talking to my neighbor’s dishwasher. This is the Trust.
    The local protocol is a distributed machine-learning system. This protocol learns and adapts the network to the users.

  5. We lost lots of ground when we abandoned ATM and UWB technologies.

  6. Thanks Ron for a very interesting and thought provoking article. It’s nice to raise the head out of the current deadlined project and dream a little, lol.
