
Is that Drone the Sound of Progress?

By now, thanks to the attentions of marketing writers and news reporters, the word “drone” is overloaded with meanings. Let’s set aside traditional senses: a low-frequency, continuous sound; a male bee; an unsounded string on a musical instrument. We are interested in aircraft.

Somehow in the mid-20th century the military began calling unmanned target aircraft drones. In this century the name has spread—first to other types of unmanned military aircraft, and then to all manner of commercial and consumer flying machines. This article will introduce a taxonomy of these latter drones. We will show that the categories reflect not only manner of use, but also the system architecture. And we will argue that this list of system architectures predicts the roadmap of many different kinds of embedded system designs across the next five years.

A Matter of Control

We can place drones on a spectrum based on how they are controlled. At one extreme are very simple radio-controlled, or even tethered, devices. At the other extreme lie fully autonomous drones capable of completing complex missions without human oversight. Along this line lies our story.

In principle, a quadcopter could be a purely radio-controlled device, with no on-board functionality beyond the motor drivers. The operator could control the speed of each rotor independently. But in practice, even most entry-level consumer drones include a flight controller module that takes responsibility for the vehicle’s stability. This raises the level of commands arriving at the drone. Instead of motor speeds, the operator can think in terms of vehicle attitude and thrust, or even orientation and velocity.

That transition implies a lot more than just parsing commands. Our simple model could be just a four-channel RC receiver passing pulse streams to the electronic speed control (ESC) chips driving the motors. The more realistic model must do more: it has to be aware of the vehicle’s attitude.

And this in turn implies a number of on-board sensors: accelerometers, perhaps a gyroscope, and maybe a flux-gate compass to sense attitude (Figure 1). If you want the drone to also have rudimentary emergency functions, such as Fly Home or Land, add in a GPS receiver for location and a pressure sensor to estimate altitude. All this input joins the commands from the radio receiver at a microcontroller chip—it can be a low-cost 8-bit microcontroller unit (MCU), but a modest ARM* Cortex*-M3 or M7 gives headroom for added features and better control. The MCU uses the sensor inputs to estimate the drone’s location, velocity, and attitude, and combines this information with the command inputs in a software implementation of a proportional-integral-derivative (PID) control loop to compute motor speeds, which it passes along to the ESCs.
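
As a rough sketch of that last step, the fragment below shows a single-axis PID update of the kind such an MCU might run. It is a minimal illustration in Python rather than firmware; the gains, loop rate, attitude numbers, and the motor-mixing comment are assumptions made for the example, not details from the article.

    # Minimal single-axis PID sketch (illustrative; gains and values are hypothetical)
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measured, dt):
            # Classic proportional-integral-derivative law on the attitude error
            error = setpoint - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pitch_pid = PID(kp=4.0, ki=0.5, kd=0.2)    # hypothetical gains
    dt = 0.002                                 # e.g. a 500 Hz control loop
    commanded_pitch = 5.0                      # degrees, from the radio command
    estimated_pitch = 3.2                      # degrees, from the sensor estimate
    correction = pitch_pid.update(commanded_pitch, estimated_pitch, dt)
    print(correction)                          # mixed into the four ESC speed commands

A real flight controller typically runs a loop like this per axis at a fixed rate and mixes the corrections into the individual motor speeds it hands to the ESCs.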

Figure 1. A basic drone today combines sensor input and external commands to control rotor motors.


Drone 100

Our quadcopter is now a very conventional embedded system, with sensor inputs, feedback control loops, and motor outputs. And it has the seeds of autonomous operation in its ability to return home or to execute an emergency landing. It has the obvious potential to be a lot of fun, to act as a super selfie-stick, and to be of some practical value. But some enthusiasts have found more impressive uses for this simple model.

One example is the emerging sport of drone racing. Racing drones can be as simple as the system we have just described, raced over a purely line-of-sight course. But first-person-view (FPV) racing appears to be taking over. FPV drones add a video camera and an RF video downlink to the design, giving a pilot wearing virtual-reality goggles a live, near-real-time view from on board the drone. That means some light video-processing tasks for the MCU and a downlink RF channel, turning the RF receiver into a transceiver.

Racing drones typically also include return-home and emergency-land functions, and sometimes for safety and convenience a GPS-based ring fence definable by the pilot. When the drone approaches the ring fence it takes some pre-determined action such as automatically returning home. Such limited autonomy can prevent drones from becoming out-of-control hazards and can stop them from simply taking off across country. But these drones are still blind: they can’t prevent hard landings, nor can they avoid collisions with hard objects or resentful fellow pilots. Indeed, many introductions to racing warn that you have to accept that sometimes you will crash—in the beginning, that you will crash frequently.
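
To make the ring-fence idea concrete, here is a minimal sketch of how a flight controller might test each GPS fix against a circular fence around the home point and pick a pre-determined action. The coordinates, radius, and action names are hypothetical, and a real implementation would debounce the decision rather than react to a single fix.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two GPS fixes."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def geofence_action(home, fix, radius_m):
        """Return the pre-determined action for the current GPS fix."""
        if haversine_m(home[0], home[1], fix[0], fix[1]) > radius_m:
            return "RETURN_HOME"   # beyond the ring fence: fly back
        return "CONTINUE"          # inside the fence: normal flight

    # Example: a fix roughly 113 m from home, tested against a 100 m fence
    print(geofence_action((37.0001, -122.0001), (37.0011, -122.0003), 100.0))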

Before we take up the subject of curing drone blindness, we should look at one more instance of simple drones doing remarkable things. Some months ago, Intel made quite a media splash with Drone 100: an animated light show provided by 100 drones performing synchronized aerial dancing. The display suggests organized swarm behavior by drones aware of each other’s location. But in fact the system is both simpler and—in a way—more interesting than that, according to Intel® UAV product manager Natalie Cheung.

“The hundred drones are controlled by a single pilot at a single computer,” Cheung explains. “The pilot monitors each drone’s location and speed relative to a preprogrammed pattern, using the on-board GPS receivers, barometers, and other sensors.” Cheung said that the drones follow predetermined, timed courses during the display.

The limited autonomy and location accuracy of the blind drones lead to some restrictions. During performances the program calls for 6 m of separation between drones. The performance area is surrounded by two GPS geofences. At the first fence an errant drone is asked to return. At the second fence, it is commanded to execute an emergency landing. And to stay within environmental limits, the team flies the drones only in ideal weather.

Even with these precautions, a successful show takes a lot of preparation, Cheung says. All the choreography has to be programmed ahead of time, from the take-offs in layers to the similarly-sequenced landings at show’s end. But that’s just the beginning. “We arrive four to six days in advance of a show,” Cheung related. “We air-test each drone, test the corners of the area, and fly quadrants of the drone cloud. And we have learned a great deal about how to use weather data.”

The implications of Drone 100 go far beyond putting on a good flight show, Cheung emphasizes. Synchronized, safe clouds of drones could be of huge value on search-and-rescue missions, covering hectares in the time it would take a person to traverse a few square meters. They could quickly inspect large objects, such as a moving container ship. They could disperse sprays or powders over a wide area in minutes—for instance spraying for mosquitoes while residents briefly shelter in place.

But such missions can’t count on a week of on-site preparation, nor on a window of perfect weather. In order to deploy quickly under adverse conditions, a drone cloud—or for that matter an individual drone—needs more capabilities: the ability to avoid collisions, to navigate by terrain features, and perhaps to stream multiple channels of high-definition (HD) video and telemetry back to the base. And these needs mean more architectural changes.

I Can See That

Simply avoiding collisions can be achieved without massive changes. A set of proximity sensors—for instance sonar, which serves wonderfully for bats—can provide a warning that includes direction, range, and rate of approach. From there some additional code on the flight controller MCU should be able to avoid the easily avoidable. But sometimes a drone’s interaction with an object needs to be more complex than merely not crashing into it. It may be necessary to resolve an object from its surroundings, measure its range and velocity, classify it, or even recognize individuals or their gestures. Now we are talking about more sophisticated sensors and much more processing.
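
Staying with the easily avoidable case for a moment, the sketch below turns one proximity return (range plus closing rate, as a sonar module might report) into a simple avoidance decision. The thresholds and action names are invented for illustration; they are not from the article or any particular sensor.

    def avoidance_command(range_m, closing_rate_mps, min_range_m=1.0, min_tti_s=2.0):
        """Crude avoidance rule for one proximity (e.g. sonar) return.

        range_m          distance to the nearest obstacle in this direction
        closing_rate_mps positive when the obstacle is getting closer
        """
        if range_m < min_range_m:
            return "STOP_AND_HOVER"          # already too close
        if closing_rate_mps > 0 and range_m / closing_rate_mps < min_tti_s:
            return "BRAKE"                   # predicted impact too soon: shed speed
        return "NO_ACTION"

    print(avoidance_command(range_m=3.0, closing_rate_mps=2.5))   # -> BRAKE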

Cheung suggests that one path to this goal would be to adapt Intel’s RealSense technology for use on drones. This would provide an HD visual-spectrum camera, two IR cameras for stereo imaging, and an IR laser illuminator, all in a single assembly. The assembly interfaces directly to an AeroCompute Board, which deploys a quad-Atom CPU cluster plus a small FPGA for I/O expansion. We’ve gone quite a way beyond that lone Cortex-M3.

Another approach to enriching the drone’s sensory environment is illustrated by Aerotenna, a company with a background further down the electromagnetic spectrum, in microwave electronics. Using compact microwave transceivers, the company offers precision altimeters, 360-degree radar for collision avoidance, and synthetic-aperture radar for surface or sub-surface imaging.

Such modules can add hugely to the information available to the drone’s flight controller. But they add a new dimension to the drone’s architecture, too. Rather than video analytics, radar sensors require real-time radar signal processing. To address this need, Aerotenna provides their own flight-controller module, based on a medium-sized FPGA with integral ARM Cortex-A9 cores. The CPUs handle the logic while the FPGA fabric deals with the digital signal processing.

Time to Fuse

At this point we have quite a few kinds of data flowing into our drone, each for its own specific purpose. There are sparse numerical data streams from the inertial navigation unit, altimeter, and GPS receiver. The flight controller uses this data, generally combined and fed into a PID control loop, for positioning—that is, controlling the motor speeds. There may be video that just gets compressed and transmitted back to the pilot’s goggles.

Or there may be richer video, from stereo cameras for example, that is used locally: to assist in location, to avoid objects, or for object recognition. And there may be other media, such as sonar or radar, performing other specific functions. Such a drone design can look a great deal like many other multitask control systems, with each particular subset of sensors, actuators, and processing resources dedicated to a specific task.

This partitioned approach simplifies some important implementation problems, such as ensuring sufficient processing bandwidth and buffer memory for streaming signals, and guaranteeing latency for critical events—like a collision warning, for instance. But it also leaves opportunities on the table.

In many situations it is possible to fuse data from different sources to improve the quality of results, or to perform tasks that weren’t otherwise possible. For example, by combining GPS data with visual reference-point locations and radar bearings and ranges, a Kalman filter can significantly improve the accuracy and response time of location estimates. At a deeper level, a drone that can not only detect an obstacle but also classify it with vision processing has a much better chance of avoiding it—especially if the hazard has an unusual shape or behavior. If you approach a helicopter, for instance, avoiding the fuselage is the most obvious task, but not the most important one.
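
For a flavor of what that fusion involves, here is a minimal one-dimensional, constant-velocity Kalman filter that folds successive position fixes (from GPS, or a radar range mapped into the same coordinate) into a position-and-velocity estimate. The noise values and measurements are hypothetical, and a real drone filter would carry a much larger state and several measurement models.

    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = np.diag([0.01, 0.1])                # process noise (hypothetical)
    R = np.array([[4.0]])                   # measurement noise, e.g. GPS variance in m^2

    x = np.array([[0.0], [0.0]])            # initial state estimate
    P = np.eye(2) * 10.0                    # initial uncertainty

    def kalman_step(x, P, z):
        # Predict forward one time step
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct with the new measurement z
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [1.0, 2.1, 2.9, 4.2]:          # noisy position fixes
        x, P = kalman_step(x, P, np.array([[z]]))
    print(x.ravel())                        # fused position and velocity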

But sensor fusion too will have a significant effect on the drone architecture. Kalman filters are computationally expensive cauldrons of matrix arithmetic. And if the input signals have moderately high dynamic range, the filters may require floating-point arithmetic. Object classification from video is another matter altogether—probably best done with a deep-learning neural network. We are now beyond the range of power-efficient MCUs and into the world of hardware accelerators (Figure 2). But we are still very much constrained by the power and weight envelope of our drone platform. We need not more general-purpose computing power, but energy-efficient acceleration of specific computational kernels.

Figure 2. To approach full autonomy, a drone must have many types of inputs and energy-efficient hardware acceleration for complex computing tasks.


Safe Travels

There is another aspect to drone evolution quite orthogonal to the issues we’ve examined so far: that aspect is functional safety. There are obvious advantages for drone pilots in being sure the vehicle will not wreak havoc among litigious neighbors. But governments, always skeptical of the rationality of their citizens, are stepping in with regulations to enforce common sense. It is now quite difficult, for example, to legally fly a drone in the USA unless you are in continuous visual contact with it. Such restrictions will only be relaxed as they are replaced by functional safety requirements on the drone design.

These requirements will have three different parts. One part will cover design practices—documenting requirements, qualifying intellectual property (IP), documenting implementation decisions and tool choices, and documenting verification—to ensure that the design does exactly what the requirements specify. A second part requires high reliability during operation and fail-safe modes on failure, ensuring that the drone will do in practice what the design does in theory. The third part requires that the drone will intervene when necessary to avoid inflicting damage.

The first part will have little impact on drone architecture, other than making it quite difficult for designers to use open-source code, use the Linux* operating system, or incorporate IP that has not been certified for functional safety. These limitations will in turn make it unlikely that home-built projects will receive the certification needed to escape stringent operating restrictions—a problem that in practice will probably result in a lot of illegal drone use. But the second part—depending on what level of reliability individual agencies demand in particular applications—could require significant hardware changes: screened components, redundant design, or an entire back-up emergency flight controller.

The third part takes us back to our discussion in the previous section. The functionally safe drone must be able to recognize a dangerous situation and get itself into a safe state, whether that means avoiding an obstacle on its course, hovering until an ambiguous situation becomes clear, or finding a safe place for an emergency landing. This is a more demanding requirement than that faced by autonomous cars—it is not sufficient to simply pass control back to the pilot when the controller becomes confused. Even more than our pursuit of limited autonomy, functional safety will push the limits of sensor fusion. Imagine trying to devise and train a neural network that could identify a safe landing spot on a crowded fairground.

The AI Drone

After looking at the demands of functional safety, it is worth swinging back to the user’s point of view. Beyond aerial selfies, air races, and industrial drone clouds, what do users want from the future? One answer that comes quickly to mind is autonomous delivery: the dream that your on-line order click might be answered by a gentle whirring sound and a thump on your front porch. This is the epitome of many disparate dreams, from retail services to remote inspection and maintenance to military adventures: the fully autonomous mission.

In order to complete a mission in any of these contexts without human intervention, our drone will have to extend its function even further. To navigation, object recognition, and safety we must now add, at least for the terminal phase of the mission, the ability to understand context. It won’t be sufficient to reach the right GPS coordinates and drop the package from a low altitude—into the swimming pool, or onto the neighbor’s balcony, say. The drone will need to identify the front porch, and to avoid dropping the parcel onto the postman or into the delighted jaws of the owner’s dog. Ideally, the drone might want to message the customer and only deposit the package when the correct person opens the door.

But this level of intelligence, while it requires no new sensors, is pushing the limits of data-center machine learning—it’s well beyond what most embedded-system designers think of as practical. And in fact the best implementation may be a system partitioned between the drone and the cloud. This implies the ability to stream high-bandwidth sensor data back to the cloud—a capability that may have to wait for 5G cellular deployment. It also suggests the ability of results from a cloud-based deep-learning algorithm to inform the activity of a simpler network in the drone in real time—for example, allowing the intended customer’s spouse, but not a stranger, to approach. This sort of cooperation between linked machine-learning systems is not well studied today.

We have charted a roadmap that begins with a radio-controlled toy and ends in a distributed artificial intelligence (AI) system, part of which resides in a drone. It is easy to argue that this roadmap could predict, with a few changes, the evolutionary course of many other embedded systems, be they industrial, infrastructure, medical, or transportation. From remote operation to functional safety to intelligent autonomy, we are all on the same trip.


CATEGORIES: Embedded system / AUTHOR: Ron Wilson

4 comments to “Is that Drone the Sound of Progress?”

  1. Ron. I always look forward to and value reading your articles.

    Standing back a little and looking over the landscape, especially on reflection of your AI segment:

    Look what we expect and accomplish with 3 months of driver’s education for 15-year-olds. Contrast that with all the magic needed to have a drone deliver a package.

    And regarding delivery of packages: I’m amazed that FedEx, UPS, and even the US Mail haven’t partnered with companies like Dollar General and neighborhood convenience stores for drop off and pickup services … since people are making pretty regular trips to those locations in their normal daily lives.

    We already have great notification services to our smart phones (e.g. Amazon) for tracking.

    We always seem to be getting technologies looking for problems, rather than problems looking for technologies.

    Regards,
    Todd Marshall
    Plantersville, TX.

  2. Thank you Ron.
    A very interesting and enlightening editorial.
    Keep up the good work.
    DM.

  3. The mixed architecture (an FPGA plus a multi-core processor) will be the solution; the project is ambitious and interesting.
    I am a pilot of piston-engine planes and RC planes, and using drones to deliver parcels today is dangerous:
    – no reliability maintenance and no redundant sensors
    – no air-traffic control: no transponder
    – flying over cities and towns: prohibited airspace … .
    In my opinion, military applications will see the most important development over the decade, along with some civilian applications such as aerial surveillance.
    BB

  4. Thank you very much, Ron. I’m always looking for articles supporting hybrid FPGA+CPU architectures like Cyclone V for drones.
