A Gesture Toward Change

Recall for a moment a scene—you’ve probably witnessed something like it at trade shows or on video dozens of times. A robot arm moves purposefully from its rest position. It swoops down on an object—a sphere, say—plucks it up, holds it aloft, replaces it, and then returns to rest. Each motion is fast, smooth, and, well, mechanical.

Now imagine, if you will, another scene. It is 1937, in a darkened theater in London. A young Laurence Olivier, his legendary career still ahead of him, reaches out, hesitates, and then, reluctantly, as if under a regretful spell, takes from another player a skull.

“Alas, poor Yorick! I knew him, Horatio …”

One a motion. The other a gesture (Figure 1). This article is about the difference, and the profound change that difference has triggered in the architecture of motor control.

Figure 1. Some objects are harder to pick up than others.

Motion and Gesture

Obviously we are using both words in very restricted ways. Both are descriptions of trajectories—paths defined by continuous sets of positions and times. By motion we mean a trajectory determined only by its endpoints and the physical characteristics—the degrees of freedom and dynamics—of the device under control: the robot arm, drone, self-driving car, or whatever. When a robot repeats a motion you expect it to duplicate the trajectory exactly.

But in a gesture, the trajectory may be determined by outside constraints. For example, a trajectory might be chosen to keep acceleration below some set value to avoid damaging a payload—maybe a full cup of hot coffee. Or the trajectory might be optimized to minimize some parameter, such as energy consumption or noise. Or the trajectory might be complicated by the need to avoid obstacles.

There are several important elaborations on this last point. In legacy robotics it is assumed that the locations of the object to be manipulated, and of other objects in its environment, are static. But many systems are now controlled at least in part by machine vision. Camera input feeds an object-recognition module that defines the location, orientation, and trajectory of the intended target and of other objects in the vicinity, freeing the robot to work in much more dynamic environments.

A special case is becoming increasingly important: collaborative robotics. In this scenario, multiple robots—or, more challengingly, robots and humans—work together in the same field of motion. The interaction may be minimal, as in slightly overlapping stations on an assembly line. Or it may be intimate, as in a robot surgical assistant. Either way, collaboration presents new design issues. Now there may be complicated obstacles performing complex, unpredictable motions within the robot’s field. And if a human is involved, there will be functional safety issues: you, the designer, will have to prove that no harm can come to the people around the machine.

With humans there is yet another complexity—one that may not yet be fully included in studies of collaborative robots. Humans, consciously or subconsciously, ascribe meaning to gestures. The mechanical motion of a simple robot arm carries a strong sense of “I am a machine, and unaware of you.” Try imitating it in front of a mirror to see what it conveys. But when a mechanical device performs a gesture—using a trajectory not obviously dictated by its task—our brains attempt to infer meaning and even emotion. This peculiarity can be anything from creepy to amusing to distracting to outright dangerous. It can also be used intentionally to transfer information from the machine to the human.

All this is interesting theory. But what is the point for system designers? Simply, it is that machine activity is evolving from simple motions to gestures. As it does so, the software and hardware lying between the sensors and the motor windings must change. And depending on how the system is partitioned, especially in Internet of Things (IoT) implementations, even the motor control blocks closest to the windings may become quite different.

A Gesture Pipeline

We can describe the process of making gestures in terms of a pipeline, with stages moving from highly abstract—situations and goals—to highly concrete—motor winding currents and shaft-encoder outputs (Figure 2). The definitions are useful but somewhat arbitrary.

Figure 2. A gesture pipeline includes far more than just a simple motor controller.

The first stage is situation awareness. In this stage artificial intelligence—or perhaps a human—maintains a dynamic, 3D model of the environment. Ideally, it should include an assessment of the intent of movable objects so it can assign probabilities to their possible movements. And it must include objectives: is the intent to manipulate an object? Under what constraints? What, if anything, should the gesture communicate to observers?

The next stage in our pipeline takes data from the awareness stage and selects a trajectory to fulfill the objectives. Think of a self-driving car choosing a course and speed that gets to Point B without risking a collision, enraging other drivers, breaking laws, or terrifying pedestrians. Or envision a collaborative robot trying to pick up a part through a tangle of other robot and human limbs. At the same time, the trajectory may have additional constraints such as minimizing energy, keeping acceleration below a limit, or describing a particular figure in space.

In the third stage, the system must map the trajectory into a time-dependent series of commands that can be executed by the motor controllers. This can be done at many different levels of abstraction, ranging from decomposing the trajectory into a myriad of points to modeling it as a smooth sequence of complex arcs.

Finally, there is the motor controller. This stage must take in whatever the previous stage has rendered and convert it into currents on motor windings that cause the device to follow the trajectory. It is generally a feedback control system.

Zooming In

You could of course write a book about any one of these stages. But we are going to limit ourselves to some comments about the last two. They are interesting, closely related, and the most influenced by the advent of the IoT.

The end result of the pipeline, of course, has to be motion. And that means currents through motor windings, which in turn means signals to motor controllers: torque levels to field-oriented controllers, pulse trains to stepper motors, and so forth. But how do we get a trajectory—which may have been created in a data center miles away—into the right form at the input to the motor controller?

For simple mechanical motion there is not much of a problem. The trajectory can be specified simply as an endpoint for each of the device’s degrees of freedom. Software in the motor controller can just implement a classical proportional-integral-derivative (PID) control loop (Figure 3) that will ramp up toward maximum speed until it approaches the next endpoint, then decelerate to land on the endpoint.

Figure 3. A simple motor controller translates a command for a new position into current waveforms on a permanent-magnet synchronous motor.
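
To make that concrete, here is a minimal single-axis sketch in C. The gains, the 1 kHz update rate, and the unit-inertia plant used to exercise the loop are illustrative assumptions, not values from any particular controller.

/* Minimal single-axis PID position loop, as described above.
 * Gains, update rate, and the unit-inertia test plant are
 * illustrative assumptions, not values from any real controller. */
#include <stdio.h>

typedef struct {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double integral;     /* accumulated position error */
    double prev_error;   /* error from the previous update */
    double dt;           /* control-loop period, seconds */
} pid_state;

/* One control update: position error in, torque command out. */
static double pid_update(pid_state *pid, double setpoint, double measured)
{
    double error = setpoint - measured;
    pid->integral += error * pid->dt;
    double derivative = (error - pid->prev_error) / pid->dt;
    pid->prev_error = error;
    return pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
}

int main(void)
{
    pid_state pid = { .kp = 40.0, .ki = 5.0, .kd = 8.0, .dt = 0.001 };
    double pos = 0.0, vel = 0.0, target = 1.0;   /* move one radian */

    pid.prev_error = target - pos;   /* avoid a derivative kick on step one */

    for (int step = 0; step < 3000; ++step) {    /* 3 seconds at 1 kHz */
        double torque = pid_update(&pid, target, pos);
        vel += torque * pid.dt;                  /* unit inertia, no friction */
        pos += vel * pid.dt;
        if (step % 500 == 0)
            printf("t=%.1fs  pos=%.3f\n", step * pid.dt, pos);
    }
    return 0;
}

A real controller adds output limits, integrator anti-windup, and usually an inner current loop, but the structure is the same.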

But what about more complex trajectories? If only the path matters and the speed along the path is irrelevant, then you could break the trajectory up into little linear segments and proceed as above. The PID controller would do its best to track the piecewise-linear waveform. Depending on the closed-loop transfer functions and the update rate, this could result in a poor or a good approximation to the trajectory—or a noisy, energy-wasting motion if the control loop is fast enough to reproduce the abrupt changes at the endpoints.
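
Here is a sketch of that decomposition, under the same sort of assumptions: a handful of made-up (time, position) waypoints is interpolated into one setpoint per control tick for a loop like the one above to chase. The abrupt slope changes at the waypoints are exactly where the noisy, energy-wasting behavior shows up.

/* Piecewise-linear setpoint generation: a trajectory arrives as a few
 * (time, position) waypoints and is interpolated into one setpoint per
 * control tick. Waypoint values are made up for illustration. */
#include <stdio.h>

typedef struct { double t, pos; } waypoint;

/* Linear interpolation between the two waypoints bracketing time t. */
static double setpoint_at(const waypoint *wp, int n, double t)
{
    if (t <= wp[0].t) return wp[0].pos;
    for (int i = 1; i < n; ++i) {
        if (t <= wp[i].t) {
            double frac = (t - wp[i - 1].t) / (wp[i].t - wp[i - 1].t);
            return wp[i - 1].pos + frac * (wp[i].pos - wp[i - 1].pos);
        }
    }
    return wp[n - 1].pos;   /* hold the last endpoint */
}

int main(void)
{
    const waypoint path[] = { {0.0, 0.0}, {0.5, 0.2}, {1.0, 0.8}, {1.5, 1.0} };
    const int n = sizeof path / sizeof path[0];

    /* At each waypoint the slope changes abruptly; a fast control loop
     * will faithfully reproduce those corners. */
    for (double t = 0.0; t <= 1.5; t += 0.25)
        printf("t=%.2f  setpoint=%.3f\n", t, setpoint_at(path, n, t));
    return 0;
}

The finer the segments, the better the path approximation, but the more data the link to the trajectory planner has to carry, which is the tension the rest of this section is about.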

This approach becomes a problem if the bandwidth between the program generating the trajectory and the motor controller is limited, or if the latency is unpredictable. It also suffers from an inherent limitation: the controller only knows about the next endpoint. It can’t make optimizations that extend further into the trajectory. All such optimizations have to be baked into the trajectory when it is planned. It would be better if the local controller, rather than being just a simple PID loop, could look ahead and optimize.

Such ideas can result in very large increases in the computing load for the motor controller. Michael Randt, CEO of motion-control specialist Trinamic, points out that unaided microcontroller units (MCUs) may already be overburdened by field-oriented control and its matrix arithmetic. But turning to mixed-signal ASICs or to FPGAs can create the headroom to accommodate more demanding tasks.

One approach to such look-ahead optimization was described by Digi-Key vice president of applications engineering Randall Restle in a paper at the 2016 Intel® Developer Forum. The idea is to extract a series of points and velocities on the trajectory, starting from the current position. Fit a segment of a known-to-be-optimal curve to the points. Then take the velocity and acceleration commands for the current point from that curve. In this way the controller is always following a locally optimal path along the trajectory.

This process could be very computationally intensive. But Restle’s paper pointed out that the hard work had already been done: long before the digital age, designers of mechanical cams used fifth-order polynomials to create a family of optimal cam shapes. Those shapes can be parameterized, put in tables, and used to fit the next few points on the trajectory.
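
As one illustration of the idea (not necessarily the exact curves Restle’s paper tabulated), the classic 3-4-5 fifth-order cam law s(x) = 10x^3 - 15x^4 + 6x^5 runs from 0 to 1 with zero velocity and zero acceleration at both ends. Scaled to span the next stretch of the trajectory, it hands the controller smooth position, velocity, and acceleration commands. The move distance and duration below are made up.

/* Fifth-order cam-law sketch: the 3-4-5 polynomial and its first two
 * derivatives, scaled to an assumed move distance and duration. */
#include <stdio.h>

/* Normalized 3-4-5 polynomial: s(x) = 10x^3 - 15x^4 + 6x^5. */
static double s_pos(double x)  { return ((6*x - 15)*x + 10)*x*x*x; }
static double s_vel(double x)  { return ((30*x - 60)*x + 30)*x*x; }
static double s_acc(double x)  { return ((120*x - 180)*x + 60)*x; }

int main(void)
{
    double dist = 0.5;   /* radians to travel on this segment (assumed) */
    double T    = 0.8;   /* seconds allotted to the segment (assumed)   */

    for (double t = 0.0; t <= T + 1e-9; t += T / 8) {
        double x = t / T;                        /* normalized time 0..1 */
        printf("t=%.2f  pos=%.4f  vel=%.4f  acc=%.4f\n",
               t,
               dist * s_pos(x),                  /* position command     */
               dist * s_vel(x) / T,              /* velocity command     */
               dist * s_acc(x) / (T * T));       /* acceleration command */
    }
    return 0;
}

Because velocity and acceleration are zero at both ends, segments can be chained without the jerk spikes that piecewise-linear setpoints produce.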

You can generalize from this approach. Working in state space, so that both positions and velocities are captured, you can mathematically minimize a cost function along a path through the space, controlling the motion in a near-ideal way. This continuous optimization presents a big computing burden, but it is within the reach of ASIC or FPGA accelerators.

A further step, according to Intel’s systems solution engineering manager Ben Jeppersen, is to move toward model-predictive control (MPC). A technique related to the Kalman filters used for sensorless control of permanent-magnet synchronous motors, MPC maintains a mathematical model of the full system. This might include just the device under control, or it might extend into the environment. Using the model, the control system can test all of its options for the next step along the trajectory and pick the one that appears locally optimal. Again, the computational load can be formidable. And outside of industrial process control, where the technique is familiar, MPC is today more widely discussed in academia than in industry.
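
A deliberately crude sketch of the receding-horizon idea, which also illustrates the state-space cost minimization described above: at every tick the controller simulates an assumed unit-inertia model forward over a short horizon for each candidate torque, scores tracking error, velocity, and effort, applies the best first move, and then replans. The model, horizon, candidate set, and weights are all illustrative.

/* Crude model-predictive-control sketch. Everything here is illustrative:
 * a unit-inertia model, a constant-hold candidate set, and toy weights. */
#include <stdio.h>
#include <float.h>

#define DT      0.01                 /* model and control step, seconds */
#define HORIZON 20                   /* prediction steps per candidate  */

static const double candidates[] = { -2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0 };
#define NCAND (sizeof candidates / sizeof candidates[0])

/* Predicted cost of holding torque u for HORIZON steps from (pos, vel). */
static double predict_cost(double pos, double vel, double u, double target)
{
    double cost = 0.0;
    for (int k = 0; k < HORIZON; ++k) {
        vel += u * DT;                            /* unit-inertia model  */
        pos += vel * DT;
        double err = target - pos;
        cost += err * err + 0.1 * vel * vel + 0.01 * u * u;
    }
    return cost;
}

int main(void)
{
    double pos = 0.0, vel = 0.0, target = 1.0;

    for (int step = 0; step < 400; ++step) {      /* 4 seconds of motion */
        double best_u = 0.0, best_cost = DBL_MAX;
        for (size_t i = 0; i < NCAND; ++i) {      /* test every option   */
            double c = predict_cost(pos, vel, candidates[i], target);
            if (c < best_cost) { best_cost = c; best_u = candidates[i]; }
        }
        vel += best_u * DT;                       /* apply the first move */
        pos += vel * DT;
        if (step % 100 == 0)
            printf("t=%.1fs  pos=%.3f  vel=%.3f  u=%+.1f\n",
                   step * DT, pos, vel, best_u);
    }
    return 0;
}

A practical implementation would search richer input sequences rather than a constant hold, but even this toy version shows why the load grows so quickly: the work scales with the size of the candidate set and the length of the horizon, every tick.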

All of these techniques are ways of combining a preselected trajectory with position and velocity feedback from the device under control to approximate optimal control of the device’s motors. By working in state space or employing MPC it is possible to optimize motion along the trajectory for variables that cannot be measured directly, such as maximum acceleration or jerk, avoidance of resonances, motor heating, or other physical and predictable quantities.

By shifting computation of the control loop to powerful local hardware, these techniques also fit into an IoT scenario employing edge computing. Trajectory selection can be left in the cloud or in a centralized controller. If the trajectory is communicated efficiently (as polynomial segments, for instance), the connection becomes tolerant of limited bandwidth and unpredictable latency. Near-real-time transactions, such as between sensors and the motion controller, can be handled on a local industrial network using Time-Sensitive Networking (TSN). With enough local computing power, data from local cameras can even be used to update the locations and motions of external objects and modify the trajectory to accommodate them.
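
A back-of-the-envelope comparison suggests why. The message layout below is hypothetical, not any real protocol, but it shows the rough ratio, per axis, between shipping a one-second move as a single fifth-order polynomial segment and streaming setpoints at 1 kHz.

/* Hypothetical trajectory-segment message versus streamed setpoints. */
#include <stdio.h>

typedef struct {
    double coeff[6];    /* fifth-order polynomial coefficients */
    double duration;    /* seconds this segment is valid */
} traj_segment;         /* one message per segment, per axis */

int main(void)
{
    /* A one-second move described two ways: */
    double segment_bytes  = sizeof(traj_segment);    /* one polynomial segment */
    double setpoint_bytes = 1000 * sizeof(double);   /* 1 kHz streamed setpoints */

    printf("polynomial segment: %.0f bytes\n", segment_bytes);
    printf("streamed setpoints: %.0f bytes\n", setpoint_bytes);
    printf("ratio: %.0fx\n", setpoint_bytes / segment_bytes);
    return 0;
}

On typical builds that works out to 56 bytes per segment versus 8,000 bytes of streamed setpoints, which is why the cloud link can afford to be slow and jittery while the local loop stays fast.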

These techniques, driven by the need to create gestures, push motor control deep into the realm of applied mathematics. But Trinamic’s Randt points out that it is still engineering, not math. From choosing what to control to selecting algorithms to designing an implementation on real hardware, success comes from knowing your way around electric motors. There is no substitute for that.

 

