Stanford Robotics

Artificial Intelligence Laboratory - Stanford University

Research on Control Architectures


Whole-Body Control Framework

The whole-body control framework (WBC) sits close to the actuator output in the overall robot architecture. It relies on real-time sensor feedback, such as joint encoders and force sensors, to produce actuator commands at high frequency.  Over the last few years, we have been working on making this approach extensible at runtime and on allowing complex model updates to be performed in a separate non-real-time process, which makes it possible to run several layers of dynamically decoupled tasks in the real-time controller.  The core algorithms and computational models involved in this work are available as open-source software in the Stanford Whole-Body Control Framework project on SourceForge.
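The high-frequency feedback loop described above can be sketched as a fixed-rate servo. This is a minimal, hypothetical illustration only: the read_sensors, compute_torques, and send_torques callbacks are placeholders for whatever the robot's I/O layer provides, not the framework's actual API.

```python
import time

def servo_loop(read_sensors, compute_torques, send_torques,
               rate_hz=1000.0, max_steps=None):
    """Fixed-rate torque servo: read feedback, compute, command.

    The callbacks are placeholders; max_steps exists only so this
    sketch terminates (a real servo runs until shutdown).
    """
    period = 1.0 / rate_hz
    next_tick = time.monotonic()
    steps = 0
    while max_steps is None or steps < max_steps:
        state = read_sensors()          # joint encoders, force sensors, ...
        tau = compute_torques(state)    # e.g. the whole-body controller
        send_torques(tau)               # command the actuators
        steps += 1
        next_tick += period
        # Sleep until the next tick to hold the loop rate steady.
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

In a hard real-time setting this loop would run under a real-time scheduler rather than relying on sleep-based timing.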

Overview Slides


The first slide, on the left, illustrates the overall architecture.  It is based on a rather classical subdivision into perception and action, with layers of representation and control that roughly correspond to the various orders of magnitude in the spatial and temporal scopes involved in producing autonomous robot behavior in compliant interaction with everyday environments.

The whole-body controller can receive parameters and goals interactively or from a plan representation such as the one produced by our elastic planning methodology. Feedback signals are mostly rather low-level sensor signals, but they can also incorporate information that depends on higher-level perception, although the latter tends to be fed into the motion generation pipeline by influencing the goals or the shape of the plan.
The second slide, on the right, shows where in the overall architecture the multi-objective operational-space task hierarchy is located.  The currently active behavior determines the tasks to run and their order in the hierarchy.  The whole-body control torques are computed in two phases.  First, each task is independently updated according to its goal and the current sensor readings.  Then, the torque contributions are summed from top to bottom in the hierarchy, such that lower-level tasks get projected into the null space of higher-level tasks: this null space is a representation of the torques that could be applied to the robot without disturbing a given task, and it can be computed using the operational space formulation.  In this manner, lower-level tasks cannot interfere with higher-level tasks.  Higher-level tasks are satisfied first, and a higher-level task can prevent a lower-level task from achieving its goal, but never the other way around.
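The top-to-bottom summation with null-space projection can be sketched as follows. This simplified version uses the kinematic projector N = I - J⁺J; the actual operational space formulation uses dynamically consistent inverses weighted by the robot's inertia matrix, which this sketch omits.

```python
import numpy as np

def hierarchy_torques(jacobians, task_torques, n_joints):
    """Sum task torques top-down, projecting each lower-priority
    contribution into the accumulated null space of all
    higher-priority tasks.

    Simplification: uses the kinematic projector N = I - pinv(J) J
    instead of the dynamically consistent projector of the
    operational space formulation.
    """
    tau = np.zeros(n_joints)
    N = np.eye(n_joints)  # accumulated null space of higher-priority tasks
    for J, tau_k in zip(jacobians, task_torques):
        tau += N @ tau_k  # lower task cannot disturb higher ones
        # Shrink the null space by this task's own constraint.
        N = N @ (np.eye(n_joints) - np.linalg.pinv(J) @ J)
    return tau
```

For example, with a top-priority task acting only on joint 0, a second task's torque on joint 0 is filtered out by the projection while its torque on joint 1 passes through unchanged.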



The third slide, on the left, illustrates that the active behavior can contain a mechanism for selecting different task hierarchies at runtime, for example by using a finite state machine with transitions that depend on sensor events.  This allows the behavior to become more encapsulated and perform compositions of actions.  For example, a door-opening behavior can contain separate task sets for the various phases involved in opening a door: approach handle, grasp handle, turn handle, push door, and release handle.  An effective mechanism for task set selection is an important prerequisite for implementing higher-level behaviors that are composed of distinct phases, such as whole-body control of locomotion using various gaits.
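The door-opening example above can be sketched as a small finite state machine that maps sensor events to task-set transitions. All names here are illustrative, not the actual Stanford WBC API.

```python
# Hypothetical transition table for a door-opening behavior:
# state -> {sensor event -> next state}
DOOR_OPENING = {
    "approach_handle": {"handle_reached": "grasp_handle"},
    "grasp_handle":    {"grasp_secure": "turn_handle"},
    "turn_handle":     {"latch_released": "push_door"},
    "push_door":       {"door_open": "release_handle"},
    "release_handle":  {},
}

class Behavior:
    """A behavior that selects task hierarchies at runtime via an FSM."""

    def __init__(self, fsm, initial_state, task_sets):
        self.fsm = fsm
        self.state = initial_state
        self.task_sets = task_sets  # state -> ordered task hierarchy

    def active_tasks(self):
        """The task hierarchy the servo should currently run."""
        return self.task_sets[self.state]

    def on_event(self, event):
        """Advance the FSM on a sensor event; unknown events are ignored."""
        self.state = self.fsm[self.state].get(event, self.state)
```

Each state carries its own ordered task hierarchy, so switching phases amounts to swapping which hierarchy the real-time servo executes.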
The final slide, on the right, is a simplified diagram of the three processes involved in our multi-rate update approach for the whole-body controller: the real-time servo process performs the torque computation and summation over the task hierarchy, the non-real-time high-priority model process updates the task dynamic models based on the current robot state, and the low-priority user-interface process allows interaction with the running servo process.  The whole-body behavior library is shared between the model and servo processes, but each keeps its own copies of the behavior and task instances, which eliminates the need for mutexes.  All communication employs asynchronous message passing.
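The servo-side half of this multi-rate scheme can be sketched as follows: the servo keeps using its last model until the slower model process publishes a fresh copy through a non-blocking message queue, so the real-time loop never waits on a lock. This is an illustrative single-process sketch using threads' queue machinery; the real framework runs these as separate OS processes.

```python
import queue

class ServoSide:
    """Servo-side receiver in a multi-rate update scheme.

    Owns a private copy of the model; the model process sends fresh
    copies asynchronously, so no mutex guards shared state.
    """

    def __init__(self, initial_model):
        self.model = initial_model
        self.inbox = queue.Queue()  # filled by the model process

    def tick(self):
        """One servo cycle: adopt the newest model, if any, then use it."""
        try:
            while True:  # drain pending updates without ever blocking
                self.model = self.inbox.get_nowait()
        except queue.Empty:
            pass
        return self.model
```

Because get_nowait never blocks, a late or absent model update simply leaves the servo computing torques against its previous model copy.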


References

Luis Sentis. Synthesis and Control of Whole-Body Behaviors in Humanoid Systems. PhD thesis, Artificial Intelligence Laboratory, Department of Computer Science, Stanford University, Stanford, USA, July 2007.

Oussama Khatib. A Unified Approach for Motion and Force Control of Robot Manipulators: The Operational Space Formulation. IEEE Journal of Robotics and Automation. Vol. RA-3, No. 1, pp. 43-53, February 1987.