Sampling-based motion planning algorithms such as RRT* and PRM* provide computationally tractable ways to plan trajectories around complicated obstacles in high dimensions. Most of the theoretical properties of these algorithms, such as probabilistic feasibility and asymptotic optimality, have been developed for the deterministic case in which a robot has perfect knowledge of its state and environment at all times. I will discuss the challenges of extending similar results and algorithms to the more realistic stochastic setting, in which the robot has motion uncertainty and only a noisy sensor with which to localize itself, and seeks a cost-optimal trajectory while maintaining a low probability of collision. From a computational standpoint, path collision probabilities must be computed quickly, yet current approximations are extremely crude; from an optimization standpoint, solving the chance-constrained problem formulation is much more challenging than the collision-free formulation.
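As an illustration of why path collision probabilities are expensive to compute, here is a naive Monte Carlo sketch; the obstacle geometry, noise model, and sample count are illustrative assumptions, not from the talk.

```python
import numpy as np

def collision_probability(path, obstacles, sigma, n_samples=5000, rng=None):
    """Naive Monte Carlo estimate of the probability that a noisy
    execution of a nominal path hits any circular obstacle.

    path:      (T, 2) array of nominal waypoints
    obstacles: list of (center, radius) pairs
    sigma:     std. dev. of isotropic Gaussian waypoint uncertainty
    """
    if rng is None:
        rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_samples):
        noisy = path + rng.normal(0.0, sigma, size=path.shape)
        if any(np.any(np.linalg.norm(noisy - np.asarray(c), axis=1) < r)
               for c, r in obstacles):
            hits += 1
    return hits / n_samples

# Straight-line path passing 0.5 m from a 0.3 m-radius obstacle
path = np.column_stack([np.linspace(0.0, 1.0, 20), np.zeros(20)])
p = collision_probability(path, [((0.5, 0.5), 0.3)], sigma=0.1)
```

Thousands of samples per candidate path are far too slow inside a planner's inner loop; the common fast alternative, summing per-waypoint collision probabilities via Boole's inequality, over-counts correlated events and is one of the crude approximations alluded to above.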
We have designed a low-cost, open-hardware haptic device, called Hapkit, in order to provide a hands-on laboratory experience in an introductory online haptics course. Hapkit is a one-degree-of-freedom kinesthetic haptic device that allows users to input motions and feel programmed forces. We piloted Hapkit in an online course in Autumn 2013, and it has since been used in an introductory controls class as well as a graduate-level haptics class. The part files, assembly instructions, and template code are all open-source and can be found at http://hapkit.stanford.edu/.
The ability to react autonomously and robustly to dynamically changing environments is an essential feature for robots that are envisioned to work in human environments. In my talk, I will present a method called Stable Estimator of Dynamical Systems (SEDS), a framework that allows fast learning of robot reaching motions from a small set of demonstrations. SEDS has four main features: 1) it can produce human-like movements, 2) it has guaranteed global asymptotic stability at the target point (if the target is reachable), 3) it is inherently robust to perturbations, and 4) it can instantly adapt to changes in dynamic environments. I will showcase the performance of SEDS in a number of robot experiments, including reaching a target in a dynamic environment, playing mini-golf, dodging fast-moving objects, and catching flying objects.
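The stability and instant-adaptation properties can be seen in a minimal stand-in: a linear dynamical system with a negative-definite gain matrix. SEDS itself learns a nonlinear Gaussian-mixture model of the motion under Lyapunov stability constraints; the system below is only a toy sketch of the shared idea, with made-up gains.

```python
import numpy as np

def step(x, target, A, dt=0.01):
    """One Euler step of x_dot = A (x - target). With A negative
    definite, the target is globally asymptotically stable, so
    perturbing x or moving the target needs no replanning."""
    return x + dt * A @ (x - target)

A = np.array([[-4.0, 0.0], [0.0, -4.0]])   # negative definite -> stable
x = np.array([1.0, 1.0])
target = np.array([0.0, 0.0])
for i in range(2000):
    if i == 500:
        target = np.array([0.5, -0.5])     # target moves mid-motion
    x = step(x, target, A)
```

Because the motion is generated by integrating the dynamical system online rather than tracking a precomputed trajectory, the state simply flows toward the new target the moment it moves.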
A radiotranslucent, single-transducer 3D ultrasound probe for volumetric imaging would address challenges in both imaging and guidance. Opportunities for such a device are increasing, particularly with the growing potential of targeted contrast-enhanced ultrasound in cancer detection. In the Salisbury Robotics Lab we have developed a method for single-transducer 3D ultrasound that leverages simple mechanical design and advanced image processing to meet these needs. The probe has applications in robotically guided radiation therapy and also represents a low-cost alternative to current methods of volumetric ultrasound.
The world’s oceans span a significant proportion of Earth’s surface and house a plethora of amazing flora and fauna. Exploring and monitoring these oceans, however, has remained expensive and challenging because human divers can explore these environments only for short periods of time and within limited depths. To facilitate such efforts, we propose to design a bimanual robot teleoperated with haptic feedback. Design methodologies for robotic manipulators are well documented in the literature; however, generalized methodologies for robotic systems with complex properties, such as floating bases and branching architectures, remain a challenge. The talk will focus on our design analyses of such systems.
When we use a tool to explore or manipulate an object, friction between the surface of the tool and the finger pads generates skin stretch cues that are related to the interaction forces. Previous work showed that stiffness discrimination with skin stretch cues is nearly as accurate as with full kinesthetic cues. In this study, participants performed a teleoperated palpation task to determine the orientation of a stiff region embedded in surrounding artificial tissue, under five feedback conditions: force, reduced-gain force, vibration, graphical, and skin stretch feedback. When participants received tactor-induced skin stretch feedback, they performed as well as with force feedback, with no increase in task completion time.
An underactuated, compliant, tendon-driven hand design is presented for underwater mobile manipulation. It is designed to perform many of the same tasks as a human diver, so that marine researchers can dexterously access previously unexplored coral. Performing these application-specific tasks underwater can be quite challenging. Light suction flow at the fingertips helps mitigate repulsive water-object-hand interactions when grasping underwater, especially with light and small objects. We have developed a simulation model of the hand grasping with suction underwater to guide our future design work. For more information about the project visit: http://www.redsearobotics.net/
I will talk about the different components that go into developing a musculoskeletal model for studying human movement, and present ways to validate such a model in the context of its intended use.
I will talk about a new tool we have been creating for calculating the metabolic cost of our simulations of human movement. I will review the different components that go into calculating metabolic cost and discuss its usefulness in understanding walking with heavy loads and in developing assistive devices.
Mild traumatic brain injuries (mTBI) from repeated head collisions have been linked to neurodegeneration in athletes and soldiers. Human injury tolerance is complex and poorly understood, making identification and prevention ineffective. Using a novel instrumented mouthguard, we measured human skull kinematics during head collisions, including the first complete six-degree-of-freedom measurements of injury. We show that classical injury criteria could not describe the wide spatiotemporal variability of head collision biomechanics. Injury tolerance appears to vary by direction, as a weighted multidimensional classifier was required to unambiguously identify injury. Finite element simulations predicted tissue deformations in the corpus callosum and brainstem consistent with observed cognitive impairment and loss of consciousness. Our findings support the use of high-dimensional measurement devices as a clinically translatable means of real-time injury identification to prevent repeat trauma and neurodegeneration.
Dry adhesives are attractive for perching applications due to their extremely fast passive response times and ease of detachment. To generate adhesion, the proper ratio of shear to normal force must be maintained; this motivates an opposed-grip design that uses internal forces to enable inverted attachment.
We have been developing a controller to drive human musculoskeletal models to track experimental data with minimum effort. The desired properties of this new controller include robustness, flexibility, efficiency, and generalizability, as well as optimality. I will briefly describe our problem formulation and the methods we have tried for solving it. I will also show some experimental results and discuss the remaining challenges.
To ensure safe and reliable operation in a robotic oil drilling system, it is essential to detect contact events such as impacts and slips between end-effectors and workpieces. In this challenging application, where high forces are used to manipulate heavy metal pipes in noisy environments, acoustic emission (AE) sensors offer a promising contact sensing solution. Real-time AE signal features are used to create a multinomial contact event classifier. The sensitivity of the signal features to a variety of contact events, including two types of slip, is presented. Results indicate that the classifier is able to robustly and dynamically classify contact events with >90% accuracy using a small set of AE signal features.
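The feature-based classification step can be sketched in a few lines. The features (RMS energy, peak amplitude, spectral centroid), class labels, and data below are synthetic stand-ins, and a nearest-centroid rule substitutes for the actual multinomial classifier in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
classes = ["impact", "slip", "no-contact"]
# Hypothetical mean feature vectors per class (made-up values)
centers = np.array([[5.0, 8.0, 2.0],
                    [2.0, 1.5, 6.0],
                    [0.2, 0.3, 0.5]])

def train_centroids(X, y, n_classes):
    """Nearest-centroid stand-in for the multinomial classifier."""
    return np.array([X[y == k].mean(axis=0) for k in range(n_classes)])

def classify(features, centroids):
    """Assign an event window to the class with the closest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - features, axis=1)))

# Synthetic training windows: 50 noisy samples per class
X = np.vstack([c + 0.3 * rng.standard_normal((50, 3)) for c in centers])
y = np.repeat(np.arange(3), 50)
centroids = train_centroids(X, y, 3)
acc = np.mean([classify(x, centroids) == t for x, t in zip(X, y)])
```

The point of the sketch is that once discriminative features are extracted from the raw AE stream, the per-window classification itself is cheap enough to run in real time.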
Reinforcement learning by means of policy search offers the promise of automatically learning control policies for complex tasks in noisy or partially observed environments. However, successful application of policy search typically requires designing a low-dimensional, compact representation of the policy that can be efficiently optimized with current methods. Unfortunately, much of the intelligence of the controller is often contained in this representation, rather than in the learned parameters. In this talk, I will describe my recent work on guided policy search algorithms, which use trajectory optimization to guide the policy search into regions of high reward and allow much more complex policies to be learned. Guided policy search can be used to learn general-purpose neural network controllers for tasks such as locomotion without manual engineering of the controller representation. I will present recent results for learning policies for simulated bipedal locomotion and push recovery, and discuss some ongoing work on a new version of the guided policy search algorithm that does not require a model of the system dynamics.
During manual interactions, we experience both kinesthetic forces and tactile sensations. Friction and normal force between the fingerpads and the tool/interaction surfaces cause shear and normal deformation of the skin. Capitalizing on this observation, we designed a 3-degree-of-freedom (DoF) tactile device that is grasped by a user and can render both tangential skin stretch and normal deformation on the skin of the user's fingerpads. Tactile feedback from the device is delivered in a manner consistent with natural tactile cues from manual interaction. An experiment assessed the accuracy with which users can locate the center of a contoured hole on a virtual surface. The task was completed under four conditions: the cases of skin deformation and force feedback, with both 3- and 1-DoF feedback in each case. With 3-DoF feedback, users located the hole faster and more accurately than with 1-DoF feedback, for both force and skin deformation feedback. These results indicated that users were able to interpret the additional DoF cues provided by our 3-DoF tactile device to improve task performance.
Controlling a pin-array haptic device is challenging in part because of the many control inputs required. This presentation discusses the application of singular value decomposition (SVD) within a feedback control system, called the SVD System, to control numerous subsystems with a reduced number of control inputs. The subsystems are coupled using a row-column structure to permit mn subsystems to be controlled using m+n inputs. The SVD System permits simultaneous control of every subsystem, which increases the convergence rate by an order of magnitude compared with previous methods.
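The row-column idea can be illustrated with a rank-1 decomposition: an m x n pattern that factors into an outer product needs only m row drives and n column drives. The pattern below is a made-up example, and this sketch shows only the decomposition, not the feedback loop the SVD System wraps around it.

```python
import numpy as np

# Desired pin-height pattern for a 3 x 3 pin array (illustrative values;
# chosen to be exactly rank-1 so one row/column pair suffices).
H = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])

U, s, Vt = np.linalg.svd(H)
# One row-drive vector (m values) and one column-drive vector (n values)
row_drive = U[:, 0] * np.sqrt(s[0])
col_drive = Vt[0, :] * np.sqrt(s[0])
# m*n pin heights reconstructed from only m+n control inputs
rendered = np.outer(row_drive, col_drive)
```

A general full-rank pattern requires several such rank-1 terms, one per significant singular value, so the reduction from mn to m+n inputs trades exactness for input count on arbitrary patterns.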
Recent advances in combining haptics and virtual simulation with functional magnetic resonance imaging (fMRI) have enabled experiments that map complex unconstrained motions onto the brain. Reliably mapping neural responses to complex motor tasks, however, requires careful haptics engineering to avoid imaging artifacts, along with intuitive experiments that elicit reliable responses. In this talk, I will discuss our work in developing fMRI-compatible haptic interfaces and demonstrate that these interfaces avoid common neuroimaging artifacts. I will also demonstrate how well-designed experiments can help overcome limits on fMRI's indirect and slow measurements of neural activity. Finally, I will show that our interface and experiment protocol can reliably elicit and localize heterogeneous neural activation in motor, pre-motor, and somatosensory cortex.
Advances in robotic technology have recently enabled the development of wearable devices aimed at assisting human movement. A major challenge in their development is characterizing how these devices interact with the neuromuscular system. My work addresses this challenge by creating accurate simulations of a standing long jump that enable the study of how external actuation of lower body joints affects neuromuscular performance. A planar, six-segment (foot, shank, thigh, head-torso, upper arm, and lower arm) model was implemented in OpenSim, a musculoskeletal modeling package, and was driven by physiologically accurate torque actuators at the ankle, knee, hip, and shoulder. Dynamic optimization was used to solve for the set of torque-time profiles that maximize jumping distance while respecting joint limits and minimizing slipping during takeoff. The simulations were then augmented with an actuator that can provide up to 50 Nm of extension torque at the ankle, knee, or hip, and the optimization was performed again. The optimization of the unassisted model yielded a simulated motion that captured salient features of the kinematics and joint torques observed in experimental data of standing long jumps. Optimization of the model augmented with an actuator at the ankle, knee, or hip predicts that extra extension torque at the knee would yield the best improvement. This work serves as a first step toward a framework for simulation-based design of augmentative devices.
Maintaining humanoid robot stability in unstructured environments is nontrivial because robots lack human-like tactile sensing and require complex task-specific controllers to integrate information from multiple sensors. To deploy humanoid robots in cluttered and unstructured environments such as disaster sites, it is necessary to develop advanced techniques in both locomotion and control. We propose to incorporate a pair of actuated smart staffs with vision and force sensing that transform biped humanoids into tripeds or quadrupeds, or more generally, SupraPeds. The SupraPeds concept not only improves the stability of humanoid robots while traversing rough terrain but also retains their manipulation capabilities. A unified task-oriented whole-body control formulation is also proposed to enable control of task, posture, constraints, and balance in multi-contact situations. Simulation results demonstrate that the proposed control framework can efficiently handle multi-contact locomotion in 3D unstructured environments.
Despite the apparent effortlessness with which we control our limbs, executing crisp and precise movements presents a complex control problem to the nervous system. During visually-guided reaching movements, neurons in motor cortex drive muscle activity via the spinal cord. Our understanding of how neural circuits generate the patterns of activity required to produce the desired movement is improving, but little is known regarding how motor cortex utilizes proprioceptive and visual feedback to adjust movements online. In order to study the neural mechanisms of motor feedback control, we aim to record neural activity in motor cortex during reaching movements in a simple virtual environment. A haptic feedback device renders visually- and haptically-defined obstacles and unexpected step-force perturbations, which will allow us to probe the cortical dynamics of proprioceptive feedback and the neural implementation of motor control policy.
Impedance-type kinesthetic haptic displays aim to render arbitrary desired dynamics to a human operator using force feedback. To effectively render realistic virtual environments, the difference between desired and rendered dynamics must be small. In this talk, we analyze the closed-loop dynamics of haptic displays, considering the effects of time delay and low-pass filtering. We identify the parameters important for accuracy in terms of “effective impedances,” which express the closed-loop impedance as physical analogs. Our results establish bandwidth limits for rendering effective stiffness and damping and quantify the degradation caused by time delay. Experimental data gathered with a Phantom Premium validate the theoretical analysis.
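A small numeric sketch of the effective-impedance idea: a virtual spring rendered with loop delay behaves at frequency w like a softer spring plus a negative (energy-injecting) damper. The gain and delay values below are illustrative, not from the talk.

```python
import numpy as np

def effective_impedance(K, tau, omega):
    """Effective stiffness and damping of a virtual spring rendered
    with loop delay tau, i.e. F = K * x(t - tau), at frequency omega.

    In the frequency domain, F/X = K e^{-j w tau} = k_eff + j w b_eff:
      k_eff = K cos(w tau)        (stiffness softens with delay)
      b_eff = -K sin(w tau) / w   (delay looks like negative damping)
    """
    z = K * np.exp(-1j * omega * tau)
    k_eff = z.real
    b_eff = z.imag / omega
    return k_eff, b_eff

K, tau = 1000.0, 0.001                  # 1 kN/m spring, 1 ms loop delay
k_eff, b_eff = effective_impedance(K, tau, omega=2 * np.pi * 10)
```

For small w*tau, b_eff is approximately -K*tau, so even a 1 ms delay turns a 1 kN/m wall into a source of roughly 1 N·s/m of negative damping, which is one way delay limits the renderable impedance range.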
On the power budget of a laptop, the brain and spinal cord coordinate the movements of the human body. By reducing the power consumed in computation, the neuromorphic approach of emulating the brain's spiking neurons is a step toward building autonomous, biomimetic robots. With low-power, analog, spiking silicon neurons, we controlled a physical, three-degree-of-freedom robot. Our approach is to construct a force-based, task-oriented controller and map its functional components onto the steady-state spiking activity of our silicon neuron hardware. Because the controller is force-based, it is compliant to external forces and safe for the operator and the environment. Because it is task-oriented, it is robust to unpredictable disturbances. We obtain the closed-form function of this controller and simplify the task of computing it by factorizing it into a linear combination of several sub-functions. Each sub-function is then regressed onto the steady-state spiking response of a pool of silicon neurons. In operation, each pool of neurons is fed the current robot configuration and desired task forces, and computes the joint torques necessary for the robot. Our neuromorphic system runs in real time and is the first such system to control a robot with three or more degrees of freedom.
Designed by Samir Menon.
© Stanford University.
Last updated on Apr 1st, 2014