Seminar #5, Dec 4th 2014.


Talk 1. Motion tracking with 3D shape, color, and motion cues

David Held, Thrun and Savarese Labs

Although object tracking has been studied for decades, real-time tracking algorithms often suffer from low accuracy and poor robustness when confronted with difficult, real-world data. We present a tracker that combines 3D shape, color (when available), and motion cues to accurately track moving objects in real-time. Our tracker allocates computational effort based on the shape of the posterior distribution. Starting with a coarse approximation to the posterior, the tracker successively refines this distribution, increasing tracking accuracy over time. The tracker can thus be run for any amount of time, after which the current approximation to the posterior is returned. Even at a minimum runtime of 0.7 milliseconds, our method outperforms all of the baseline methods of similar speed by at least 10%. If our tracker is allowed to run for longer, the accuracy continues to improve, and it continues to outperform all baseline methods. Our tracker is thus an anytime algorithm, allowing speed or accuracy to be optimized based on the needs of the application.
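
A minimal sketch of the anytime, coarse-to-fine idea described above, using a toy grid search over 2D translations in place of the actual 3D shape, color, and motion measurement model; all names and parameters here are illustrative assumptions:

    import time
    import numpy as np

    def likelihood(dx, dy):
        # Placeholder measurement model; the real tracker scores candidate
        # alignments using 3D shape, color, and motion cues.
        return np.exp(-(dx - 0.3)**2 - (dy + 0.1)**2)

    def anytime_track(budget_s, half_width=2.0, n=5):
        """Coarse-to-fine grid search over candidate translations, refined
        until the time budget expires; returns the current best estimate."""
        center = np.array([0.0, 0.0])
        deadline = time.time() + budget_s
        while True:
            xs = np.linspace(center[0] - half_width, center[0] + half_width, n)
            ys = np.linspace(center[1] - half_width, center[1] + half_width, n)
            scores = np.array([[likelihood(x, y) for x in xs] for y in ys])
            iy, ix = np.unravel_index(np.argmax(scores), scores.shape)
            center = np.array([xs[ix], ys[iy]])   # best cell becomes new center
            half_width /= 2.0                     # refine the grid around it
            if time.time() > deadline:
                return center                     # anytime: usable answer at any budget

    print(anytime_track(budget_s=0.0007))  # ~0.7 ms budget: coarse estimate
    print(anytime_track(budget_s=0.05))    # larger budget: finer estimate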


Talk 2. Underwater Grasp Evaluation with a Compliant Robotic Hand

Hannah Stuart, Cutkosky Lab

The Red Sea Exploratorium dexterous hand is designed for unstructured underwater environments, and will be remotely tele-operated by marine researchers. Its compliant underactuated design with suction flow on the fingertips can robustly perform a variety of tasks, but also demonstrates particular failure modes. These modes can be characterized to determine what user feedback is essential for success, and what reflexes/behaviors could be automated. Using tactile sensors specific to this application, we explore what key grasp information can be extracted and how it might be used to predict and prevent grasp failures.


Talk 3. Skin Deformation Display for Enhanced Driver Situational Awareness

Chris Ploch, Cutkosky Lab

Haptic sensations transmitted through the steering wheel are a vital element of driving, relaying information about road conditions, side slip, and traction. As cars take on more autonomy in driving, preserving the driver’s awareness of the car’s behavior and the driving situation is becoming increasingly important. Skin deformation represents a promising modality of haptic feedback that could be implemented in vehicles to improve driver situational awareness. In particular, this type of feedback could be useful in providing cues related to lane keeping, collision avoidance, and deteriorating road friction conditions. In this presentation, I describe the development of a haptic feedback display consisting of compact tangential display devices embedded in the rim of the steering wheel, present preliminary device characterization results, and propose future experiments to test whether the feedback improves collision avoidance.


Talk 4. Perching and Vertical Climbing: Design of a Multimodal Robot

Matt Estrada, Cutkosky Lab

We present a robot capable of both (1) dynamically perching onto smooth, flat surfaces from a ballistic trajectory and (2) successfully transitioning to a climbing gait. Merging these two modes of movement is achieved via a mechanism utilizing an opposed grip with directional adhesives. Critical design considerations include (a) climbing mechanism weight constraints, (b) suitable body geometry for climbing and (c) effects of impact dynamics. The robot uses a symmetric linkage and cam mechanism to load and detach the feet while climbing. Key dimensions, including the distances between each of the feet and the tail, are chosen based on the ratio of required preload force to detachment force for the adhesive mechanism.


Talk 5. O2, the underwater diver

Xiyang Yeh, Khatib Lab

Our group is presently building O2, a bimanual underwater robotic ‘avatar diver’, as part of the Red Sea Exploratorium project to explore and monitor marine environments. The robotic platform will visually explore and image marine life and coral reefs in the Red Sea, collect samples, perform fine manipulation tasks, and conduct various physical measurements. In this talk, I will present the challenges of designing dexterous underwater robots. I will outline our design methodology and discuss the robot's hardware-software architecture in detail.


Seminar #4, Nov 21st 2014.


Talk 1. Hopping Robots in Space: Hybrid Locomotion for the Exploration of Small Solar System Bodies

Benjamin Hockman, Pavone Lab

The future in-situ exploration of small Solar System bodies requires rovers capable of controlled surface mobility. In the microgravity environment of small bodies such as asteroids, comets or small moons, conventional wheeled rovers are ineffective due to the low frictional forces. Through a joint collaboration with the Jet Propulsion Laboratory, we have been studying microgravity mobility approaches using hopping/tumbling platforms. We present a minimalistic, internally-actuated spacecraft/rover hybrid that uses flywheels and brakes to impart mobility. This concept has the potential to lead to small, quasi-expendable, yet maneuverable rovers that are robust as they have no external moving parts. We characterize the dynamics of such platforms (including fundamental performance limitations) and discuss the control and planning algorithms. Simulations and preliminary experiments demonstrate the ability to perform long-distance hops as well as short, precise traverses through controlled “tumbles”. To test our prototype in an asteroid-like environment, we developed a novel 6-DoF gravity offload test bed that emulates rover dynamics in microgravity.


Talk 2. A User Interface for Teleoperation

Brian Soe, Khatib Lab

Autonomous robot control in unstructured and dynamically complex environments is challenging. In such environments, it can be beneficial to rely on human skill and perception for high-level commands, while the robot generates the low-level controls. In our lab, we are developing a framework to enable one or several users to control multiple remotely operated robots. The framework supports periodic updates of the robot state and high-frequency haptic interaction. This framework will be used in our O2 Underwater Robot and the Boeing wing crawler projects.


Talk 3. Context Dependent Mapping of Natural Language to Action Plan

Dipendra Misra, Saxena Lab

In this presentation, I will talk about my recent work on mapping natural language commands to robotic action plans that are sensitive to environmental context. I will discuss why even mapping a simple command such as "bring me warm tea" can be surprisingly difficult. The discussion will emphasize how the action plan, the environment representation, and the language are entangled.


Talk 4. A Skin-Stretch Haptic Device for Improved Control of Brain-Computer Interfaces

Sean Sketch and Darrel Deo, Okamura Lab

Robotic systems, such as prosthetics and exoskeletons, offer people suffering from motor impairments a chance to regain lost physical functionality. However, the neural control that individuals are able to exert over these robots is currently limited. This is due both to a lack of control authority over many degrees of freedom and to insufficient sensory feedback through the human-robot interface. We propose that haptic feedback is paramount for accurate and efficient control of robots via brain-computer interfaces (BCIs). Skin stretch at the fingertip is a novel form of haptic feedback for improving BCI-based robot control. In this presentation, we describe the design of a BCI-driven skin-stretch device, assess several control paradigms for this device, and demonstrate in a human user study that skin-stretch haptic feedback has the potential to improve the control of a computer cursor via an inexpensive, commercial electroencephalography (EEG)-based BCI.


Talk 5. Resonance: A New Perspective on Brain Dynamics

Kaveh Laksari, Camarillo Lab

Helmets have proven effective in mitigating moderate to severe head injuries by attenuating translational head accelerations. However, reduced acceleration levels do not seem to prevent mild traumatic brain injury (mTBI). Despite the ubiquity of helmets in contact sports, mTBI is highly prevalent. Even athletes without diagnosed concussions show pathological brain changes in fMRI and DTI studies. To study mTBI injury mechanisms, we need to gain a better understanding of the governing dynamics of skull-brain interactions. Here we propose a possible cause of mild repetitive injury, and hypothesize that brain motion is governed by slow dynamics that can be driven dangerously close to resonance in typical sports head impacts. We developed the first dynamic model of the brain based on in vivo MRI data and showed that brain motion can be modeled as a rigid body with constrained kinematics. Based on this model, we showed that skull-brain dynamics are dominated by an under-damped second-order system with a low-frequency resonance at around 15 Hz. To test applicability to typical on-field exposures, we verified that model predictions agree well with cadaver experiments at median impact levels in both the time and frequency domains. From our previous field studies, we found that sports head impacts have a predominant driving frequency of 17 Hz in the sagittal plane. Combined with the model findings, this implies that the helmeted head may be driven close to a mechanical resonance of the brain, leading to amplified brain-skull relative motion. This suggests that helmets could be exacerbating the effects of sports head impacts, and necessitates a re-evaluation of preventative equipment.
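
For reference, the generic under-damped second-order model implied above can be written as follows (a textbook formulation, not the authors' fitted parameters):

    m \ddot{x} + c \dot{x} + k x = F(t), \qquad
    \omega_n = \sqrt{k/m}, \qquad
    \zeta = \frac{c}{2\sqrt{km}}, \qquad
    \omega_d = \omega_n \sqrt{1 - \zeta^2}

For an under-damped system with \zeta < 1/\sqrt{2}, the displacement response peaks near \omega_r = \omega_n \sqrt{1 - 2\zeta^2}; a brain resonance near 15 Hz driven by impacts with a dominant frequency near 17 Hz would therefore sit close to this peak, amplifying skull-brain relative motion.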


Seminar #3, Nov 7th 2014.


Talk 1. Direct volume rendering for deformable models

Brian Jo, Salisbury Lab

We present a system for interactive direct volume rendering of voxel grid data under deformations defined on an underlying tetrahedral mesh. The need for such a system often arises in medical simulation, where the voxel grid may contain radiodensities from a CT scan, and a finite element model deforms an underlying tetrahedral mesh. The fundamental idea of our algorithm is to first map rays in the deformed space of the object to the undeformed space before casting them through the voxel grid. This preliminary step allows us to avoid having to either resample the voxel data at each time step or update any kind of underlying acceleration structure. We also introduce a spatial acceleration structure tailored for tetrahedral meshes that uses a combination of octrees and variance-based binary space partitions (BSPs), as well as a texture encoding scheme to upload this structure to a shader.
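
A minimal sketch of the core mapping step described above: a sample point along a ray in the deformed space is expressed in barycentric coordinates of its containing (deformed) tetrahedron and then re-evaluated at the undeformed vertex positions. Tetrahedron lookup, the acceleration structure, and all GPU details are omitted, and the names here are illustrative:

    import numpy as np

    def barycentric_coords(p, verts):
        """Barycentric coordinates of point p w.r.t. a tetrahedron's 4 vertices."""
        T = np.column_stack([verts[1] - verts[0],
                             verts[2] - verts[0],
                             verts[3] - verts[0]])
        b = np.linalg.solve(T, p - verts[0])     # weights of vertices 1..3
        return np.array([1.0 - b.sum(), *b])     # weight of vertex 0 first

    def map_to_undeformed(p_deformed, deformed_verts, undeformed_verts):
        """Map a sample point from deformed space back into the undeformed
        voxel grid, assuming its containing tetrahedron is already known."""
        w = barycentric_coords(p_deformed, deformed_verts)
        return w @ undeformed_verts              # same weights, rest-pose vertices

    # Toy example: one tetrahedron rigidly translated by (1, 0, 0).
    rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    deformed = rest + np.array([1.0, 0.0, 0.0])
    print(map_to_undeformed(np.array([1.2, 0.2, 0.2]), deformed, rest))  # -> [0.2 0.2 0.2]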


Talk 2. Magnetically steered microcatheters

Lizmarie Comenencia, Okamura Lab

Aneurysms, stroke, and other diseases in the brain can be reached and treated using catheters that are guided through the vascular system. However, current brain microcatheters have limited reach because they must be pushed and steered from outside the body, and are too large to navigate the narrow and tortuous geometries in deep brain vasculature. Magnetically steered brain microcatheters can enable enhanced control of the catheter tip movement. We propose to use a single permanent magnet to apply both force and torque in order to steer microcatheters in deep brain vasculature. As a scaled proof of concept, we attach a cylindrical magnet to a microcatheter tip and apply force and torque using an external actuator magnet controlled by a three-degree-of-freedom planar robot. Analysis of the resulting deflection and curvature of the microcatheter tip demonstrates increased deflection with coupled force and torque control. Applications of this work include delivery of treatments for brain disease and deep brain stimulation via a minimally invasive approach.
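
The underlying physics is the standard torque and force on a magnetic dipole in an external field (general relations, not the specific actuation model developed in this work):

    \boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}, \qquad
    \mathbf{F} = \nabla\,(\mathbf{m} \cdot \mathbf{B})

where \mathbf{m} is the dipole moment of the tip magnet and \mathbf{B} is the field of the external actuator magnet at the tip; steering therefore couples a bending torque with an attractive or repulsive force on the catheter tip, which is why the deflection grows when force and torque are controlled together.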


Talk 3. Learning user preferences over robot trajectories

Ashesh Jain, Saxena Lab

We consider the problem of learning user preferences over robot trajectories for environments rich in objects and humans. This is challenging because the criterion defining a good trajectory varies with users, tasks and interactions in the environment. We represent trajectory preferences using a cost function that the robot learns and uses to generate good trajectories in new environments. We design a crowd-sourcing system, PlanIt, where non-expert users label segments of the robot’s trajectories. PlanIt allows us to collect a large amount of user feedback, and using the weak, noisy labels from PlanIt we learn the parameters of our model. We test our approach on 122 different environments for robotic navigation and manipulation tasks. Our extensive experiments show that the learned cost function generates preferred trajectories in human environments. Our crowdsourcing system PlanIt is publicly available for visualizing the learned cost function as heatmaps and for providing preference feedback: http://planit.cs.cornell.edu


Talk 4. Contact-state based filtering for haptic studies

Keegan Go, Khatib Lab

Haptic studies often use spring forces or physics engine forces for haptic rendering, but both methods have drawbacks. Spring forces soften the feeling of contact, making it difficult for users to perceive contact when moving slowly. Physics-engine-computed forces, on the other hand, tend to feel more realistic at contact, but are highly unstable in certain configurations. Here we present a contact-state-based method for combining the two that aims to solve both problems. In this method, objects are decomposed into cuboids, and the current contact state of an object is described as the subset of vertices that are in contact. Spring forces are rendered to the user, and the physics-engine-computed forces are added as transients only when the contact state changes.
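
A minimal sketch of the blending rule described above, assuming per-object contact-vertex sets are available from the simulation; the decay constant, class name, and interface are illustrative:

    import numpy as np

    class ContactStateFilter:
        """Render spring forces continuously; when the set of contacting
        vertices changes, inject the physics-engine force as a decaying
        transient so the user feels a crisp contact event."""

        def __init__(self, decay=0.9):
            self.prev_contacts = frozenset()
            self.transient = np.zeros(3)
            self.decay = decay

        def update(self, contact_vertices, spring_force, physics_force):
            contacts = frozenset(contact_vertices)
            if contacts != self.prev_contacts:            # contact state changed
                self.transient = np.asarray(physics_force, dtype=float)
                self.prev_contacts = contacts
            rendered = np.asarray(spring_force, dtype=float) + self.transient
            self.transient = self.transient * self.decay  # transient fades out
            return rendered

    f = ContactStateFilter()
    print(f.update({0, 3}, spring_force=[0, 0, 1.0], physics_force=[0, 0, 5.0]))
    print(f.update({0, 3}, spring_force=[0, 0, 1.0], physics_force=[0, 0, 5.0]))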


Talk 5. µTugs: controllable adhesives enabling micro robots to deliver macro loads

David Christensen, Cutkosky Lab

The controllable adhesives used by insects to both carry large loads and move quickly despite their small scale inspire the μTug robot concept. These are small robots that can both move quickly and use controllable adhesion to apply interaction forces many times their body weight. The adhesives enable these autonomous robots to accomplish this feat on a variety of common surfaces without complex infrastructure. The benefits, requirements, and theoretical efficiency of the adhesive in this application are discussed, as well as the practical choices of actuator and working-surface material. A robot actuated by piezoelectric bimorphs demonstrates fast walking with a no-load rate of 50 Hz and a loaded rate of 10 Hz. A 12 g shape memory alloy (SMA) actuated robot demonstrates the ability to load more of the adhesive, enabling it to tow 6.5 kg on glass (or 500 times its body weight). Continuous-rotation actuators (electromagnetic in this case), demonstrated on another 12 g robot, give it nearly unlimited work cycles through gearing. This leads to advantages in towing capacity (up to 22 kg, or over 1800 times its body weight), step size, and efficiency. This work shows that such an adhesive system enables small robots to provide truly human-scale interaction forces, despite their size and mass. This will enable future microrobots not only to sense the state of the human environment in which they operate, but also to apply forces large enough to modify it in response.




Seminar #2, Oct 24th 2014.


Talk 1. Grasping without Squeezing: Shear Adhesion Gripper with Fibrillar Thin Film

Elliot Hawkes, Cutkosky Lab

Nearly all robotic grippers have one trait in common: they grasp objects with normal forces, either directly, or indirectly through friction. This method of grasping is effective for objects small enough for a given gripper to partially encompass. However, to grasp larger objects, significant grip forces and a high coefficient of friction are required. We present a new grasping method for convex objects, using almost exclusively shear forces. We achieve shear grasping with a gripper that utilizes thin-film gecko-inspired fibrillar adhesives that conform to the curvature of the object. We present a verified model for grasping a range of curvatures, and results that demonstrate the thin-film fibrillar adhesives' increased contact area on textured surfaces when loaded in shear. Finally, the gripper is implemented on a robotic arm and grasps a variety of convex objects (at rest and ballistic).


Talk 2. Haptic-fMRI Design Paradigms

Hari Ganti, Khatib Lab

Haptic-fMRI is a relatively new introduction to robotics, and it merges the goals of traditional robotics, such as high stiffness and low inertia, with haptic-fMRI-specific goals, like transparency, fMRI compatibility, and ease of setup. Designing around these constraints is possible, and has already yielded significant neuroscience data on human motor control, demonstrating the utility of the technology. In this talk, I will show how rapid prototyping methods, along with new materials, can create a transparent, fMRI-compatible robot with high stiffness and minimal inertia.


Talk 3. Anticipatory Planning for Human-Robot Teams

Hema Koppula, Saxena Lab

When robots work alongside humans to perform collaborative tasks, they need to be able to anticipate the humans’ future actions and plan appropriate responses. The tasks we consider are performed in contextually rich environments containing objects, and there is a large variation in the way humans perform these tasks. We use a graphical model to represent the state space, where we model the humans through their low-level kinematics as well as their high-level intent, and model their interactions with the objects through physically-grounded object affordances. This allows our model to maintain a belief over possible future human actions, and we model the human’s and robot’s behavior through an MDP in this rich state space.


Talk 4. Position/force control for unmodelled, soft robots

Michael Yip, Camarillo and Salisbury Labs

This talk will give a brief update on our work on controlling a flexible, continuum robot without using any kinematic or mechanics model. I will show how we can do both position and force control while the robot is navigating constraints and obstacles in an unsensed environment, all by learning the Jacobian matrix online and adapting the robot control as it encounters new disturbances.
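
One common way to learn a manipulator Jacobian online, shown here as an illustrative sketch rather than the specific estimator used in this work, is a Broyden-style rank-1 update from observed actuation and tip-motion increments:

    import numpy as np

    def broyden_update(J, dq, dx, eps=1e-9):
        """Rank-1 update so the estimated Jacobian maps the latest actuator
        increment dq to the observed tip displacement dx."""
        dq = dq.reshape(-1, 1)
        dx = dx.reshape(-1, 1)
        return J + (dx - J @ dq) @ dq.T / (dq.T @ dq + eps)

    # Toy loop: the estimate drifts toward the (unknown) true Jacobian as the
    # robot moves, without any kinematic or mechanics model.
    J_true = np.array([[1.0, 0.2], [0.1, 0.8]])
    J_est = np.eye(2)
    for _ in range(50):
        dq = np.random.randn(2) * 0.01        # small actuation increment
        dx = J_true @ dq                      # "measured" tip motion
        J_est = broyden_update(J_est, dq, dx)
    print(np.round(J_est, 2))

The resulting estimate can then be fed to a standard resolved-rate or force-control loop, with the update continually compensating for unmodeled disturbances.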


Talk 5. Scaling Controllable Adhesives to Grapple Floating Objects in Space

Hao Jiang, Cutkosky Lab

As the number of rocket bodies and other debris in Earth’s orbit increases, the need to capture and remove this space junk becomes essential to protect new satellites. A low-cost solution may include gecko-inspired directional adhesives, which require almost no compressive preload to generate adhesion and are therefore suitable for grasping surfaces in space, where objects are free floating. Current individual adhesive units with a pair of opposed pads achieve a limit of 13 N normal to the surface. Instead of using a single large unit to generate high levels of adhesion, using multiple small gripper units is desirable to prevent single-point failures and to conform to higher curvatures. For this strategy to succeed, it is essential to distribute the overall force evenly, to minimize the overall preload normal to the surface, and to prevent local failures from propagating over the array. We present two load-sharing mechanisms. The first uses nearly-constant-force springs in parallel. The second uses a tendon and pulleys in series. Both allow a 4-unit gripper to maintain the same adhesive stress as a single unit. A ratio of normal adhesive load to compressive preload of 100:1 is demonstrated. Zero-gravity experiments and air-bearing floor experiments demonstrate the gripper’s functionality in a simulated space environment.




Seminar #1, Oct 3rd 2014.


Talk 1. User Experiment with a Force-Sensing Needle and 1-DOF Haptic Feedback System

Jung Hwa, Cutkosky Lab

This presentation gives an overview of the calibration and user test results for a 3-D tip-force-sensing needle with haptic feedback. To overcome the lack of haptic sensation in minimally invasive interventions, we designed a smart needle that senses force at the tip with embedded fiber Bragg grating (FBG) sensors. The needle is interrogated at 2 kHz, and dynamic forces are displayed remotely with a voice coil actuator. The needle is tested in a simulated ideal single-axis master/slave system, with the voice coil haptic display at the master end and the needle at the slave end. Tissue phantoms with embedded membranes were used to determine the ability of the tip-force sensors to provide real-time haptic feedback, as compared to external sensors at the needle base, during needle insertion via the master/slave system. Subjects were able to determine the position of the embedded membranes with significantly better accuracy using FBG tip feedback than with base feedback using a commercial force/torque sensor (p = 0.045) or with no added haptic feedback (p = 0.0024).


Talk 2. Acceleration filters for Haptic fMRI

Jack Zhu, Khatib Lab

Control of robotic systems relies on adequate estimates of joint position, velocity, and acceleration. Optical encoders are often used to directly measure joint positions, but differentiating these signals at high control rates greatly increases noise in the velocity and acceleration signals. We demonstrate an implementation of the Savitzky-Golay filter that provides low-noise velocity and acceleration estimates in real-time, and a control framework that allows asynchronous reading of joint- and task-space parameters.
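
A minimal example of the approach described above, using SciPy's Savitzky-Golay filter to estimate velocity and acceleration from sampled joint positions; the window length, polynomial order, and 1 kHz rate are illustrative choices, not the values used on the Haptic fMRI hardware:

    import numpy as np
    from scipy.signal import savgol_filter

    dt = 0.001                                    # 1 kHz sampling (illustrative)
    t = np.arange(0.0, 1.0, dt)
    q = np.sin(2 * np.pi * t) + 1e-3 * np.random.randn(t.size)  # noisy encoder signal

    # Position, velocity, and acceleration estimates from one filter family.
    q_hat   = savgol_filter(q, window_length=31, polyorder=3)
    qd_hat  = savgol_filter(q, window_length=31, polyorder=3, deriv=1, delta=dt)
    qdd_hat = savgol_filter(q, window_length=31, polyorder=3, deriv=2, delta=dt)

    # For real-time use, the same polynomial fit is applied to a sliding window
    # of the most recent samples, at the cost of a small group delay.
    print(qd_hat[500], 2 * np.pi * np.cos(2 * np.pi * t[500]))  # should be close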


Talk 3. Integrating motion generation and variable impedance control

Mohammad Khansari, Khatib Lab

In this talk, I will discuss the new paradigm of integrating motion generation and variable impedance control into a unified control policy [Khansari et al., RSS 2014]. This idea is motivated by the fact that motion generation and variable impedance control are jointly essential for the successful execution of many robotic tasks. Although these two skills are closely related to each other, they are currently used as two disjoint units, i.e., a motion generator that generates a trajectory and an impedance controller that tracks the generated trajectory. In my talk, I will highlight the importance of having a unified control policy that is capable of simultaneously performing both skills, and showcase the features of this new controller on a number of tasks tested on the KUKA LWR robot.
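
For context, a generic task-space variable impedance law of the kind being unified here can be written as (a textbook form, not the specific formulation of [Khansari et al., RSS 2014]):

    \mathbf{F} = \mathbf{K}(t)\,(\mathbf{x}_d - \mathbf{x}) + \mathbf{D}(t)\,(\dot{\mathbf{x}}_d - \dot{\mathbf{x}}), \qquad
    \boldsymbol{\tau} = \mathbf{J}^{\top} \mathbf{F}

where a separate motion generator supplies the desired trajectory \mathbf{x}_d(t) and the stiffness \mathbf{K}(t) and damping \mathbf{D}(t) vary over the task; the unified policy discussed in the talk instead generates the motion and the impedance behavior jointly rather than as two disjoint units.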


Talk 4. Effects of Master-Slave Tool Misalignment in a Teleoperated Surgical Robot

Clifford Bargar, Okamura Lab

In a teleoperated system, misalignment between the master and slave manipulators can result from clutching, errors in the kinematic model, and/or sensor errors. This study examines the effect of type and magnitude of misalignment on the performance of the teleoperator. We first characterized the magnitude and direction of orientation misalignment created when clutching and unclutching during use of research versions of two surgical robots: the Raven II and the da Vinci Research Kit. We then purposely generated typical misalignments in order to measure the impact of such misalignment on user performance of a modified peg transfer task with the Raven II. For both orientation and velocity misalignment, users were able to compensate for misalignment angles up to approximately 20 degrees. These results can be used to guide the design and refinement of teleoperated systems for a variety of applications.


Talk 5. Experimental Evaluation of a Passively Morphing Ornithopter

Aimy Wissa, Cutkosky Lab

Ornithopters, or flapping-wing Unmanned Aerial Vehicles (UAVs), have potential applications in both civil and military sectors. Amongst all categories of UAVs, ornithopters have a unique ability to fly in low-Reynolds-number flight regimes while retaining the agility and maneuverability of rotary-wing aircraft. In nature, birds achieve such performance by exploiting various wing kinematics known as gaits. The objective of this work was to improve the steady level flight performance of an ornithopter by implementing the Continuous Vortex Gait using a novel passive compliant mechanism. A compliant mechanism, called a compliant spine, was fabricated and integrated into the ornithopter's wing leading-edge spar. Each compliant spine was designed to be flexible in bending during the wing upstroke and stiff in bending during the wing downstroke. The ornithopter was tested in free and constrained flight with various compliant spine designs inserted in its wings. Results from all of the tests demonstrated the feasibility and efficacy of passive wing morphing with a compliant mechanism for improving the steady level flight performance of the ornithopter.





