In the past decade, an abundance of data has become available: online data on the Web, scientific data such as the transcript of the human genome, and sensor data acquired by robots or by the buildings we inhabit. Turning data into information pertaining to problems that people care about is the central mission of AI research at Stanford.
Members of the Stanford AI Lab have contributed to fields as diverse as bio-informatics, cognition, computational geometry, computer vision, decision theory, distributed systems, game theory, image processing, information retrieval, knowledge systems, logic, machine learning, multi-agent systems, natural language, neural networks, planning, probabilistic inference, sensor networks, and robotics.
The Salisbury Lab conducts research in robotics, medical robotics, haptic devices, and haptic rendering algorithms. One project is developing a virtual environment that enables surgeons to plan and practice surgical procedures by interacting visually and haptically with patient-specific data derived from CAT and MRI scans. The lab developed the first version of the personal robot (PR-1), which was eventually licensed to Willow Garage and was the genesis of the PR-2 personal robot.
Deep learning is a rapidly growing area of machine learning that is becoming widely adopted in academia and industry. Although machine learning is a very successful technology, applying it today still often requires substantial effort hand-designing features to feed to the algorithm. This is true for applications in vision, audio, and text/NLP.
We are developing nanoscale electronic devices and circuits to emulate the functions of the synapses and neurons of the brain. The goal is to use nanoscale electronic devices to perform information processing with algorithms and methods inspired by how the brain works. Currently, we are using phase-change memory and metal-oxide RRAM to perform gray-scale analog programming of resistance values.
Energy-efficient computing platforms are sorely needed to control autonomous robots and to decode neural signals in brain-machine interfaces. Inspired by the brain’s energy efficiency, we are exploring a hybrid analog-digital approach that uses subthreshold analog circuits to emulate graded dendritic activity and asynchronous digital circuits to emulate all-or-none axonal activity.
The Red Sea Robotics Research Exploratorium was created in April 2012 through a generous research award from the King Abdullah University of Science and Technology (KAUST). As a part of the KAUST Global Collaborative Research Program, Stanford University is part of a team of universities working to build a major science and technology university along a marshy peninsula on Saudi Arabia’s western coast.
This video production documents the life and career of Ed Feigenbaum, "Father of Expert Systems," through archival photographs, a Computer History Museum oral history, and the recollections of his collaborators and students. These recollections were videotaped at the Feigenbaum 70th Birthday Symposium, held on March 25–26, 2006, and co-sponsored by the Stanford Computer Forum.
Professor of Computer Science and of Electrical Engineering
robotics, machine learning, probabilistic methods
Professor of Computer Science
logic, multi-agent systems, game theory, electronic commerce
Professor (Research) of Computer Science and of Surgery
robotics, haptics, computer-aided surgery