Papers and conference talks




Group induction
Alex Teichman and Sebastian Thrun.
Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.

Tracking-based semi-supervised learning, as originally presented at RSS 2011, was an offline algorithm. This is fine in some contexts, but ideally a user could provide new hand-labeled training examples online, as the system runs, without retraining from scratch. Qualitatively, this would mean being able to point out, from the back seat of your autonomous car, a few examples of, say, an elliptical bike or sk8poler, and the algorithm would start learning to recognize them on the fly without you having to do anything else. Group induction is a mathematical framework for this kind of learning.

pdf, bib



Unsupervised extrinsic calibration of depth sensors in dynamic scenes
Stephen Miller, Alex Teichman, and Sebastian Thrun.
Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.

This paper shows how to find the relative pose of two stationary depth sensors using only motion cues. Like the CLAMS paper, no calibration target, specialized hardware, or precise measurement is necessary.

bib



Unsupervised intrinsic calibration of depth sensors via SLAM
Alex Teichman, Stephen Miller, and Sebastian Thrun.
Robotics: Science and Systems (RSS), 2013.

The CLAMS paper. RGBD sensors such as the Xtion Pro Live or Kinect typically have significant depth distortion beyond two or three meters. This paper introduces a method of calibrating away the distortion just by recording data from a handheld sensor. No calibration target, specialized hardware, or precise measurement is necessary. The method is more generally applicable, too, as it requires only that the intrinsic parameter in question be myopic: error in the parameter setting generates greater measurement error as range increases. This includes, for example, focal length and center pixel of a depth camera.
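As a toy illustration of the "myopic" property described above (this sketch is not from the paper, and the numbers are made up): if a miscalibrated parameter acts like a small multiplicative bias on reported depth, loosely analogous to a focal length error, then the absolute measurement error grows linearly with range.

```python
# Toy model of a "myopic" intrinsic parameter: a fixed multiplicative
# depth bias produces measurement error that grows with range.
# The 2% scale error is an arbitrary illustrative value.

def measured_depth(true_depth_m, scale_error=1.02):
    """Depth reported by a hypothetical miscalibrated sensor
    under a simple multiplicative error model."""
    return scale_error * true_depth_m

for z in [1.0, 3.0, 10.0]:
    err = abs(measured_depth(z) - z)
    print(f"range {z:4.1f} m -> error {err:.2f} m")
```

Under this model the error at 10 m is ten times the error at 1 m, which is exactly the kind of range-dependent signature the calibration method exploits.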

pdf, code, RSS link, bib



Learning to segment and track in RGBD
Alex Teichman, Jake Lussier, and Sebastian Thrun.
IEEE Transactions on Automation Science and Engineering, 2013.

Extended journal version of previous work with the same title. This paper includes algorithmic optimizations that enable real-time segmentation and tracking of arbitrary objects, as well as an application example in which we use the algorithm to train an object detector without the usual hassles of a turntable or crowdsourced annotation.

pdf, IEEE link, bib



Online, semi-supervised learning for long-term interaction with object recognition systems
Alex Teichman and Sebastian Thrun.
Invited talk at RSS Workshop on Long-term Operation of Autonomous Robotic Systems in Changing Environments, 2012.

This talk is a preliminary version of what eventually became group induction; at the time, the core mathematical framework of group induction did not yet exist.

presentation



Tracking-based semi-supervised learning
Alex Teichman and Sebastian Thrun.
International Journal of Robotics Research (IJRR), 2012.

Extended journal version of previous work with the same title. More experiments, more intuition as to how the method works. PDF to come soon, or feel free to email me.

SAGE link, bib



Learning to segment and track in RGBD
Alex Teichman and Sebastian Thrun.
Workshop on the Algorithmic Foundations of Robotics (WAFR), 2012.

Tracking-based semi-supervised learning requires some method of model-free segmentation and tracking. This paper describes one that works in a broad range of environments where segmentation is non-trivial.

pdf, bib



Practical object recognition in autonomous driving and beyond
Alex Teichman and Sebastian Thrun.
IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), 2011.

This paper gives an overview of recent object recognition research in our lab and of what is needed to make it a fully functional, high-accuracy object recognition system applicable beyond perception for autonomous driving.

pdf, bib



Tracking-based semi-supervised learning
Alex Teichman and Sebastian Thrun.
Robotics: Science and Systems (RSS), 2011.

Building on previous work, we introduce a simple semi-supervised learning method that uses tracking information to find new, useful training examples automatically. This method achieves nearly the same accuracy as before, but with about two orders of magnitude less human labeling effort.

pdf, bib, project, RSS proceedings



Towards 3D object recognition via classification of arbitrary object tracks
Alex Teichman, Jesse Levinson, and Sebastian Thrun.
International Conference on Robotics and Automation (ICRA), 2011.

Breaking down the object recognition problem into segmentation, tracking, and track classification components, we present an accurate, real-time method of classifying tracked objects as car, pedestrian, bicyclist, or 'other'.

pdf, bib, dataset



Towards fully autonomous driving: systems and algorithms
Jesse Levinson, Jake Askeland, Jan Becker, Jennifer Dolson, David Held, Soeren Kammel, J. Zico Kolter, Dirk Langer, Oliver Pink, Vaughan Pratt, Michael Sokolsky, Ganymed Stanek, David Stavens, Alex Teichman, Moritz Werling, and Sebastian Thrun.
Intelligent Vehicles Symposium, 2011.

This paper is a broad summary of recent work on Junior, Stanford's autonomous vehicle. Topics covered include object recognition, sensor calibration, planning, control, etc.

pdf, bib



Exponential family sparse coding with application to self-taught learning
Honglak Lee, Rajat Raina, Alex Teichman, and Andrew Y. Ng.
International Joint Conference on Artificial Intelligence (IJCAI), 2009.

pdf, bib


Automatic configuration recognition methods in modular robots
Michael Park, Sachin Chitta, Alex Teichman, and Mark Yim.
International Journal of Robotics Research (IJRR), 2008.

pdf, bib