Developing the next generation of 3D game interface (multiple teams possible)

The goal of this project is to employ a recent-generation time-of-flight depth camera as an input device for a game. As opposed to normal video cameras, a depth camera delivers 3D information rather than intensity information in its images. This technology has only recently reached a high level of maturity, robustness, and data quality; the latest models deliver high-quality depth data at up to 60 frames per second. Our lab has several of these cameras available to experiment with. One of them was lent to us by the manufacturer explicitly so that bright students could work on great game demos.
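To make the difference to a normal camera concrete: each pixel of a depth image can be back-projected into a 3D point using a standard pinhole camera model. The sketch below illustrates this; the intrinsic parameters (fx, fy, cx, cy) and the tiny example image are illustrative placeholders, not calibration values for the actual Canesta or Swissranger sensors.

```python
# Back-project a depth image into a camera-space 3D point cloud using
# the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
# Intrinsics here are placeholders, not real sensor calibration data.

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of depth values in meters (row-major).
    Returns a list of (x, y, z) points; zero depth marks an
    invalid measurement and is skipped."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # invalid / missing measurement
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 example "depth image" in meters
depth = [[1.0, 2.0],
         [0.0, 1.5]]
pts = depth_to_points(depth, fx=100.0, fy=100.0, cx=0.5, cy=0.5)
```

An intensity camera gives only the (u, v) grid; the per-pixel z is what lets a game reason about the player's body in actual 3D space.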

In this project, you will explore the suitability of the sensor as a real-time input device for games. Given the rich data the camera delivers, it is quite clear that it will eventually revolutionize the way games are played. Recent interface technologies like the Wii provide only a fraction of the capabilities that a sensor like ours offers. You will have the chance to play a significant role in that development while the technology is still fresh out of the lab.

The project has several components:

In fact, we can easily split this project into many sub-projects, so each team of two or three people could work on a separate game idea. However, I will most likely be able to supervise at most two teams, so please decide soon if you want to join this very fun project.

Existing Infrastructure

For this project we provide one Canesta sensor and one or two Swissranger sensors, plus additional equipment that you may want to play around with (tripods, computers, IR-reflective tape). Basic recording software is already available for both cameras.
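For offline experiments, a recorded depth sequence can be replayed frame by frame. The sketch below shows one way to do this; the file layout assumed here (raw little-endian 16-bit frames at a fixed resolution, no header) is purely an assumption for illustration, and the actual format written by our recording tools may differ.

```python
# Minimal sketch of replaying a recorded depth sequence.
# ASSUMED format: consecutive raw frames of WIDTH*HEIGHT
# little-endian uint16 depth values, no header -- check the
# actual recording software's output before relying on this.
import struct

WIDTH, HEIGHT = 4, 3            # placeholder resolution

def read_frames(path):
    """Yield each complete frame as a flat tuple of depth values;
    a trailing partial frame is ignored."""
    frame_bytes = WIDTH * HEIGHT * 2
    with open(path, "rb") as f:
        while True:
            buf = f.read(frame_bytes)
            if len(buf) < frame_bytes:
                break
            yield struct.unpack("<%dH" % (WIDTH * HEIGHT), buf)
```

A game prototype can develop and test its gesture-recognition logic against such recordings before ever running live against the sensor.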

References

Some general pointers on the employed camera technology:

Contact

Christian Theobalt (theobalt at cs dot stanford dot edu)

Please also check out the web page of my group
3D Video and Vision-based Graphics