Monthly Technical Report

June 1999


One undergraduate student and two master's students have joined our TMR group. We also report on their projects below.

We are making steady progress on our map-building collaboration with SRI. We have developed an algorithm and software to convert scans of the environment (registered using SRI software) into lines. The algorithm is novel; it is based on fast and robust randomized techniques. We plan to augment this software with techniques to assemble the lines into a polygonal model. Since our next-best-view technique operates on polygonal inputs, we can then apply it to this model to compute the next best position from which to sense the environment. In this manner, we will be able to automate the process of moving the robot to gather scans (SRI currently moves the robot by hand) and also to minimize the number of positions at which scans must be taken.
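The report does not give the details of the randomized line-extraction technique; the following is a minimal RANSAC-style sketch of the general idea (the function names, tolerances, and iteration counts are illustrative assumptions, not our actual implementation):

```python
import math
import random

def fit_line(p, q):
    """Return the line through p and q as (a, b, c) with a*x + b*y + c = 0,
    normalized so that (a, b) is a unit vector."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    norm = math.hypot(a, b)
    if norm == 0:
        return None  # degenerate: p == q
    c = -(a * x1 + b * y1)
    return (a / norm, b / norm, c / norm)

def point_line_dist(pt, line):
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c)

def ransac_lines(points, iters=200, tol=0.05, min_inliers=10):
    """Greedily extract lines from a set of 2D scan points.

    Repeatedly samples two points at random, keeps the line supported by
    the most scan points, removes those inliers, and repeats."""
    remaining = list(points)
    lines = []
    while len(remaining) >= min_inliers:
        best_line, best_inliers = None, []
        for _ in range(iters):
            p, q = random.sample(remaining, 2)
            line = fit_line(p, q)
            if line is None:
                continue
            inliers = [pt for pt in remaining
                       if point_line_dist(pt, line) < tol]
            if len(inliers) > len(best_inliers):
                best_line, best_inliers = line, inliers
        if len(best_inliers) < min_inliers:
            break  # no remaining line has enough support
        lines.append((best_line, best_inliers))
        inlier_set = set(best_inliers)
        remaining = [pt for pt in remaining if pt not in inlier_set]
    return lines
```

Random sampling makes the extraction robust to outliers (stray range readings), since a candidate line is kept only when many scan points agree with it.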

We are also setting up experiments to take 3D scans of the environment in order to build a 3D model of it. To this end, we plan to mount the second SICK sensor we have purchased on one of our robots so that the sensor casts laser beams in a vertical plane of light. By rotating the robot quickly, we will get an omnidirectional 3D scan of the environment at a given point. In this experiment, we obtain range data from the SICK sensor at a rapid rate through a high-speed interface card, so that a single 3D scan takes only a few seconds.
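The geometry of this setup is straightforward: each range reading, indexed by the robot's pan angle and the beam's elevation within the vertical scan plane, maps to one 3D point. A sketch of that conversion follows (the mounting-height parameter and the data layout are assumptions for illustration):

```python
import math

def scan_to_points(scans, sensor_height=0.4):
    """Convert vertical-plane SICK readings into 3D points.

    scans: list of (pan_angle, readings) pairs, where pan_angle is the
    robot's heading (rad) when that vertical slice was captured, and
    readings is a list of (elevation, range) pairs, with elevation
    measured from the horizontal within the scan plane.
    sensor_height is an assumed mounting offset above the floor."""
    points = []
    for pan, readings in scans:
        for elev, r in readings:
            horiz = r * math.cos(elev)      # distance along the floor plane
            x = horiz * math.cos(pan)
            y = horiz * math.sin(pan)
            z = sensor_height + r * math.sin(elev)
            points.append((x, y, z))
    return points
```

Sweeping the pan angle through a full rotation of the robot yields the omnidirectional point cloud from which the 3D model is built.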

We are continuing to make progress on target tracking. We have implemented in simulation a sophisticated target-tracking planner that can use the model of the environment constructed in the map-building phase to track the target more robustly. Our algorithm takes into account the visibility constraints of the camera and the mobility restrictions of the robot (e.g., the robot cannot move sideways). We designed the algorithm so that it will easily scale to handle multiple robots. We are also implementing a "smart" controller for the robot that will take high-level motion commands from the target-tracking planner and convert them into smooth robot and camera motions. Our goal is to execute the controller on the robot while the planner runs off-board and communicates with it. If the communication breaks down, the controller is designed to generate its own commands by simply trying to stay within some distance of the target. In this manner, the controller can take over and track the target locally (e.g., based on visual servoing).
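The fallback behavior of the controller can be illustrated with a simple proportional control law that closes the range gap to the target while turning to keep the camera pointed at it. The gains, the desired stand-off distance, and the function signature below are illustrative assumptions, not the controller described above:

```python
import math

def fallback_command(robot_pose, target_pos, desired_dist=1.5,
                     k_lin=0.8, k_ang=1.5):
    """Compute (linear_vel, angular_vel) that keeps the robot near the target.

    robot_pose = (x, y, heading); target_pos = (tx, ty), e.g. from local
    visual tracking. Used only when the off-board planner is unreachable."""
    x, y, th = robot_pose
    tx, ty = target_pos
    dx, dy = tx - x, ty - y
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - th
    # wrap the bearing to [-pi, pi] so the robot turns the short way
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    v = k_lin * (dist - desired_dist)   # close (or open) the range gap
    w = k_ang * bearing                 # turn to face the target
    return v, w
```

Keeping the bearing near zero also keeps the target in the camera's field of view, which is what lets the controller track locally until communication with the planner is restored.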

We have started a project to endow all our motion-planning algorithms with the ability to use sonar sensors to detect and avoid unexpected obstacles. We use a technique based on potential fields to ensure that the robot avoids obstacles smoothly. For the target-tracking planner, we are investigating how to integrate obstacle avoidance with the controller. An interesting issue that arises in this context is that while the obstacle-avoidance module is controlling the robot, it must communicate its actions to the controller so that the controller can ensure that the camera remains pointed at the target.
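The potential-field idea can be sketched as follows: the goal exerts an attractive force on the robot, and each sonar-detected obstacle within an influence distance exerts a repulsive force; the robot moves along the resulting net force. The particular formulation and gains below are a standard textbook variant chosen for illustration, not necessarily the one we use:

```python
import math

def potential_field_step(pos, goal, sonar_hits, k_att=1.0, k_rep=0.5, d0=1.0):
    """One step of a potential-field planner.

    pos, goal: (x, y); sonar_hits: obstacle points (x, y) from the sonar
    ring. Returns the desired motion direction as a unit vector."""
    # attractive force pulls toward the goal
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # repulsive forces push away from obstacles inside influence distance d0
    for ox, oy in sonar_hits:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx
            fy += mag * dy
    norm = math.hypot(fx, fy)
    if norm == 0:
        return (0.0, 0.0)  # forces cancel (a local minimum)
    return (fx / norm, fy / norm)
```

Because the repulsive force grows smoothly as the robot nears an obstacle, the resulting motions are smooth; the same force vector can be reported to the tracking controller so it can compensate the camera heading while avoidance is in control.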