Monthly Technical Report

October 1998


We have nearly completed the task of defining realistic models and data structures for

(i) representing the map of a building being explored,

(ii) describing the perception and mobility capabilities of the robots in a parametric form that is usable by our future software modules,

(iii) expressing the motion plans to be computed by these modules, and

(iv) defining a parametric model of a target's behavior.
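As an illustration of the four items above, the data structures might be sketched as follows (the names, fields, and units here are our illustrative assumptions, not the actual definitions):

```python
# Hypothetical sketch of the four data structures; all names and fields
# are illustrative, not the project's actual definitions.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class BuildingMap:
    """(i) 2-D map of the building: polygonal contours of walls/obstacles."""
    contours: List[List[Point]] = field(default_factory=list)

@dataclass
class RobotCapabilities:
    """(ii) Parametric perception/mobility model of one robot."""
    max_speed: float       # m/s
    sensing_range: float   # m
    field_of_view: float   # radians; 6.28 (2*pi) would model omnidirectional vision

@dataclass
class MotionPlan:
    """(iii) A motion plan: a timed sequence of waypoints."""
    waypoints: List[Tuple[float, Point]] = field(default_factory=list)  # (time, position)

@dataclass
class TargetModel:
    """(iv) Parametric target behavior, e.g. worst-case bounded speed."""
    max_speed: float
    adversarial: bool = True  # worst-case (adversarial) vs. probabilistic motion
```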

We are developing new techniques for extending the target-finding planner to use a visibility model that is more general than the idealized omnidirectional one used in our previous planner. We are considering two extensions:

Based on the recommendations of the previous review meeting, we are considering techniques to combine these two algorithms for target tracking (we could then imagine cooperation between ground robots and overflying helicopters to find targets in an urban environment composed of many buildings). *** We would appreciate any feedback on this promising avenue. ***

We have developed some preliminary next-best-view techniques for model construction (in two dimensions) by a team of mobile robots equipped with range-finders. Our goal is to minimize the time needed for constructing a model. In our approach we assume that the robots work independently and only exchange model data, so that the destruction of some robots does not prevent the remaining ones from completing the construction of the model. To reduce time, we try to avoid collecting the same data multiple times by making the available robots navigate through different subsets of the environment. Simultaneously, one major consideration is to plan next-best-view positions for each robot so that the robot can precisely localize itself by partially matching its current sensory data with the partial model built so far. We are currently refining algorithmic techniques based on this approach and planning their implementation.
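The selection rule described above can be sketched as follows; the scoring terms, the overlap threshold, and the travel-cost weight are illustrative assumptions, not our actual algorithm:

```python
# A minimal sketch of next-best-view selection: maximize expected new data,
# subject to enough overlap with the partial model for self-localization.
# All numeric parameters below are illustrative guesses.

def next_best_view(candidates, min_overlap=0.3):
    """Pick the candidate viewpoint maximizing expected new data.

    Each candidate is a dict with:
      'new_area' - expected area of unmodeled environment visible from there
      'overlap'  - fraction of the view already covered by the partial model
      'travel'   - travel cost from the robot's current position
    """
    # A view must overlap the model built so far, or the robot
    # cannot localize itself by matching its sensory data against it.
    feasible = [c for c in candidates if c['overlap'] >= min_overlap]
    if not feasible:
        return None  # no view allows reliable localization
    # Favor large information gain; penalize travel time (weight is a guess).
    return max(feasible, key=lambda c: c['new_area'] - 0.5 * c['travel'])

views = [
    {'new_area': 10.0, 'overlap': 0.1, 'travel': 2.0},  # too little overlap
    {'new_area': 8.0,  'overlap': 0.5, 'travel': 1.0},
    {'new_area': 9.0,  'overlap': 0.4, 'travel': 6.0},  # gain eaten by travel
]
best = next_best_view(views)
```

Having each robot run this rule on its own copy of the shared model data is consistent with the independence assumption above: losing a robot removes its candidates, not the planner.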

We have also defined the various tasks to be performed by each robot we are planning to use in the TMR project. For our experiments, we will have three SuperScouts (which have recently been ordered) and two Nomad 200 robots (which were available prior to the TMR contract). Each SuperScout has a Pentium 233 MHz PC. Let us call the three SuperScout robots R1, R2, and R3. One Nomad 200 has two Pentium 166 MHz CPUs (call it R4) while the other Nomad 200 (R5) has one Pentium 90 MHz CPU. R1, R2, R3, and R4 have one color framegrabber each, while R5 has one black-and-white framegrabber. Each color framegrabber can be used to process black-and-white images from three cameras.

R1 will be used for 2-D map building.

R2, R3, and R4 will be used for target finding and tracking.

R5 will be used for 3-D model building.

R4 and R5 will also be used to play the role of targets.

Each robot will have a camera to photograph landmarks. It will run motion control and landmark detection on its on-board CPU. R1 will include the detected landmarks in the 2-D model of the environment. The other robots will use the detected landmarks to localize themselves relative to the 2-D model.
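The localization step above, once landmarks are matched to the map, amounts to a 2-D rigid alignment. A minimal sketch (a least-squares solution; the function name and interface are illustrative, and the landmark matching itself is assumed already done):

```python
import math

def localize_from_landmarks(observed, mapped):
    """Estimate the robot's pose (x, y, theta) in the map frame.

    observed: matched landmark positions (x, y) in the robot's frame
    mapped:   the same landmarks' positions in the 2-D map frame
    Least-squares 2-D rigid alignment: map_point = R(theta)*observed + (x, y).
    """
    n = len(observed)
    ox = sum(p[0] for p in observed) / n  # centroid of observations
    oy = sum(p[1] for p in observed) / n
    mx = sum(p[0] for p in mapped) / n    # centroid of map landmarks
    my = sum(p[1] for p in mapped) / n
    # Cross-covariance terms of the centered point sets give the rotation.
    sxx = sxy = syx = syy = 0.0
    for (a, b), (c, d) in zip(observed, mapped):
        a, b, c, d = a - ox, b - oy, c - mx, d - my
        sxx += a * c; sxy += a * d; syx += b * c; syy += b * d
    theta = math.atan2(sxy - syx, sxx + syy)
    # Robot position: map centroid minus the rotated observation centroid.
    x = mx - (ox * math.cos(theta) - oy * math.sin(theta))
    y = my - (ox * math.sin(theta) + oy * math.cos(theta))
    return x, y, theta
```

Two matched landmarks already determine the pose; extra landmarks are averaged in, which damps detection noise.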

All robots are equipped with wireless interfaces to the local network available in our laboratory.

R1 will be equipped with a time-of-flight laser range scanner to acquire 2-D contours of the environment. We can process the sensory data and do the motion planning on board since 2-D map building is not as time-critical as target finding/tracking (see below). The next-best-view software mentioned above will be used to decide how R1 should move; it will run on its on-board CPU.

R2, R3, and R4 are each equipped with a second camera to detect/track targets. In the case of R2 and R3, this tracking camera will be mounted on a small pan-tilt platform to orient the camera appropriately. They will also have wireless video transmitters for the tracking camera to broadcast images to the visual tracking software that will reside on a remote computer (for time efficiency reasons). In the case of R4, the tracking camera is mounted on a turret that is part of the robot (no pan-tilt platform is thus needed); the software for detecting/tracking targets will run on the second on-board CPU available on this robot. For each of the three robots, R2, R3, and R4, the motion planning software (deciding which motion strategies should be executed to find or track targets) will run on an off-board workstation connected by a radio link to the robot's processors. Indeed, the running time for planning target-tracking motion strategies is critical, and we do not believe that it will be realistic to run it on board.

Finally, R5 is equipped with a triangulation-based camera-laser range sensor. The laser projects a vertical plane of light. The plane's intersection with the environment is observed by a camera and the 3-D coordinates of the points on this intersection are computed by triangulation. The sensor is mounted on a turret embedded in R5. By rotating the turret, it is possible to build an entire 3-D image of the environment seen from a position of R5.
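The triangulation step reduces, in top view, to the law of sines in the laser-camera-point triangle. A minimal sketch (the baseline length and angles below are illustrative, not R5's actual sensor geometry):

```python
import math

def triangulate(baseline, laser_angle, camera_angle):
    """Range from the laser to the illuminated point (top view).

    baseline:     laser-camera separation along the turret (m)
    laser_angle:  angle of the laser plane from the baseline (rad)
    camera_angle: angle at which the camera sees the laser stripe (rad)
    Law of sines: the triangle's third angle is pi - laser_angle - camera_angle,
    so range / sin(camera_angle) = baseline / sin(laser_angle + camera_angle).
    """
    return baseline * math.sin(camera_angle) / math.sin(laser_angle + camera_angle)

# Symmetric case: both angles 60 degrees gives an equilateral triangle,
# so the range equals the baseline.
r = triangulate(0.3, math.pi / 3, math.pi / 3)
```

One such computation per camera scanline yields the 3-D contour of the stripe; sweeping the turret stacks these contours into the full 3-D image.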

In summary: