We are making steady progress towards our milestones for the Third and future Quarterly IPR meetings. We have sharpened the focus of our project based on the comments and suggestions we received from the Army during the Second Quarterly IPR meeting in January 1999.
We are looking forward to the visit to our lab by Douglas Gage and John Blitch next week. During this visit, we plan to introduce them to the members of our group, describe the experimental set-up we have assembled in our lab over the past three months, and demonstrate some of our software running on the robots, along with other algorithms still running in simulation.
In preparation for the experiments we are performing on our SuperScout robots, we have mounted sensors on them. On the SuperScout used in the map-building phase, we mounted the laser ranging sensor and an upward-pointing camera (for detecting artificial ceiling landmarks) on the top surface of the robot. On one of the SuperScouts used in the target-finding and target-tracking phases, we similarly mounted a pan-tilt camera and a landmark-detection camera. The main goals of the mechanical design of the mounts have been:
Concerning software development, we have refined our next-best-view technique for building two-dimensional maps so that it handles larger uncertainties in robot position. One of the main goals of the next-best-view algorithm is to minimize the amount of sensor data that must be processed to build the map. Handling larger position uncertainties, however, imposes a conflicting goal: successive range images of the environment must overlap enough that they can be matched against each other to remove the uncertainty in the robot's position. To meet these conflicting demands, we have implemented robust algorithms for processing and fusing the data returned by the laser range sensor. We are now experimenting with these algorithms on the SuperScout fitted with the laser range sensor.
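The trade-off between gathering new data and retaining enough overlap for scan matching can be illustrated with a small sketch. This is not our implementation; it is a minimal grid-world illustration, and all names (`visible_cells`, `next_best_view`, the disc sensing model, the 30% overlap threshold) are assumptions made for the example.

```python
def visible_cells(grid, pose, radius):
    """Cells within sensing radius of `pose` (crude disc model of the
    laser range sensor; occlusion is ignored for brevity)."""
    r, c = pose
    return {(i, j)
            for i in range(len(grid))
            for j in range(len(grid[0]))
            if (i - r) ** 2 + (j - c) ** 2 <= radius ** 2}

def next_best_view(grid, mapped, candidates, radius, min_overlap=0.3):
    """Pick the candidate pose that reveals the most unmapped cells,
    while keeping at least `min_overlap` of its view inside the region
    already mapped, so that successive range images can be matched to
    correct the robot's position estimate."""
    best, best_gain = None, -1
    for pose in candidates:
        view = visible_cells(grid, pose, radius)
        overlap = len(view & mapped) / len(view)
        if overlap < min_overlap:
            continue  # too little overlap: scan matching would be unreliable
        gain = len(view - mapped)  # amount of new information gathered
        if gain > best_gain:
            best, best_gain = pose, gain
    return best
```

Raising `min_overlap` makes localization by scan matching more reliable but forces smaller steps between views, so more range images must be processed; lowering it does the opposite. This is the conflict described above.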
We have also made encouraging progress on our target-finding technique for an aerial observer moving among a set of buildings in search of a target moving on the ground. A previous version of our algorithm restricted the observer to a fixed flight height. We have recently gained some key insights into the problem; incorporating them into the algorithm allows the observer to fly at different heights depending on where it is positioned in the environment. We are now implementing the new algorithm and expect a simulation to be ready soon.
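The idea of choosing a flight height per position can be sketched as follows. This is not the new algorithm itself, only a toy model under stated assumptions: buildings are heights on a grid, visibility is a straight sight line sampled at discrete points, and the names (`line_of_sight`, `min_observation_height`) are illustrative.

```python
def line_of_sight(heights, obs, h, tgt, samples=50):
    """True if the straight line from the observer at altitude h down to
    the target on the ground clears every building it passes over."""
    (ox, oy), (tx, ty) = obs, tgt
    for k in range(1, samples):
        t = k / samples
        x = ox + t * (tx - ox)
        y = oy + t * (ty - oy)
        ray_alt = h * (1 - t)  # sight line descends linearly to the target
        if heights[int(round(x))][int(round(y))] > ray_alt:
            return False       # a building blocks the view
    return True

def min_observation_height(heights, obs, tgt, h_max=50):
    """Lowest altitude from which the observer still sees the target --
    the quantity that lets the observer vary its height with position
    instead of flying at one fixed altitude."""
    for h in range(h_max + 1):
        if line_of_sight(heights, obs, h, tgt):
            return h
    return None  # target not observable from here below h_max
```

Over open ground the minimum height drops to zero, while behind a tall building it rises sharply, which is why a position-dependent height outperforms a single fixed one.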
We are completing the implementation of a basic target-tracking algorithm on our SuperScout robots equipped with pan-tilt cameras. The algorithm enables the tracker to maintain a constant distance from a moving target. In our experiments, the target is another robot identified by special markings, which we detect using computer vision software developed in the Stanford Robotics Lab. In the next stage, we will implement a target-tracking planner that uses the model of the environment constructed in the map-building phase to track the target more robustly.
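A constant-distance tracking rule of this kind can be sketched with a simple proportional law. This is not the controller running on our SuperScouts; the gains, the desired range, and the function name are all illustrative assumptions, and the range/bearing inputs stand in for whatever the marker-detection software reports.

```python
DESIRED_RANGE = 1.5   # metres to keep between tracker and target (assumed)
K_RANGE = 0.8         # forward-velocity gain (illustrative)
K_BEARING = 1.2       # turn-rate gain (illustrative)

def tracking_command(measured_range, measured_bearing):
    """Map one vision measurement (range in metres, bearing in radians)
    to a velocity command. Positive v drives forward; positive w turns
    toward a target on the left (sign convention assumed)."""
    v = K_RANGE * (measured_range - DESIRED_RANGE)   # close or open the gap
    w = K_BEARING * measured_bearing                 # keep target centred
    return v, w
```

A purely reactive rule like this loses the target behind obstacles, which is the limitation the planned map-based target-tracking planner is meant to address.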