We have significantly improved the algorithm that merges a new polygonal model obtained from a laser scan with an existing polygonal model. This step was a bottleneck in the map-building process: in a typical run, merging the fifth view with the polygonal model obtained from the first four views used to take 30 seconds; it now takes only 3 seconds.
We have improved the user-assisted navigation to the next best view by allowing the user to specify "way points" between view positions, that is, a suggested trajectory for the robot that the user considers safer than a straight line.
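A minimal sketch of how such a suggested trajectory could be represented and expanded into straight-line legs for the robot to drive. The names `ViewPlan` and `legs` are illustrative, not the project's actual interface:

```python
from dataclasses import dataclass

@dataclass
class ViewPlan:
    """A user-suggested trajectory to the next best view (hypothetical type)."""
    waypoints: list   # (x, y) way points the user inserted between views
    goal: tuple       # (x, y) of the next-best-view position

def legs(current, plan):
    """Expand the plan into the straight-line legs the robot drives in order,
    starting from its current position."""
    points = [current] + list(plan.waypoints) + [plan.goal]
    return list(zip(points, points[1:]))
```

With no way points the plan degenerates to the single straight line the user wanted to avoid; each way point splits it into safer legs.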
We have implemented a module that performs an automatic two-dimensional 360-degree scan of the environment using the 180-degree laser scanner. It does so by rotating the robot, performing several scans, and merging these scans and their safe regions using a specialized version of our general merging algorithm.
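The geometric core of this step is transforming each 180-degree scan into a common frame using the robot heading at which it was taken. The sketch below shows only that point transform, under assumed conventions (scan angles spanning -90 to +90 degrees in the sensor frame); the merging of safe regions in the actual system is done by the specialized polygon algorithm, not shown here:

```python
import math

def scan_to_points(ranges, start_angle=-math.pi / 2, fov=math.pi):
    """Convert a 180-degree range scan to (x, y) points in the sensor frame."""
    step = fov / (len(ranges) - 1)
    return [(r * math.cos(start_angle + i * step),
             r * math.sin(start_angle + i * step))
            for i, r in enumerate(ranges)]

def merge_rotated_scans(scans, headings):
    """Merge scans taken at different robot headings into one 360-degree
    point set expressed in the robot's initial frame."""
    merged = []
    for ranges, theta in zip(scans, headings):
        c, s = math.cos(theta), math.sin(theta)
        for x, y in scan_to_points(ranges):
            # Rotate each point by the heading at which its scan was taken.
            merged.append((c * x - s * y, s * x + c * y))
    return merged
```

Two scans taken 180 degrees apart thus cover the full circle around the robot.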
Since last month, we have new ways to mount the laser scanner for 3D scanning, and we have modified the 3D server accordingly to handle pseudo-orthogonal projection. The 3D server can now be instructed to perform a scan while translating: it coordinates the scanning and motion processes and corrects the bias introduced in the data because the two processes run concurrently.
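One plausible form of this bias correction, sketched under assumptions not stated in the report: the robot translates along one axis, each scan column carries a timestamp, and the motion process logs timestamped odometry. Shifting each column by the robot's interpolated displacement at its own timestamp removes the skew caused by scanning and translating at the same time:

```python
def interpolate_x(t, samples):
    """Linearly interpolate the robot's position along its translation axis
    from timestamped odometry samples [(t0, x0), (t1, x1), ...], sorted by time."""
    for (t0, x0), (t1, x1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return x0 + a * (x1 - x0)
    return samples[-1][1]

def debias_scan(columns, samples):
    """Each column is (timestamp, x, y, z) in the sensor frame; adding the
    robot displacement at that instant places all columns in one fixed frame."""
    x_start = interpolate_x(columns[0][0], samples)
    out = []
    for t, x, y, z in columns:
        dx = interpolate_x(t, samples) - x_start
        out.append((x + dx, y, z))
    return out
```

Without the correction, points scanned late in the sweep would be reported too close to the robot's starting position by exactly the distance traveled.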
Now that we have a strategy for two robots to pursue two targets in simulation, we have taken steps to make it work with the actual robots, including integration with the target-finding algorithm, which can be invoked if the targets first need to be found, or if they are later lost.
We have connected a new Sony EVI/D30 camera to a desktop machine so that it can act as another (stationary) observer. We have also updated our drivers for Linux 2.2.5.
To simulate a camera with a wide view angle for the target finder, we have implemented an automatic sweeping behavior using the camera's pan capability. The camera automatically switches to sweep mode whenever no target is visible.
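The sweeping behavior can be sketched as a small controller that pans back and forth between the mechanical limits while no target is seen, and holds still once one is. The pan limits and step size below are assumptions, not the camera's actual parameters:

```python
PAN_MIN, PAN_MAX = -90, 90   # assumed pan limits, in degrees
PAN_STEP = 10                # assumed pan increment per control cycle

class SweepController:
    """Pans the camera back and forth while no target is visible,
    simulating a wide field of view with a narrow one."""

    def __init__(self):
        self.pan = 0
        self.direction = 1

    def update(self, target_visible):
        """Call once per control cycle; returns the commanded pan angle."""
        if target_visible:
            return self.pan                    # hold still on the target
        self.pan += self.direction * PAN_STEP  # keep sweeping
        if self.pan >= PAN_MAX or self.pan <= PAN_MIN:
            self.pan = max(PAN_MIN, min(PAN_MAX, self.pan))
            self.direction *= -1               # reverse at the limit
        return self.pan
```

Calling `update` in the camera's control loop yields a triangular sweep pattern that reverses at each limit and freezes as soon as a target is detected.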
We have implemented the target finder for one real robot. The robot is given a path computed by the target-finding algorithm and follows it until it finds the target.
Finally, we have started putting everything together: the mobile robot follows the path given by the target-finding algorithm to look for the target. Meanwhile, the camera connected to the desktop machine sweeps the environment in front of it. If this camera detects the target, it calls the mobile robot to the rescue; the robot then positions itself so as to maximize the target's minimum time to escape.
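The coordination just described can be sketched as a message channel between the stationary observer and the mobile robot. Everything here is illustrative: the `Robot` stub, the queue-based channel, and `best_guarding_pose` (which stands in for the actual computation that maximizes the target's minimum time to escape) are all hypothetical names, not the system's real interfaces:

```python
import queue

class Robot:
    """Minimal stand-in for the mobile robot interface (hypothetical)."""

    def __init__(self):
        self.log = []            # poses the robot was commanded to

    def move_to(self, pose):
        self.log.append(pose)

    def best_guarding_pose(self, target):
        # Placeholder: the real system computes the pose maximizing the
        # target's minimum time to escape; here we just head for the target.
        return target

# Channel filled by the stationary camera's detector (hypothetical).
detections = queue.Queue()

def pursue(robot, search_path):
    """Follow the search path waypoint by waypoint; if the stationary camera
    has reported the target, break off and take up the guarding pose."""
    for waypoint in search_path:
        try:
            target = detections.get_nowait()
        except queue.Empty:
            robot.move_to(waypoint)   # no detection yet: keep searching
            continue
        robot.move_to(robot.best_guarding_pose(target))
        return True                   # target acquired
    return False                      # path exhausted without a detection
```

In the real system the observer and the robot run as separate processes; the queue stands in for whatever inter-process channel they use.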