Monthly Technical Report
March 1999
This report is divided into two parts. The first part discusses the
site visit on March 17 by John Blitch and Douglas Gage, and the second
describes our technical progress during March.
SITE VISIT
John Blitch and Douglas Gage visited our lab on March 17. During this
visit, we introduced the members of our group to them, described the
experimental set-up we have assembled during the past three months in
our lab, and demonstrated some of our software running on the robots,
as well as other algorithms running in simulation. After their visit,
they asked us to prepare videos demonstrating
- the range data gathered by the SICK laser as the robot moves,
- the different trade-offs in the next-best view algorithm for
map-building,
- how our target-finding algorithm in 2D adapts to different
properties of the sensor (specifically, omnidirectional sensors versus
sensors with cone vision), and
- our target-finding algorithm for an aerial observer.
We are currently preparing these videos. They will be ready shortly.
One of the concerns raised by John Blitch during the visit was how our
algorithms will generalize to three-dimensional environments. There
are two avenues for such generalizations.
- As we explained in our report on environment, perception, and
mobility models, we represent the environment by a set of 2D plans
(each plan corresponds to one floor of a building). The plans are
connected to each other by portals (portals model staircases,
elevators, and escalators). It is straightforward to extend our
next-best view algorithm to construct such 2-1/2D models, provided
that it is supplied with software to recognize staircases, elevators,
and escalators. Further, it is also simple to
extend our target-finding and target-tracking algorithms to work with
such representations.
- We can also extend our algorithms to deal with 3D obstacles on
each floor, where we model each obstacle as a prism. In the next stage
of the map-building task, we intend to construct 3D models of the
environment. In order to do so, we plan to mount the range sensor
vertically on the robot so that each scan of the sensor corresponds to
a vertical slice of the environment.
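This vertical-mounting scheme can be sketched briefly (the function
and parameter names below are illustrative, not our actual software):
each scan returns (angle, range) pairs in the sensor's vertical plane,
and the robot's 2D pose places that plane in the world, so a single
scan yields one vertical slice of 3D points.

```python
import math

def scan_to_3d(pose, scan):
    """Convert one vertically mounted range scan into 3D points.

    pose: (x, y, heading) of the robot on the floor plane.
    scan: list of (angle, range) pairs measured in the sensor's
          vertical plane; angle 0 points horizontally forward,
          pi/2 points straight up.
    Returns a list of (x, y, z) points in world coordinates.
    """
    x0, y0, heading = pose
    points = []
    for angle, r in scan:
        forward = r * math.cos(angle)   # distance along the heading
        z = r * math.sin(angle)         # height above the floor
        x = x0 + forward * math.cos(heading)
        y = y0 + forward * math.sin(heading)
        points.append((x, y, z))
    return points
```

As the robot drives forward, successive slices sweep through the
environment; accumulating them produces a 3D point cloud of the floor.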
We can also adapt our target-finding and target-tracking techniques to
work in an environment with such obstacles. In fact, our
target-finding algorithm for an aerial observer operates in precisely
such an environment! We plan to use the expertise developed in the
aerial-observer project to extend our target-finding and
target-tracking techniques to environments with prismatic obstacles.
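The representation described above can be sketched as a small set of
data structures (class and field names are hypothetical, chosen only
to illustrate the 2-1/2D model): floors hold prismatic obstacles, and
portals connect floors to each other.

```python
class Prism:
    """A 3D obstacle on a floor: a 2D polygonal footprint swept up to a height."""
    def __init__(self, footprint, height):
        self.footprint = footprint   # list of (x, y) vertices
        self.height = height         # vertical extent of the obstacle

class Floor:
    """One 2D plan of the building, with its obstacles and portals."""
    def __init__(self, name):
        self.name = name
        self.obstacles = []          # list of Prism
        self.portals = []            # portals leaving this floor

class Portal:
    """A connection between two floors: staircase, elevator, or escalator."""
    def __init__(self, kind, floor_a, pos_a, floor_b, pos_b):
        self.kind = kind             # "staircase", "elevator", or "escalator"
        self.ends = [(floor_a, pos_a), (floor_b, pos_b)]
        floor_a.portals.append(self)
        floor_b.portals.append(self)

# A two-storey building with one obstacle and a staircase:
ground, upper = Floor("ground"), Floor("upper")
ground.obstacles.append(Prism([(0, 0), (4, 0), (4, 2), (0, 2)], height=2.5))
stairs = Portal("staircase", ground, (5.0, 1.0), upper, (5.0, 1.0))
```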
TECHNICAL RESULTS
We are making steady progress toward our milestones for the Third
and subsequent Quarterly IPR meetings. In particular, we are
refining the implementation of our next-best view technique for
building 2D maps to handle larger uncertainties in robot positions. We
have implemented robust algorithms for processing and fusing the data
returned by the laser range sensor, and we are now tuning these
algorithms on actual range data collected by the sensor.
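The trade-off at the heart of this refinement can be sketched as a
simple viewpoint-scoring rule. The weighting, the overlap threshold,
and the function names below are assumptions for illustration, not the
implemented algorithm: a candidate view is rewarded for the new area
it would reveal, penalized for travel cost, and rejected outright if
it re-observes too little of the existing map to correct the robot's
position uncertainty.

```python
import math

def score_view(candidate, robot, new_area, overlap_len,
               min_overlap=1.0, weight=0.1):
    """Score a candidate next view for 2D map building.

    candidate, robot: (x, y) positions.
    new_area:     estimated unexplored area visible from the candidate.
    overlap_len:  length of already-mapped boundary the candidate
                  re-observes (needed to re-localize the robot).
    Returns -inf if the view offers too little overlap to localize
    safely; otherwise the expected gain discounted by travel cost.
    """
    if overlap_len < min_overlap:
        return float("-inf")        # cannot correct position uncertainty
    travel = math.dist(candidate, robot)
    return new_area - weight * travel

def next_best_view(candidates, robot):
    """Pick the (pos, new_area, overlap_len) triple with the best score."""
    return max(candidates, key=lambda c: score_view(c[0], robot, c[1], c[2]))
```

Raising the overlap threshold makes the map-builder more conservative
under large position uncertainty, at the cost of shorter, less
informative steps.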
We have also completed the implementation of a basic target-tracking
algorithm on our SuperScout robots equipped with pan-tilt
cameras. This algorithm enables the tracker to maintain a constant
distance from a moving target. We are currently implementing a more
sophisticated target-tracking planner that uses the model of the
environment constructed in the map-building phase in order to track
the target more robustly. Our algorithm computes how quickly the
target can move out of the tracker's visibility region and moves the
tracker so as to maximize the target's escape time.
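The escape-time idea can be sketched in a deliberately simplified
setting: assume the tracker's visibility region is just a disk of
radius R (ignoring occlusion by obstacles, which the actual planner
handles using the environment model), so the target escapes by
reaching the disk boundary, and its escape time is the remaining
distance divided by its maximum speed. The disk model and all names
below are assumptions for illustration.

```python
import math

def escape_time(tracker, target, sensing_radius, target_speed):
    """Shortest time for the target to leave the tracker's visibility disk."""
    d = math.dist(tracker, target)
    return max(sensing_radius - d, 0.0) / target_speed

def best_tracker_step(tracker, target, step, sensing_radius, target_speed):
    """Greedy one-step planner: try eight unit moves (and staying put)
    and return the tracker position that maximizes the target's
    escape time."""
    moves = [(0.0, 0.0)] + [
        (step * math.cos(a), step * math.sin(a))
        for a in (k * math.pi / 4 for k in range(8))
    ]
    return max(
        ((tracker[0] + dx, tracker[1] + dy) for dx, dy in moves),
        key=lambda p: escape_time(p, target, sensing_radius, target_speed),
    )
```

In this disk model the greedy step simply closes the gap to the
target; with occlusion, the same criterion also keeps the tracker away
from corners behind which the target could quickly disappear.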