Fortunately, mapping and localization on an existing map are well-researched ideas. Robotic evidence grids, described by Martin C. Martin and Hans Moravec, are a good way to generate a basic map of an area. Once two robots can communicate, they can attempt to use Monte Carlo Localization [Fox] to figure out where each robot is on the other's map, and how the two maps fit together. Monte Carlo Localization is not the only option for determining location, but other options, such as the state-based system used by Dervish [Nour], should be considered. The fact that the robots may communicate with each other without having ever been to the same place will add a layer of complexity to this.
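To make the evidence-grid idea concrete, the following is a minimal sketch of a log-odds occupancy update in the spirit of Martin and Moravec's grids. The class name, sensor probabilities, and grid interface are our own illustrative assumptions, not details from their system:

```python
import math

# Illustrative evidence grid: each cell accumulates sensor evidence in
# log-odds form, so repeated readings reinforce or cancel each other.
class EvidenceGrid:
    def __init__(self, width, height, p_hit=0.7, p_miss=0.4):
        # Assumed sensor model: p_hit for "obstacle seen", p_miss for "free seen".
        self.log_odds = [[0.0] * width for _ in range(height)]
        self.l_hit = math.log(p_hit / (1 - p_hit))
        self.l_miss = math.log(p_miss / (1 - p_miss))

    def update(self, x, y, hit):
        # Add evidence for (hit=True) or against (hit=False) occupancy.
        self.log_odds[y][x] += self.l_hit if hit else self.l_miss

    def probability(self, x, y):
        # Convert accumulated log-odds back to an occupancy probability.
        return 1.0 / (1.0 + math.exp(-self.log_odds[y][x]))

grid = EvidenceGrid(10, 10)
for _ in range(3):
    grid.update(2, 3, hit=True)   # three consistent "obstacle" readings
print(round(grid.probability(2, 3), 3))   # cell is now very likely occupied
```

Cells that have never been observed stay at probability 0.5, which is also a natural way to mark "room for further exploration" when comparing two robots' maps.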
Requiring the robots to work out a plan together, rather than using a central planning agency, sounds to us like a more interesting task. In addition, it gives us more opportunities to attempt to design intelligent behavior into the robot, having each individual robot decide how existing maps fit together and where there is room for further exploration. As Rodney Brooks noted in his paper, AI through Building Robotics, "The best domain for trying to build a true artificially intelligent system is a mobile robot wandering around an environment which has not been specially structured for it". Although we will not have a true artificial intelligence for this project, having the robots interact with each other and the hallways of the Libra Complex will give us some interesting challenges.
One of the important features for these robots is that they should not run into objects (people, trashcans, stairs, chair legs), and they should not attempt to go down stairs when they encounter them in the Libra Complex. Unlike the PolyBot, our robots are not designed to handle stairs or go over obstacles [Yim]. However, a design similar to the PolyBot in a loop configuration might be very useful for future versions in a less controlled environment.
We plan to follow the robots as they move around in order to ensure that the robots don't accidentally damage themselves, but it would be nice to have software controls that help the robots avoid obstacles. One interesting idea proposed by Ian Horswill is to have the robot only operate on certain types of floors. In his case, he was working with a robot that would only be traveling on carpets, but we could try to add code that avoids changes in floor color, or that recognizes hazardous colors.
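A floor-color guard of this kind could be as simple as the sketch below: stop if a downward-facing color sensor's reading drifts too far from the floor color seen at startup. The RGB readings, distance metric, and threshold are all our own assumptions, not part of Horswill's system:

```python
# Hypothetical floor-color guard. A large color change under the robot
# may indicate a stairwell edge or an unknown, possibly unsafe surface.

def color_distance(c1, c2):
    # Euclidean distance between two RGB readings.
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def floor_is_safe(reading, reference, threshold=60.0):
    # threshold is an assumed tuning parameter for "same floor".
    return color_distance(reading, reference) <= threshold

carpet = (120, 90, 60)                        # color sampled at startup
print(floor_is_safe((125, 85, 62), carpet))   # small drift: keep going
print(floor_is_safe((30, 30, 30), carpet))    # dark edge: stop
```

The threshold would have to be tuned for the actual lighting and carpet in the Libra Complex.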
In order to create the code for these robots, we need to decide what the overall method for controlling their actions will be. One option is a potential field model, where the direction the robot travels in is decided by the interaction of multiple vectors, with vectors that cause the robot to seek out places it hasn't been to, avoid obstacles, seek wireless access, and fulfill other goals. While this idea has been used successfully to control robots engaging in fairly complex behavior, such as herding ducks [Vaughan], we may wish to include some overriding instructions, such as: if something is moving near one of the robots, it should stop until that thing goes away. This would help the robot avoid running into people, without adding yet another thing to be considered when calculating the vector field.
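The combination of summed field vectors with a hard override can be sketched as follows. The particular vector sources and the motion-detection predicate are placeholders for whatever sensors the robots end up with:

```python
# Sketch of a potential-field controller with an overriding stop rule.

def sum_fields(vectors):
    # The commanded heading is simply the sum of all goal and
    # avoidance vectors acting on the robot.
    x = sum(v[0] for v in vectors)
    y = sum(v[1] for v in vectors)
    return (x, y)

def choose_motion(vectors, moving_object_nearby):
    # Override: freeze while something moves near the robot, rather
    # than folding moving people into the field calculation.
    if moving_object_nearby:
        return (0.0, 0.0)
    return sum_fields(vectors)

fields = [
    (1.0, 0.0),    # attraction toward unexplored space
    (-0.3, 0.2),   # repulsion from a nearby obstacle
    (0.1, -0.5),   # pull toward stronger wireless signal
]
print(choose_motion(fields, moving_object_nearby=False))  # summed heading
print(choose_motion(fields, moving_object_nearby=True))   # frozen: (0.0, 0.0)
```

Keeping the stop rule outside the field sum means a person walking by can never be "averaged away" by a strong attraction vector, which is exactly the failure the override is meant to prevent.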