For our second project we have decided to do a mapping task using the ER1 robot platform. We plan to use optical mice instead of wheel encoders to sense the robot's position more accurately.
In "Achieving Artificial Intelligence Through Building Robots," Rodney Brooks argues that robots should operate in the real world rather than in toy worlds; our robot will work in real-life environments. "The Polly System" by Ian Horswill describes how Polly navigates an office environment using a predefined map, whereas our robot will attempt to construct its map rather than rely on one given in advance. Unlike "Experiments in Automatic Flock Control," we will not use elaborate equations to define our robot's motion. Dervish, like Polly, required a predefined map, which our robot will not. We considered Monte Carlo localization, but we hope that the mice will provide tracking accurate enough that a probabilistic approach is unnecessary. "Probabilistic Robot Navigation in Partially Observable Environments" uses Markov models, which are probabilistic, unlike our approach to localization. "Exemplar-based Primitives for Humanoid Movement Classification and Control" is not directly related to our project.
We have obtained two USB optical mice for tracking the robot's motion and plan to use DirectInput to read input from both mice simultaneously. We also have C++ code, derived from the ER1 sample code and its API, that successfully drives the robot forward for five seconds. There is also sample code that reads the sensors. It should be straightforward to combine the driving and sensor code into a robot that drives forward while avoiding obstacles.