For our final project, we used a laser pointer and a webcam to detect the proximity of objects in front of the ER1 robot platform. We also used IR sensors to detect the proximity of the robot to the walls next to it. We used this information to make our robot speed down hallways as fast as possible without crashing into any walls (most of the time).
Rodney Brooks states in "Achieving artificial intelligence through building robots" that robots should work in the real world instead of toy worlds. Our robot works in real-life environments. "The Polly System" by Ian Horswill describes how Polly navigates an office environment with a predefined map. Our robot cares nothing for your silly maps. Unlike in "Experiments in Automatic Flock Control", we will not use ridiculous equations to define the motions of our robot. Dervish was much like Polly in that it needed a predefined map, which our robot does not. "Probabilistic Robot Navigation in Partially Observable Environments" uses Markov models, which are probabilistic, unlike our approach to robot locomotion. The paper "Exemplar-based Primitives for Humanoid Movement Classification and Control" is completely unrelated to our robot in any way. In "Coastal Navigation - Mobile Robot Navigation With Uncertainty in Dynamic Environments", the robot used laser range finders to detect its position. We will use a laser pointer and a camera to simulate the effects of a laser range finder and detect the robot's proximity to objects in front of it. In "Robot Evidence Grids", they discuss methods of building more confident maps, something our robot will accomplish. No, just kidding; our robot still doesn't care for your silly maps. In "Bayesian Integration in Sensory Motor Learning", they talk about probabilistic stuff that we don't do. That's not what James meant, really. Actually, that is what James meant, really. Just not in those words. Millibots are really small. Well, they're not that small. At any rate, in "Millibots" and "Walk on the Wild Side", they talk about robot collaboration. But since we're only programming one robot, that doesn't leave a whole lot of room for collaboration. In "RRT-Connect: An Efficient Approach to Single-Query Path Planning", they discuss ways to plan paths through interesting shapes. Again, our robot does not plan its path in any way.
In "If At First You Don't Succeed..." they talk about ways to recover from failure. When our robot fails, it smacks into a wall and doesn't recover.
We used a webcam to sense the position of the laser pointer's dot. The laser pointer was mounted at an angle to the camera, so the dot's position in the image changed with the distance of obstacles from the camera. This formed a rudimentary laser rangefinder. We used this information to guide our robot at the highest speed possible around the Libra complex without crashing into the walls, and we used IR sensors to center the robot in corridors.
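The idea above can be sketched in a few lines of Python. This is a minimal illustration, not our actual code: the frame format, the calibration constants (`BASELINE_CM`, `RADS_PER_PIXEL`, `ANGLE_OFFSET`), and the function names are all hypothetical stand-ins for values that would have to be measured on the real rig. Finding the dot is just picking the reddest pixel; turning its column into a distance is standard laser triangulation.

```python
import math

# Hypothetical calibration constants -- on a real rig these would be
# measured, not assumed.
BASELINE_CM = 10.0       # horizontal offset between laser and camera lens
RADS_PER_PIXEL = 0.0018  # angular width of one webcam pixel
ANGLE_OFFSET = 0.02      # residual laser mounting angle, from calibration


def find_dot_column(frame):
    """Return the image column of the reddest pixel (the laser dot).

    `frame` is a list of rows, each row a list of (r, g, b) tuples.
    """
    best_score, best_x = -1, 0
    for row in frame:
        for x, (r, g, b) in enumerate(row):
            score = r - max(g, b)  # how much redder than anything else
            if score > best_score:
                best_score, best_x = score, x
    return best_x


def distance_cm(dot_x, image_center_x=320):
    """Triangulate obstacle distance from the dot's pixel column.

    Because the laser sits a fixed baseline from the lens, the dot
    drifts across the image as the obstacle gets closer: larger angle,
    smaller distance.
    """
    theta = (dot_x - image_center_x) * RADS_PER_PIXEL + ANGLE_OFFSET
    return BASELINE_CM / math.tan(theta)
```

With these (made-up) constants, a dot far from the image center yields a small distance and a dot near the center a large one, which is the whole trick behind using a fixed laser pointer as a rangefinder.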
We successfully integrated the data from the IR sensors to fill our robot's blind spots, since we only had one laser pointer installed. Our robot avoids walls and navigates the corridors of the Libra complex at the ER1's top speed, which is about 70 cm/s. It can become confused by bright lights reflecting off the floor, but this doesn't usually cause it to crash into walls; it just turns around whenever it sees a light.
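The way the two sensor streams combine into a drive command can be sketched as a simple reactive rule. This is a hypothetical reconstruction, not the code we ran: the threshold, gain, and function signature are invented for illustration; only the 70 cm/s top speed comes from the report. The laser handles the one thing the IR sensors can't (obstacles dead ahead), and the IR difference steers the robot toward the roomier side of the corridor.

```python
TOP_SPEED_CM_S = 70.0   # the ER1's maximum speed, per our measurements
STOP_DISTANCE_CM = 50.0 # hypothetical braking threshold
TURN_GAIN = 0.1         # hypothetical steering gain


def drive_command(laser_dist_cm, ir_left_cm, ir_right_cm):
    """Return (forward speed, turn rate) from the fused sensor readings.

    The single laser covers straight ahead; the side IR sensors keep
    the robot centered between the corridor walls.
    """
    if laser_dist_cm < STOP_DISTANCE_CM:
        # Obstacle dead ahead: stop driving and turn away from it.
        return 0.0, 1.0
    # Clear ahead: go flat out, steering toward whichever wall is farther.
    turn = TURN_GAIN * (ir_right_cm - ir_left_cm)
    return TOP_SPEED_CM_S, turn
```

A purely reactive rule like this needs no map, which matches the rest of the report: the robot never plans, it just keeps moving until something gets too close.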
The biggest problem with our robot is that it has only one laser. This leaves it practically blind in all but one direction, because the IR sensors are not very good. It can miss a corner jutting out into its path, crash into it, and still not realize it has hit anything. More lasers would fix this problem.
Here is a screenshot of our software detecting the laser pointer's dot. We were unable to get a video of the robot in action because no laptop was available when we made this report.