Team Reaver Barbie

Jonathan Beall - Robin Schriebman - Whitney Buchanan

Ball Tracking

Since our Barbie Jeep project will ultimately be a path-following robot operating in an unknown environment that it is neither required to map nor to return to previous positions in, implementing MCL on it did not make much sense. (Not to mention that our robot still doesn't steer.) Instead, we chose to implement what may be the first step toward a path-following algorithm. Our setup used an iSight camera on a laptop, with a human as the chassis.

Overview

Goal

We used an iSight camera for input, a MacBook Pro for processing, and its built-in speech synthesis for feedback. Our goal was to use this setup to allow a blind or blindfolded user to navigate to a bright yellow beach ball.

Process

We began with the OpenCV example provided by Professor Dodds. After compiling OpenCV for OS X, we cleaned up the provided code, which contained a good deal of legacy cruft that we were able to remove or refactor. Because the iSight's optics are higher quality than those of the webcam the sample code was designed for, we also had to remove the undistortion code, which was now distorting our images. We also fixed the RGB to HSV conversion and changed the HSV bounds checking to properly handle negative hue values (needed for selecting reds, whose hue range wraps around zero) as well as the -1, or undefined, hue value used for pure white. We then tested to determine the range of colors that would capture our object of interest, in this case a bright yellow smiley-face ball, and masked the appropriate portion of HSV colorspace as in the example.
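To make the hue issues concrete, here is a minimal sketch of the masking step (not our exact code) using OpenCV's C-era API. For 8-bit images OpenCV scales hue to 0-179, so a red range such as [-10, 10] has to become the union of [170, 179] and [0, 10]; the function name and thresholds below are illustrative.

    #include <cv.h>

    // Build a binary mask of the pixels whose HSV values fall in the given
    // range. loH may be negative to express a hue range that wraps past
    // zero (reds). For 8-bit images OpenCV scales hue to 0..179.
    void maskHsvRange(IplImage* bgr, IplImage* mask,
                      int loH, int hiH, int loS, int hiS, int loV, int hiV)
    {
        IplImage* hsv = cvCreateImage(cvGetSize(bgr), 8, 3);
        cvCvtColor(bgr, hsv, CV_BGR2HSV);

        if (loH >= 0) {
            // Simple case: one contiguous hue band (e.g. yellows).
            cvInRangeS(hsv, cvScalar(loH, loS, loV, 0),
                            cvScalar(hiH, hiS, hiV, 0), mask);
        } else {
            // Wraparound case (reds): the union of two bands, one on
            // each side of hue zero.
            IplImage* lowBand = cvCreateImage(cvGetSize(bgr), 8, 1);
            cvInRangeS(hsv, cvScalar(180 + loH, loS, loV, 0),
                            cvScalar(179,      hiS, hiV, 0), mask);
            cvInRangeS(hsv, cvScalar(0,   loS, loV, 0),
                            cvScalar(hiH, hiS, hiV, 0), lowBand);
            cvOr(mask, lowBand, mask, NULL);
            cvReleaseImage(&lowBand);
        }
        // A positive loS bound also handles the "undefined hue" problem:
        // pure whites and grays have near-zero saturation, where hue is
        // meaningless, so they never enter the mask.
        cvReleaseImage(&hsv);
    }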

At this point we integrated cvBlobsLib to pick out the distinct blobs in our masked image. We filtered out blobs smaller than a certain area threshold, eliminating noise, small stray bits of yellow, and the background from consideration (and enabling us to detect the absence of a ball). From the remaining blobs we chose the largest and assumed it was the object to follow.
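For concreteness, here is a hedged sketch of that blob-selection step. It follows cvBlobsLib's usual interface (exact class and method signatures vary slightly between releases of the library), and the area threshold is a placeholder.

    #include <cv.h>
    #include "BlobResult.h"

    // Given the binary mask from the color step, find the bounding box of
    // the largest sufficiently large blob. Returns false when nothing
    // ball-sized is yellow, which is how "no ball in view" is detected.
    bool findBallBlob(IplImage* mask, CvRect* ballBox, double minArea)
    {
        // Extract connected components from the binary image. (The meaning
        // of the third argument differs between cvBlobsLib releases.)
        CBlobResult blobs(mask, NULL, 0);

        // Discard blobs below the area threshold: camera noise and small
        // stray patches of yellow.
        blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, minArea);

        if (blobs.GetNumBlobs() == 0)
            return false;

        // Assume the largest surviving blob is the ball.
        CBlob* best = blobs.GetBlob(0);
        for (int i = 1; i < blobs.GetNumBlobs(); ++i)
            if (blobs.GetBlob(i)->Area() > best->Area())
                best = blobs.GetBlob(i);

        *ballBox = cvRect((int)best->MinX(), (int)best->MinY(),
                          (int)(best->MaxX() - best->MinX()),
                          (int)(best->MaxY() - best->MinY()));
        return true;
    }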

By checking the position of this blob we could determine the direction the user should move to approach the ball. Our output was a direction in which the human should move the laptop, expressed through voice commands. We fed this output through a Python program that limited the rate of the spoken commands so they would not overlap. In this manner we were able to accurately follow a yellow ball (assuming there were no other large objects that registered as yellow).
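The decision logic itself is simple. Below is an illustrative sketch; our actual rate limiter was a separate Python script, but the same idea can be folded into the C++ loop using OS X's `say` command. The thresholds and command words here are placeholders.

    #include <cv.h>
    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    // Map the ball's bounding box to a movement command for the user.
    const char* chooseCommand(CvRect ball, int frameWidth, bool ballSeen)
    {
        if (!ballSeen)                   return "scan";   // no ball in view
        int cx = ball.x + ball.width / 2;                 // blob center, x
        if (cx < frameWidth / 3)         return "left";
        if (cx > 2 * frameWidth / 3)     return "right";
        if (ball.width > frameWidth / 2) return "stop";   // ball fills view
        return "forward";
    }

    // Speak at most one command every minGap seconds so the spoken
    // directions never overlap (OS X's built-in `say` does the talking).
    void speak(const char* cmd, double minGap)
    {
        static time_t lastSpoken = 0;
        time_t now = time(NULL);
        if (difftime(now, lastSpoken) < minGap)
            return;                                       // still talking
        lastSpoken = now;
        char buf[64];
        snprintf(buf, sizeof(buf), "say %s &", cmd);      // '&': don't block
        system(buf);
    }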

Media

The Setup

Successful identification

Misidentification

Tracking in Action (video)

The code

Results

Conclusions

Overall, the algorithm worked quite well. The biggest challenge was lighting. Lighting changes the absolute color of objects and can make them appear yellower than they are. In some environments, our system would identify an open door or the brass top of a trash can as the biggest yellow object instead of the yellow beach ball we were using as a target. In the halls of the Libra Complex, the lighting was such that the wooden chair rails, and even the walls themselves, were yellow enough to cause problems. Lighting can also change the appearance of the beach ball itself: to capture the full range of colors the ball could take on, it was necessary to also capture many other objects that to our eyes do not appear yellow, but to a camera have the property of "yellowness".

Future Goals

We plan to modify this code to control our robot directly and to follow a path of orange (or some other color) cones (or similar objects). From there we hope to further modify the code to have the robot follow a single path object. This will form the framework of our vision algorithm.