Lard Lad

Andrew McDonnell and Patrick Vinograd


Patrick: "Could I get nineteen strawberry donuts?"
Donut Guy: "Nineteen? Fuuuuck."

Introduction

As we said in our proposal for the final project, our dream goal was something with donuts. One day in bio I suddenly had a wonderfully fiendish idea: a robot that taunted people with a donut but didn't let them get it. While discussing the idea with Professor Dodds, we refined the plan into something workable. Using the framework from our last project, we could have an untethered mobile robot carry a donut and avoid donut predators. The robot would be visually monitored and remotely controlled by a server using a digital camera and a transmitter. Patrick had already developed a serial communications package that would allow our vision code to talk to a handyboard, and he had purchased the hardware to build an infrared communications array. It seemed that everything we needed to build and implement our idea was in place.

The Robot

After our experience building our extinguisher robot, we decided that an alternative method of locomotion would be preferable. The Char-Jar extinguisher team seemed to have had relative success with their tank treads (and I think they look cool), so we explored using them on our robot. After a lot of experimentation, we came up with a chassis that was much more robust than our extinguisher. We chose a belt drive between the motors and axles rather than a chain, since it was smaller and less cumbersome. With some tweaking, the robot moved quite well, and we went on to the control and communication packages.

Our initial plan was to track the robot from the side, but this proved problematic, so we moved the identifying plate to the top of the robot. While this made life easier for the vision code, it made it impossible to actually carry a donut. This was disappointing, but considering the donuts we bought were incredibly sticky, it was probably for the best. Plus we got to eat the donuts. This was integral to the success of the project. In any case, what emerged is pictured above and here. The custom circuitry for IR communication sits underneath the recognition plate, as does the handyboard itself. The top of the robot was designed and built in tandem with the communication array so that they could accommodate each other.

The receiver side of the communications system is rather straightforward. A Panasonic PNA4613 IR receiver picks up modulated infrared signals; we hooked its output into the serial input of the handyboard, and that was that. The receiver protrudes through the identifying plate, allowing for the best possible reception. The transmission system is somewhat more complicated. A slave handyboard receives serial commands from the computer and retransmits them to the communications circuit. The circuit uses a timer to generate a 40kHz carrier which modulates the serial signal. This modulated signal is sent through an infrared emitter, which is powered from the handyboard's motor output to get additional current. That additional current led to the untimely demise of two emitters - it seems the maximum rated current applies only to transient pulses, not continuous drive. In any case, the transmitter was mounted atop the camera, and both point down at the robot to make vision and transmission work as well as possible.
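For the curious, here is a minimal sketch of what the PC side of that link might look like, using standard POSIX serial calls. The device path, the 9600 baud setting, and the single-byte command code are assumptions for illustration; the real command set lives in the serial package mentioned in the introduction.

    /* Hypothetical PC-side sender: open the serial port to the slave
     * handyboard and push a one-byte command down the line.  The slave
     * board then relays the byte over the 40kHz IR link. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    static int open_link(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0) return -1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);            /* raw bytes, no line discipline */
        cfsetispeed(&tio, B9600);   /* assumed handyboard baud rate */
        cfsetospeed(&tio, B9600);
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }

    int main(void)
    {
        int fd = open_link("/dev/ttyS0");      /* placeholder device name */
        if (fd < 0) { perror("open"); return 1; }

        unsigned char cmd = 'F';               /* made-up code for "forward" */
        write(fd, &cmd, 1);                    /* slave relays this over IR */

        close(fd);
        return 0;
    }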

Robot Code

Vision

The vision algorithms for this project are based on those from Predator. The algorithm finds regions of particular colors in each frame and tags them. Though we did not incorporate windowing to reduce noise (the color recognition is quite tight as it is), we did eliminate the problems caused by the bugs we noticed last time: the control code now ignores frames in which no regions are identified. Before this modification, the entire frame was used as the region, which made it impossible to precisely track the robot and predator. We patterned our robot's recognition plate after that of Richard Vaughn's duck herding robot. With two different colors on the front and back of the robot, it is easy to calculate the robot's heading in the frame relative to the position of the predator. We used yellow, red, and green for this project, since we discovered last time that the carpet in the research lab is annoyingly blue. Skin tones, being somewhere between yellow and red, posed a problem during testing, so we built an extension to the hand that may or may not resemble a spanking apparatus.
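To give a flavor of the geometry, here is a stripped-down sketch of the heading calculation, assuming the region finder has already produced pixel centroids for the front patch, the rear patch, and the hand. The Point struct and function names are ours for illustration, not the actual tracker's interface.

    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y; } Point;

    /* Robot heading in the image plane: the direction from the rear-color
     * centroid to the front-color centroid. */
    static double robot_heading(Point front, Point rear)
    {
        return atan2(front.y - rear.y, front.x - rear.x);
    }

    /* Signed angle from the robot's heading to the predator, wrapped
     * into (-pi, pi]. */
    static double angle_to_predator(Point front, Point rear, Point predator)
    {
        Point center = { (front.x + rear.x) / 2.0, (front.y + rear.y) / 2.0 };
        double to_pred = atan2(predator.y - center.y, predator.x - center.x);
        double diff = to_pred - robot_heading(front, rear);
        while (diff > M_PI)   diff -= 2.0 * M_PI;
        while (diff <= -M_PI) diff += 2.0 * M_PI;
        return diff;
    }

    int main(void)
    {
        Point front = {120, 80}, rear = {100, 80}, hand = {160, 140};
        printf("angle to predator: %.2f rad\n",
               angle_to_predator(front, rear, hand));
        return 0;
    }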

After isolating the regions that correspond to the robot and hand, the program determines the angle between the robot's heading and the predator. If the predator is sufficiently far away, the robot is instructed to rotate so that its heading is parallel to the predator's approach (in whichever direction achieves this more quickly). When the predator is too close for comfort, the robot is instructed to run forward or back, whichever direction takes it farther from the predator. Once the robot is again out of threat range, it resumes turning parallel to the predator's approach.
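Roughly, the decision boils down to something like the sketch below. The threat distance, the deadband, the sign convention for the angle, and the stubbed-out send_command are placeholders, not the exact values and names from our control code.

    #include <math.h>
    #include <stdio.h>

    enum command { TURN_LEFT, TURN_RIGHT, FORWARD, BACKWARD, STOP };

    /* Stub: the real version pushed a byte down the serial/IR link. */
    static void send_command(enum command c)
    {
        static const char *names[] = { "left", "right", "forward", "backward", "stop" };
        printf("command: %s\n", names[c]);
    }

    #define THREAT_DIST 60.0   /* pixels; placeholder threshold */

    /* angle: signed angle from robot heading to predator, radians in (-pi, pi],
     *        with positive meaning the predator is to the robot's left (assumed).
     * dist:  pixel distance from robot to predator. */
    static void decide(double angle, double dist)
    {
        if (dist > THREAT_DIST) {
            /* Far enough away: turn until the heading is parallel to the
             * predator's approach, i.e. the predator ends up dead ahead
             * (angle ~ 0) or dead astern (angle ~ +/- pi), whichever is closer. */
            double off = fmin(fabs(angle), M_PI - fabs(angle));
            if (off < 0.15)
                send_command(STOP);                       /* close enough to parallel */
            else if (fabs(angle) < M_PI / 2.0)
                send_command(angle > 0 ? TURN_LEFT : TURN_RIGHT);   /* aim ahead */
            else
                send_command(angle > 0 ? TURN_RIGHT : TURN_LEFT);   /* aim astern */
        } else {
            /* Too close for comfort: scoot along the approach line, away from the hand. */
            send_command(fabs(angle) < M_PI / 2.0 ? BACKWARD : FORWARD);
        }
    }

    int main(void)
    {
        decide(0.8, 100.0);   /* far away, off-parallel: expect a turn */
        decide(0.1, 30.0);    /* predator close and ahead: expect backward */
        return 0;
    }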

Vision Code

Evaluation

Much of the challenge of this project was integrating the several subsystems, and many of the problems we encountered were technical details not particularly related to our design. Our testing indicated that the correct commands were being sent to the robot; most of the time it did approximately what one would expect, generally turning away from the "hand" and scooting away when it got too close. There was some erratic behavior, but we suspect the cause was transmission error. Earlier testing of the infrared system showed data errors at the ranges we were typically using, so in all likelihood the same thing happened during our test runs. In hindsight, a radio-based system that was not restricted to line of sight would have been much more robust. The untimely destruction of all but one of our infrared emitters made us hesitant to push the equipment to its maximum range.

We also had the same issues with our video capture system as in the Predator project. The low frame rate made it hard to figure out whether the vision and control systems were synchronized; implementing a better frame capture system would be a good time investment. As noted above, we were able to eliminate some of the problems with frames not being processed correctly.

Another potential improvement would have been to do something other than color-based tracking. In particular, a counter-based system might have worked better for detecting the hand. Unfortunately, limitations on equipment, and more importantly time, prevented us from trying such a system. While the color-based system is kind of a hack in terms of tracking the hand, we think it was a reasonable choice for tracking the robot. After all, realistically, how often does one see such outlandish colors as red, or green, or yellow for god's sake?!

Accumulated Wisdom