Team


Vision Assignment

A zip file of the code.

Summary

We first have the program look at all 81 cards in the deck and compute a few statistics for each one. For each of the three colors (red, green, and blue, using appropriate HSV or RGB thresholds), we record the number of columns that contain any pixels of that color and the average number of such pixels per column among the columns that have any. Doing the same for rows yields twelve statistics per card. In addition, we record the average number of pixels per row and per column that show up as any color at all (using an intensity threshold), adding two more statistics for fourteen in total. All of these stats are stored before any of the cards on the table are examined.
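Here is a minimal sketch of one of these statistics, assuming a simple in-memory RGB image and made-up threshold values (the real program may threshold in HSV instead); the types and helper names are illustrative, not our actual code.

    #include <array>
    #include <vector>

    struct Pixel { unsigned char r, g, b; };
    using Image = std::vector<std::vector<Pixel>>;   // image[row][col]

    // Hypothetical threshold test: does this pixel count as the given
    // color (0 = red, 1 = green, 2 = blue)? The cutoffs are made up.
    bool isColor(const Pixel& p, int color) {
        const unsigned char hi = 150, lo = 100;
        switch (color) {
            case 0:  return p.r > hi && p.g < lo && p.b < lo;
            case 1:  return p.g > hi && p.r < lo && p.b < lo;
            default: return p.b > hi && p.r < lo && p.g < lo;
        }
    }

    // For one color: (number of columns containing it, average colored
    // pixels per such column). The same loop transposed over rows gives
    // the row statistics.
    std::array<double, 2> columnStats(const Image& img, int color) {
        int colsWithColor = 0, totalPixels = 0;
        for (size_t c = 0; c < img[0].size(); ++c) {
            int count = 0;
            for (size_t r = 0; r < img.size(); ++r)
                if (isColor(img[r][c], color)) ++count;
            if (count > 0) { ++colsWithColor; totalPixels += count; }
        }
        double avg = colsWithColor ? double(totalPixels) / colsWithColor : 0.0;
        return { double(colsWithColor), avg };
    }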

The cards on the table are examined in exactly the same way, with the same statistics. They are then compared to the 81 cards in the deck to determine which card is which. The identities of the deck cards are hard-coded into the program, each represented by a 4-digit int. Each digit ranges from 0 to 2 and encodes one of the card's four attributes. This makes it easy to find sets once all the table cards are identified: the program extracts each digit of a card's identifying number and compares it to the corresponding digits of the other cards. When, in every one of the four digit positions, the digits of the three cards sum to 0, 3, or 6 (that is, a multiple of 3), the three cards form a set.
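The set test itself is tiny; here is a sketch taken directly from that description. (Note that a card like 0222 has to be stored as the int 222, since a leading 0 would make a C++ literal octal.)

    // Three cards form a set exactly when, in every digit position,
    // the digits sum to a multiple of 3 (0, 3, or 6).
    bool isSet(int a, int b, int c) {
        for (int place = 0; place < 4; ++place) {
            int sum = a % 10 + b % 10 + c % 10;  // same digit of each card
            if (sum % 3 != 0) return false;      // must be 0, 3, or 6
            a /= 10; b /= 10; c /= 10;           // move to the next digit
        }
        return true;
    }

For example, isSet(1201, 2210, 222) returns true: the four digit sums are 3, 6, 3, and 3.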

I am aware that this process seems a little like cheating, but I was under the impression that we would not be playing with the images that we were given. Right now, when our program identifies a card, the closest match it finds happens to agree exactly, on every statistic, with one of the cards in the deck, because the table images come from the same photos. The program only ever looks for the closest match, though, so if we were to play on a deck of cards whose pictures it had never seen before, it still might do rather well. I was hoping to try this out, but it would have been rather time consuming to take another set of Set photos.
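A sketch of that closest-match search, assuming each card is reduced to its 14-number stat vector; the squared Euclidean distance here is an illustrative choice, not necessarily the metric our program uses.

    #include <vector>
    #include <array>

    using Stats = std::array<double, 14>;

    // Squared Euclidean distance between two stat vectors.
    double statDistance(const Stats& a, const Stats& b) {
        double d = 0.0;
        for (size_t i = 0; i < a.size(); ++i)
            d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    // Return the index of the deck card whose statistics are nearest the
    // table card's. With the original photos this match is exact, but the
    // same search would pick the closest deck card for unseen images.
    int identify(const Stats& tableCard, const std::vector<Stats>& deck) {
        int best = 0;
        for (size_t i = 1; i < deck.size(); ++i)
            if (statDistance(tableCard, deck[i]) < statDistance(tableCard, deck[best]))
                best = int(i);
        return best;
    }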


Second Project: XPort

Introduction

For our second project, we worked on creating an extinguisher robot using a Game Boy Advance, an XPort from Charmed Labs, and a holonomic Lego base. After some work, we had to scale this goal back to building a robot that holonomically follows light.

Progress

We started our project by playing with the XPort and exploring its capabilities through its demo programs. The package included three omniwheels, two IR sensors, and a Bluetooth module, among other things. We considered building a holonomic robot out of the three omniwheels, but decided to order a fourth instead. Building a Lego base that held three omniwheels exactly 120 degrees from one another would have been an unnecessary challenge, and the example software supported holonomic kinematics for four wheels, not three.

While we waited for the fourth omniwheel to arrive, we played with the XPort's differential-drive example code, using two omniwheels as the drive wheels and the third as a passive stabilizer. We took the example code for a wandering differential-drive robot and tweaked it to our needs. Here are some videos of the results:

After receiving the fourth omniwheel, we were able to build a Lego base for the robot and use the built-in kinematics for a four-wheeled holonomic robot. Such a robot is overconstrained: three wheels are the minimum for holonomic behavior, so a fourth is redundant. The extra wheel gives the robot more power, but it also makes it possible for the wheels to work against one another. One must be extra careful to get the kinematics correct on an overconstrained robot, since mistakes lead not only to incorrect behavior but to erratic wheel spinning. With three wheels, incorrect kinematics never make the wheels fight each other; the robot moves around just fine, only perhaps in the wrong direction.
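As a concrete illustration of why the kinematics are delicate, here is the textbook velocity mapping for one common four-omniwheel layout (wheels at 0, 90, 180, and 270 degrees around the base, rollers tangential). The configuration and names are assumptions for illustration, not necessarily our base or the xrc library's implementation.

    #include <cmath>
    #include <array>

    // Wheel surface speeds for a desired body velocity (vx, vy) and spin
    // rate w, with each wheel at distance R from the robot's center.
    std::array<double, 4> wheelSpeeds(double vx, double vy, double w, double R) {
        const double angle[4] = {0.0, M_PI / 2.0, M_PI, 3.0 * M_PI / 2.0};
        std::array<double, 4> v{};
        for (int i = 0; i < 4; ++i)
            v[i] = -std::sin(angle[i]) * vx + std::cos(angle[i]) * vy + R * w;
        return v;
    }

Because four wheel speeds encode only three degrees of freedom, most combinations of wheel speeds are mutually inconsistent; a single sign or angle error commands an inconsistent combination, so the wheels fight and slip, which matches the erratic spinning described above.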

The first thing we did once our four-wheel base was built was to test the example code for holonomic behavior, which simply had the robot translate and rotate at the same time, generating a "frisbee" behavior. This example program seemed to have some problems, though it was difficult to tell whether the problem was in the program or in the wheels. The robot would generally spin its wheels on the floor outside the lab when running this demo. There was no problem when the robot only translated or only rotated, but it always exhibited odd behavior when trying to do both at once. We found early on that our four wheels were not all mounted with the same rotation direction, but the problems persisted even after that fix.

To create an extinguisher we would need light sensors, so we obtained two and fitted them with Lego enclosures to block out as much ambient light as possible, then figured out how to read them in our code so that the robot could react to them. We decided to hold off on the IR sensors until we had a working holonomic robot that reacted to light. In the end, we barely got that far, so the IR sensors were never incorporated into our final robot, although we did test them early on with success. We also found that with all four motor outputs devoted to the wheels, none remained for a fan. At this point, we revised our goal to creating a robot that follows light.

Our first task was to get the robot to respond to light. Since we were having trouble with simultaneous translation and rotation, we ran the two tasks separately, as a routine the robot could repeat: rotate in a circle, move a little toward the light, and repeat. First, we had the robot rotate indefinitely while remembering the greatest light intensity it had recently seen and the rotation position at which it saw it. Next, we had the robot rotate 360 degrees and then return to the orientation at which it had seen the most light. Finally, we had the robot stop after the first full rotation and move holonomically in the direction of the most light.
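Below is a rough sketch of that rotate-then-step routine. Every function and constant in it (getHeading, readLightSensor, setVelocity, sleepMs, SLOW_SPIN, STEP_SPEED) is a hypothetical placeholder standing in for whatever the XPort/xrc API actually provides; it shows the control flow, not our actual code.

    #include <cmath>

    // Hypothetical placeholders, not the real XPort/xrc API:
    double getHeading();        // accumulated rotation in radians
    double readLightSensor();   // current light intensity
    void   setVelocity(double vx, double vy, double w);  // body-frame command
    void   sleepMs(int ms);

    const double SLOW_SPIN  = 0.5;   // made-up rotation rate
    const double STEP_SPEED = 0.2;   // made-up translation speed

    // One cycle of the routine: spin a full turn while tracking the
    // brightest heading, then take a short holonomic step toward it.
    void seekLightOnce() {
        double bestIntensity = -1.0, bestHeading = 0.0;
        double start = getHeading();
        setVelocity(0.0, 0.0, SLOW_SPIN);              // rotate in place
        while (getHeading() - start < 2.0 * M_PI) {    // one full rotation
            double intensity = readLightSensor();
            if (intensity > bestIntensity) {
                bestIntensity = intensity;
                bestHeading = getHeading();            // remember brightest spot
            }
            sleepMs(10);
        }
        double rel = bestHeading - start;              // brightest direction,
        setVelocity(std::cos(rel) * STEP_SPEED,        // in the body frame
                    std::sin(rel) * STEP_SPEED, 0.0);  // step toward the light
        sleepMs(500);
        setVelocity(0.0, 0.0, 0.0);                    // halt before repeating
    }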

That was as far as we got with our robot; in fact, in our final robot even the above behavior no longer worked. We had been using these tasks as tests to build upon, and after giving up on expanding them we tried to return to the above behavior and failed for unknown reasons. In hindsight, we should have saved a copy of every version of the code that exhibited good behavior, whether or not we intended to build on it.

During the many hours between successfully achieving the above behavior and deciding to give up on improvements, we were trying to get the robot to rotate and translate at the same time. While we may briefly have had this behavior working, it was never smooth and always looked like something was wrong. Our final goal at this point was to have the robot continually rotate while always translating in the direction in which it had recently (within the last full rotation) seen the most light. This could be used in a standard extinguisher maze such as the one Janna used, but with a cover on top and a strong light source at the entrance; the robot's objective would be to find its way out of the maze by following the light.
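For completeness, here is a sketch of the compensation that smooth simultaneous rotation and translation requires, reusing the hypothetical placeholders from the previous sketch: since the velocity command is in the body frame, the fixed world-frame light direction has to be counter-rotated by the current heading on every control step.

    // Called repeatedly: keep spinning while translating toward a fixed
    // world-frame light heading (radians). Placeholders as declared above.
    void spinAndFollow(double lightHeading) {
        double rel = lightHeading - getHeading();  // light direction in body frame
        setVelocity(std::cos(rel) * STEP_SPEED,
                    std::sin(rel) * STEP_SPEED,
                    SLOW_SPIN);                    // never stop rotating
    }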

Here is a zip file with some of the examples that came with the XPort. This code probably won't run as-is, since the rest of the install directory is excluded. Most of the examples we used can be found in xrc/robot1/. In particular, from this directory, we used the wander example to create the wandering differential-drive robot in the first half of the project, and the frisbee example to create the holonomic behavior of our final robot. I don't believe either example survives anywhere in this directory tree in its original form, but both can be found on the CD that came with the XPort. The majority of the wander example in this zip file was already written; we only tweaked some parameters and made other minor changes. Nearly all of the frisbee example, however, is our work: the only commands in the original were to translate a certain distance and rotate a certain distance at the same time, creating a frisbee effect.

Here is a summary of what we know about the code for future users. We also found a useful article on using the XPort.


First Project: ER1

Introduction

We plan to work on the default "Silicon Mudder" project for Lab 1 (unless we get a brilliant idea really soon), developing a specific task for the ER1 that uses at least computer vision and some learning algorithm.

Progress and Performance Results


References