CS154: Robotics
The B-Team

Our project for the Spring 2005 semester will be to make a "Big Brother Bot" (a.k.a. sentinel). This mobile platform will be equipped with a camera, which it will use to recognize and track people. We will spend the first half of the semester getting the platform (an Evolution-class robot) up and running, and the second half implementing the motion tracking and recognition. This website will be our central location for progress on the project.

Why make a robot like this? Besides furthering what is, I am sure, a common goal of a futuristic dystopia in which everyone is under constant surveillance by secret police with unabridged powers, the systems involved pose an interesting challenge. Recognition of individuals is not an easy task; motion tracking may be slightly easier in a static environment but isn't trivial either (and we don't even want to think about tracking in a dynamic environment...yet). Moving to follow people seems to be a natural extension of the concept. Conceivably, a sort of robotic guard dog could be made from this project: one that knows who its masters are and tries to chase off others. So there are practical applications. It must be admitted, however, that we're primarily doing this because it seemed like something interesting and fun to do for our Robotics class.

We decided to use the Evolution platform for our "Big Brother Bot." This platform was chosen over the possibility of using some sort of mobile blimp because the Evolution seems more feasible. So far our only difficulty has been that the software did not run properly with our platform: we could not get our robot moving. Professor Dodds is working on this problem so that we can continue work on our robot.

For the second half of the semester, we will implement our "Big Brother Bot" idea. Hopefully, we can get it to recognize people based on the color of their shirt or, if we get really ambitious, by their face. By 4/8, we hope to be able to find people, i.e. find a big blob of localized color instead of a long horizontal line of color. We hope this blob will be someone's t-shirt or something of that sort. Once we can find people, we hope to follow them around in the tunnels. By 4/29, we hope to be able to find a person and follow them by keeping the blob of color more or less in the center of the robot's view. We may also incorporate MCL to track where people are going in the tunnels.

Now that we've actually got our Big Brother Bot up and running and following, you must be asking yourself, "How does it do that?!" That's what this section is about: a description of our approach and the cool algorithms we used to do it. Our robot wanders around until it finds something interesting to look at, and its current interest is yellow blobs. We scan through the pixels and store them in a matrix, with yellow ones marked differently than non-yellow ones. With the matrix, we can run closing and opening to fill in small holes caused by odd lighting. Filling these holes in helps make sure the blob is connected, because we are only concerned with the largest yellow blob. We then iterate through the matrix and, for each yellow pixel, check whether there is already a component above it (three pixels: upper left, up, upper right) or to its left (one pixel). If there is, we add this yellow pixel to that component; if it touches multiple components, we reconcile them all into one and continue. At the end, we take the average x and y values of the largest connected component, and that is our centroid (a sketch of this labeling step appears below). Based on the position of the centroid and the number of pixels in that component, the robot adjusts its position. When the centroid moves too high or the pixel count gets too high, it backs off; when the pixel count drops below a threshold, it moves closer. It also always strives to keep the centroid centered in the x direction, so that the blob is centered. We found that the camera's field of view is roughly 45 degrees and based the robot's movement on that. It works quite well. Also, if the robot destroys its target or the blob somehow disappears, it just reverts to its original wandering program until it finds a new target to destroy or follow.
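Below is a minimal Python sketch of the labeling-and-centroid step described above, assuming the yellow/not-yellow matrix has already been built and had its small holes closed. The function name and data layout are our own illustration, not the code that actually ran on the robot.

```python
def largest_yellow_blob(mask):
    """mask: list of rows of 0/1 values, 1 = yellow pixel.
    Returns (centroid_x, centroid_y, pixel_count) of the biggest blob,
    or None if no yellow was found at all."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                  # union-find over provisional labels
    next_label = 1

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Single raster scan: each yellow pixel looks at its already-visited
    # neighbors (left, upper-left, up, upper-right), joins their component,
    # and reconciles neighbors that carry different labels.
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            prior = []
            for dr, dc in ((0, -1), (-1, -1), (-1, 0), (-1, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and labels[rr][cc]:
                    prior.append(labels[rr][cc])
            if prior:
                labels[r][c] = prior[0]
                for other in prior[1:]:
                    union(prior[0], other)
            else:
                parent[next_label] = next_label
                labels[r][c] = next_label
                next_label += 1

    # Tally pixel count and coordinate sums per root label, pick the biggest.
    stats = {}                   # root label -> [count, sum of x, sum of y]
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                s = stats.setdefault(find(labels[r][c]), [0, 0, 0])
                s[0] += 1
                s[1] += c
                s[2] += r
    if not stats:
        return None              # no yellow: the robot goes back to wandering
    count, sx, sy = max(stats.values(), key=lambda s: s[0])
    return sx / count, sy / count, count
```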

Time for another look back. In retrospect, some things we might have done differently are improving our color detection and writing a bit of pseudocode before diving in. If we had our code from Set working with better color recognition, i.e. using all six parameters as a general color-finding tool instead of just heuristically, we might have been able to do more with the Big Brother Bot's intruder detection capabilities. Currently, it only stalks people wearing yellow t-shirts or holding beach balls. Our code also went through a lot of changes as we implemented different parts of it to meet different goals. If we had planned it out a little better from the beginning, our code would probably be more efficient and easier to read. Overall, we think that working with the robot was a great experience. Some of the assignments, like MCL and Set, showed us how simple tasks can be quite challenging and vice versa. The Big Brother Bot has come a long way from being a silly robot with no senses running about in "squares" to being a high-level security robot that follows dynamically changing people (blobs of yellow). There is still room for improvement in the future, and we remain interested in pursuing further work with robotics.

Team members: Kevin Mistry, Jacob Seene, Chris Weisiger

Source code
Gallery
References

2005-5-4: PicobotRules.txt was written, and pictures of it in action can be seen on our Assignment 10 Part 2 page. Also check out the path planning questions, which were answered in .jpeg form. Have fun! We sure did!

2005-4-29: Demo Day!!! We fiddled with parameters and the following behavior works very well now. Due to the recent issues with finding red in the hall (namely, the robot was content to stare at the stripes), we switched to finding yellow! We have also implemented a connected-components feature that only averages the x and y values of the pixels in the largest connected component of yellow, using an algorithm similar to the one seen in class. With all of these great add-ons and features, we are ready to tackle anyone, literally. Actually, it only follows them, shadily. For our demo, we had our robot follow Jacob (and a bright yellow beach ball) down the hall. When the robot got too close or Jacob came toward it, it backed off. It followed when he went off down the hall. It also kept him centered at all times. If Jacob ever got too wily, the robot went into autonomous wandering to find some other yellow object that was over the threshold. If you want to see the demo in action, ask Prof. Dodds. He's got live footage from Demo Day!!
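For the curious, here is a rough Python sketch of the follow/back-off logic from the demo. The thresholds and the drive/turn/wander_step callbacks are placeholders for illustration; they are not the Evolution API calls or the tuned values we actually used.

```python
IMAGE_WIDTH = 320        # assumed camera resolution for this sketch
FOV_DEGREES = 45.0       # the rough horizontal field of view we measured
TOO_CLOSE = 6000         # blob pixel count above which we back off (made up)
TOO_FAR = 1500           # blob pixel count below which we move closer (made up)

def follow_step(blob, drive, turn, wander_step):
    """One control step. blob is (centroid_x, centroid_y, pixel_count) or None."""
    if blob is None:
        wander_step()            # target lost: revert to autonomous wandering
        return
    cx, _, count = blob
    # Turn the centroid's horizontal offset into a turn angle using the FOV,
    # which keeps the blob centered in the x direction.
    offset = (cx - IMAGE_WIDTH / 2) / IMAGE_WIDTH    # roughly -0.5 .. 0.5
    turn(offset * FOV_DEGREES)
    if count > TOO_CLOSE:
        drive(-1)                # blob too big: the target is too close, back off
    elif count < TOO_FAR:
        drive(+1)                # blob too small: the target is far away, approach
    else:
        drive(0)                 # comfortable distance: hold position
```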

2005-4-11: Set works! And we're sure that follow-the-red-blob would work, if only one of the wires in our sonar hadn't come loose. Barring further complications, that should be up and running in no time. In the meantime, you can download our Set player from the source code page, or see a screenshot of it playing in the gallery.

2005-4-9: We can find red blobs, and we use dilation/erosion to fill in holes in the red blob. We convert the picture to a matrix of pixels in order to close the holes. We also average the x and y coordinates of the colored areas to find the centroid, which is happily marked by a green plus sign. For Set, we have done all four attributes and have them working.
Number: We check for any transitions when we make a vertical cut at 1/2 width. If there is one, we check again at 9/14 width. If there is one at 9/14, the number is 3; otherwise it's 1. If there wasn't one at 1/2, then the count is 2. (A sketch of this cut-and-count idea appears after the shape description below.)
Color: We record the color of the first transition that exists (i.e. for 2 shapes, we have to make a cut at 11/14 width to find color), and that seems to work fine. Averaging the color over the whole shape (or over a line through the shape) does not work, because the green shaded shapes have yellow shading, and it all goes downhill from there.
Texture: We take the average intensity/value of the two colors the shape is NOT, from an 11x11 pixel patch in the middle of the shape. If the intensity is high, the patch is mostly background (since the background is white), and thus the texture is clear. If the intensity is in the middle range, the texture is shaded, and if the intensity is low, the texture is solid.
Shape: We make 7 horizontal cuts across the shape. At least 6 cuts always hit the shape, sometimes 7. The only information we really retrieve from each cut is the x value of its first transition pixel; if the cut misses the shape entirely, this value is -1. The minimum x value (the leftmost intersection between a cut and the shape) is called "min". This value is always at least 15 or 20 pixels from the left edge (because we force it to be, since occasionally noise would cause a false transition). Since counting transitions isn't perfect, and sometimes a first transition ends up on the wrong side of the shape or in the middle, a threshold is set (20 pixels) within which all the "first transitions" have to occur in order to be counted. The differences between min and the valid transitions are summed (along with the number of valid transitions), and the average distance between min and the transitions is calculated. If this average is less than 2.5, then the shape is an oval. Otherwise, we run a number of different tests (mostly consisting of things like: does line 5's transition have a greater x value than line 4's transition?). If it passes these tests, it's a triangle; otherwise it's a squiggle. This stuff was VERY finicky, and there are a lot of arbitrary thresholds in place to help filter noise.
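As promised above, here is a small Python sketch of the cut-and-count idea behind the number attribute. The image layout and the is_card_pixel predicate (anything that isn't white background) are assumptions for illustration, not our actual Set code.

```python
def count_transitions(image, x, is_card_pixel):
    """Count background-to-shape transitions along the vertical cut at column x.
    image is assumed to be a list of rows of pixels."""
    transitions = 0
    inside = False
    for row in image:
        hit = is_card_pixel(row[x])
        if hit and not inside:
            transitions += 1
        inside = hit
    return transitions

def card_number(image, is_card_pixel):
    """1, 2, or 3 shapes, using the cuts at 1/2 and 9/14 of the card width."""
    width = len(image[0])
    if count_transitions(image, width // 2, is_card_pixel) == 0:
        return 2      # nothing at the center column: the two shapes straddle it
    if count_transitions(image, (9 * width) // 14, is_card_pixel) > 0:
        return 3      # center and right-of-center both occupied
    return 1          # only the center occupied
```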

2005-4-8: Set works a lot better now. We can also color in red areas for the blob detection. That's not too bad. We still need more accurate texture- and color-finding algorithms, and we still need to figure out shape in Set.

2005-4-6: Set is almost sort of working. We can get number. Color and texture sort of work most of the time. We are having some issues with counting transitions in a vertical cut: we don't want to count random miscolored pixels as a transition, but we also can't skip transitions that are only one pixel wide on the shaded cards. We are also working on finding a red blob for the Big Brother Bot.

2005-4-1: We've been working on recognizing the four aspects of set cards: number, color, texture, and shape. So far, we are trying to get color. We figure number and texture shouldn't be too hard, but we need a way to find shape.

2005-3-25: MCL works! We've made some significant changes to everything, so I'll try to remember what all we changed.
1) We no longer use the sonar for bump sensing. This greatly speeds up the rate at which we can poll the IR sensors. Net effect: the robot can move much faster without running into walls (we've safely doubled its speed).
2) As a consequence of the above, we no longer do MapTool updates as frequently. We now only try to run MCL when the IR sensors have detected something. The logic behind this is that the sonar is most useful when it can see things close up; its accuracy is poor at long range (and especially when the robot is moving). So now the robot wanders until it finds a wall, does its three sonar pings, sends the data to MapTool, and keeps wandering.
3) Since MCL effectively no longer is an on-the-fly process (we run it only when the robot is stopped, and wait for it to finish before we start moving again), we can greatly increase the number of particles we use. Not really a change per se but it's nice.
4) We fixed MCL. I don't remember exactly what was wrong (we made a lot of changes to MapTool), but it works now.
5) We fixed the sonar display in MapTool. As it turns out, our particle resampler was making exact copies of the old particles except for one minor detail: the sonar distance. Now that that gets copied over, we can accurately display the sonar distances for all particles.
6) On the theory that the robot's odometry is generally pretty good, resampling now always puts a few particles where the robot's odometry says the robot is (see the sketch after this list). This should help us if the particles ever get completely lost. Note that this only happens if we aren't...
7) ...trying to solve the kidnapping problem. This works in the lab but not yet in the field (i.e. it works with simulated robots but not with real ones). We suspect that our resampling may not be entirely fair; we may need to fiddle with some parameters.
8) We replaced the batteries in the sonar. What? They don't all have to be really important.
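A tiny sketch of the odometry-injection tweak from item 6 above. The particle representation, counts, and noise values are illustrative guesses, not our MapTool code.

```python
import random

def inject_odometry_particles(particles, odom_pose, n_inject=5,
                              pos_noise=10.0, ang_noise=5.0):
    """After the normal resampling pass, overwrite a few particles with copies
    of the odometry pose (plus a little noise) so the filter can recover if
    every surviving particle has drifted away from the true pose."""
    x, y, theta = odom_pose
    for i in range(min(n_inject, len(particles))):
        particles[i] = (x + random.gauss(0, pos_noise),
                        y + random.gauss(0, pos_noise),
                        theta + random.gauss(0, ang_noise))
    return particles
```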

2005-3-23: We basically did some debugging on the robot. We discovered that MCL isn't working because the robot's sonar readings are, for whatever reason, inaccurate. The left reading typically goes through walls, the right reading is usually pretty good, and the center reading is always too short. This causes a mismatch between the robot's readings and the particles' virtual sonar readings, making particles die off improperly. We're going to try fixing the sonar a bit (put in some threshold values for "infinite" distances, try replacing the batteries, possibly recalibrate it).

2005-3-11: UPDATE: With the help of Prof. Dodds, we found out what was causing all sorts of problems. In the draw function, there is a line of code that was given to us, had worked for us in the past, works in other places, and works for everyone else. It was a line of OpenGL that created smooth circles to represent particles. After doing nothing, the program decided to break some more. After changing nothing, all new builds of MapTool stopped accepting connections from the wandering client. If MapTool is built using the debug configuration instead of release, it suddenly works. These problems seem to be specific to this computer, so we might try switching to the desktop to see if it works there. In any case, we'll fix these problems after spring break.

2005-3-11: The newest build of MapTool still doesn't work, but the problem has been narrowed down to the particle drawing, I think. It seems to work okay when moving the bot around, but when 'p' is hit to toggle the particles, it goes sloooooooowly. Hopefully the culprit will be found and kil- I mean, commented out.

2005-3-10: Some additional work was done on MCL, trying to get the IRs to act as localization sensors in addition to their bump-sensing duties. At first this apparently made MCL work worse. On a second trial, it seemed that the IRs might not be as reliable as we thought, but it turned out that the artificial robot readings were set at 25 instead of actually finding the distances. This made testing at the workstation annoying. For now we commented it out again and ran MapTool with keyboard input. All seemed well. Sometime around 1:00 am, it was decided that it would be a good idea to actually test the robot in the halls with MapTool. The build in the folder was the one from Wednesday; it seemed to work, except the particles headed off in the wrong direction. When this version was tested with keyboard input, however, it worked just fine. Hmmm...maybe we should use the new version of MapTool. After building the newest one and trying it out with keyboard input, all hell froze over. Actually, just MapTool. For some reason, any new build of MapTool starts using >90% of the CPU. Nothing noticeable was changed from the afternoon version to this one, so this shouldn't be happening. Murphy's New Law: whatever can or SHOULDN'T go wrong, will go wrong.

2005-3-7: MCL update! We have successfully integrated the new MapTool with the wandering/remote control client. Our robot now uses sonar and IR as bump sensors; it also uses sonar for MCL. The new MapTool needed some work, so we fixed it up a bit. We began by figuring out how to move the particles. After agonizing over the trig for hours, we finally realized that theta was in degrees, but cos and sin expected radians, and there was PI all over the place. We fixed things up in radians, created a noisyMove function for the particles, and voila! Next, we fixed up the resampling. Our fair-enough resampler uses a step of (total probability / number of particles). It starts between 0 and the step size and moves up incrementally, popping off old particles and creating new ones based on how many steps land in each particle's slice of the probability. This done, we cranked up the noise and changed the probability updating to use a Gaussian curve. So far, so good. Next we'll have the robot use vision in MCL.
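Here is a rough Python sketch of the three pieces described above: a noisyMove-style motion update (remembering that theta lives in degrees while sin/cos want radians), a Gaussian weight from the sonar reading, and the step-based resampler. Function names and noise values are illustrative, not lifted from our MapTool code.

```python
import math
import random

def noisy_move(particle, distance, d_theta_deg, pos_noise=2.0, ang_noise=3.0):
    """Advance one (x, y, theta-in-degrees) particle with a bit of noise."""
    x, y, theta_deg = particle
    theta_deg += d_theta_deg + random.gauss(0, ang_noise)
    rad = math.radians(theta_deg)            # degrees -> radians for the trig
    d = distance + random.gauss(0, pos_noise)
    return (x + d * math.cos(rad), y + d * math.sin(rad), theta_deg)

def gaussian_weight(expected_range, measured_range, sigma=30.0):
    """Probability update: compare a particle's virtual sonar reading with
    the real one under a Gaussian error model."""
    err = expected_range - measured_range
    return math.exp(-(err * err) / (2.0 * sigma * sigma))

def step_resample(particles, weights):
    """'Fair enough' resampling: step size = total weight / N, start at a
    random offset within the first step, and copy each particle as many
    times as steps land inside its slice of the weight. Assumes the
    weights are non-negative and not all zero."""
    n = len(particles)
    step = sum(weights) / n
    pointer = random.uniform(0, step)
    resampled = []
    cumulative = weights[0]
    i = 0
    for _ in range(n):
        while cumulative < pointer:
            i += 1
            cumulative += weights[i]
        resampled.append(particles[i])
        pointer += step
    return resampled
```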

2005-2-21: On Wednesday, we hooked up the sonar and got it twitching. We also started work on a wandering program as part of the remote control. On Friday, we took apart the top of the Evo and built a setup to hold the camera, sonar, and sonar extras. We tried to make a reasonably sturdy setup that lets the camera and sonar sit above the top of an open laptop. We also kept the sonar alone on the top of the robot to give it an unrestricted sensing region. We tested the sonar to make sure we could control its movement and receive input. For the wandering algorithm, we at first gave it a deterministic way of wandering. After parsing the IR input, we used the three IR sensors as "bump" sensors so we wouldn't actually bump into anything. We then changed the wandering loop to go straight 3/4 of the time, turn 30 degrees left 1/8 of the time, and turn right 1/8 of the time. Also, when it senses something in front of it, it randomly turns either left or right. On Saturday, we expanded the functionality of the remote control. It now lets us control the sonar and get a reading from it, but the sonar has not been calibrated yet. We noticed that the IR sensors sometimes give a false reading, but do not do so continuously. We handled this by stopping upon sensing something and only responding if the reading persists. We also incorporated the Libra complex map and tracking system, taking info from the robot's odometry. The first attempt at autonomous wandering ended poorly when the robot went backwards when it should have been going forwards. After this disappointment, we switched laptops and it worked quite well. The first run of the robot in the halls led to the conclusion that we had a foolish robot, because on the map the robot kept heading north and went through a wall. Clearly it did not actually do this. Upon careful analysis, we realized that the base drive test had to be reopened to reset the odometry; otherwise the mapping system was based off the previous run. When we ran the wandering loop again, we had a slowly-but-surely wandering robot. It continued on its merry way and wisely navigated the halls until it went into a doorway that was, oddly enough, not on the map. Prof Dodds seems to have missed at least one door. Next on our list is Monte Carlo Localization. Although the odometry looks fairly accurate, there is always room for improvement. We will also incorporate the sonar, after calibrating it, to look for far-off obstacles and open hallways and doors. Hurray!
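A small Python sketch of that wandering loop, with the biased straight/left/right choice and the "only believe a persistent IR hit" filter. The drive_forward, turn_degrees, and read_ir callbacks, the threshold, and the 90-degree avoidance turn are placeholders for illustration (the write-up above doesn't say how far the robot turns when something is in front of it).

```python
import random

def ir_blocked(read_ir, threshold):
    """read_ir() is assumed to return the three IR distances. A hit only
    counts if it persists across two consecutive readings, since the
    sensors occasionally report a one-off false positive."""
    if min(read_ir()) >= threshold:
        return False
    return min(read_ir()) < threshold

def wander_step(drive_forward, turn_degrees, read_ir, threshold=0.3):
    """One pass through the wandering loop described above."""
    if ir_blocked(read_ir, threshold):
        turn_degrees(random.choice((-90, 90)))   # obstacle ahead: turn away
        return
    roll = random.random()
    if roll < 0.75:
        drive_forward()                          # straight 3/4 of the time
    elif roll < 0.875:
        turn_degrees(30)                         # 30 degrees left 1/8 of the time
    else:
        turn_degrees(-30)                        # right 1/8 of the time
```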

2005-2-16: This day was spent working on the "remote control" client for the robot, and on making the robot recognize red moldings. The source code for all this still needs to be updated, but you can see a picture of the red recognition in the gallery. The remote control client is up on the source code page.

2005-2-11: Last Wednesday we put the robot through its paces running in a square. Or some approximation thereof. It turns out that our robot's idea of where it is and our idea of where it is tend to differ, sometimes rather drastically so. We'll have an Excel spreadsheet of the results up soon. We also put together a basic servo. Then, on Friday we worked on the homework assignment, and broke the servo. Such is life. Oh, we also added a small gallery containing pictures and movies that we've taken during the development process. It's a gallery of DOOM, though, so watch out.

2005-2-05: Our team installed the requisite software at our workstation, and checked that the wiring was correct on our Evolution robot. After trying to run a simple test to move the wheels, and failing, we called on the aid of Professor Dodds. He has assured us that he will contact us, so that we may progress further, as soon as he has determined what was going wrong (and fixed it).
The website was updated with new information (notably, the Approach and References pages, and an expanded introduction).

2005-1-25: website created, for sufficiently small values of "website".