Final Report 2


History of the Project

Our robot and strategy for Botball changed quite a bit as the semester progressed. We originally planned a two-bot strategy: the main bot, built around the Handy Board, would pick up ping-pong balls and deposit them in the nest, while a secondary RCX bot went after the foam ball. We designed a vacuum system using Lego treads to scoop up the balls and put them in a hopper on the bot; the hopper would then be raised to drop the balls into the nest or a green basket. Meanwhile, the secondary RCX bot would manipulate the central nest, pulling it into our end-zone to earn us the points for the foam ball.

However, we ran into problems with this strategy fairly quickly. The nest is heavy, and the motors on the RCX bot were neither powerful nor fast enough to move it. We also had trouble getting the main bot to knock over the toilet paper rolls in a way that left the balls easily accessible without pulling in the TP rolls as well. Finally, we were worried about size: the Handy Board robot seemed to take up most of the allowed space, leaving very little for the RCX bot.

We decided to redesign the bot around a new strategy. Since the foam ball was worth a lot of points and was readily identifiable by color, we concentrated on it. Putting the foam ball in a green basket on our side would score well, especially if we could also move the basket to keep the other team from taking it. As a secondary goal, the bot would pick up TP rolls, dump the ping-pong balls into an on-board hopper, then take the opponent's green basket and put the balls in that.

To support this, we scrapped the idea of a second robot and decided to use both boards on a single bot. We designed a claw capable of picking up the foam ball, manipulating the green basket (pushing and pulling), and picking up TP rolls. A hopper on the back of the bot holds the ping-pong balls, with a gate to release them into the green basket. The camera would be used to detect both the orange foam ball and the green baskets; because of this dual role, we designed a swivel-arm that could rotate the camera to point either in front of or behind the bot. Sonar would provide range to targets, and various other sensors would help the bot localize itself.

The current bot accomplishes the primary goal of picking up the orange foam ball and depositing it in the green basket. The camera and sonar find the targets, and the claw picks up and deposits the foam ball. Support for picking up TP rolls was never added to the current claw design, so although the hopper can hold and release ping-pong balls, the bot has no way to collect them. Still, we feel that putting the orange foam ball into the green basket is a big accomplishment, and it should help us be competitive at the Botball competition.


Design Decisions

Several design modifications occurred during development of the current bot. Because of the difficulty of acquiring and manipulating the foam ball, and the time constraints of the project, the last three weeks were spent developing a consistent algorithm to acquire the foam ball from the nest and deposit it into the green basket. As a result, the camera swivel was no longer necessary. To give the CMUcam a consistent angle and placement, the swivel was removed and the camera was mounted statically on the side of the bot, angled inwards. This improved the robot's consistency immensely.


Figure: A closeup of the current camera and sonar mountings.

Besides the sonar, CMUcam, and bump sensors, the original (second) bot design also called for odometry and an IR-tophat sensor, both intended to help localize the bot. In practice, odometry proved too inconsistent to be useful: although the calculations could in principle provide accurate orientation and position, the raw readings varied drastically between runs. The IR-tophat sensor, on the other hand, could reliably detect the electrical-tape lines; its readings gave us a rough idea of where the bot was in the arena and were used for timing the robot's initial rush to the nest.
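Since this tape counter ended up driving the timing of our opening sprint, here is a minimal sketch of the idea, not our actual thread: the analog port, the threshold, and the sensor polarity (darker surface reading higher) are all placeholder assumptions.

    /* Hypothetical sketch of the tape-counting thread.  Analog port 2,
       the threshold of 200, and the assumption that dark tape reads
       higher are placeholders; the real values must be calibrated on
       the arena surface. */
    int tape_crossings = 0;

    void watch_tape()
    {
        int on_tape = 0;

        while (1) {
            if (analog(2) > 200) {      /* over the dark tape */
                if (!on_tape) {
                    tape_crossings++;   /* count each crossing once */
                    on_tape = 1;
                }
            } else {
                on_tape = 0;            /* back over the white board */
            }
            msleep(20L);                /* debounce and yield the CPU */
        }
    }

The thread would be spawned from the main program with start_process(watch_tape()), and the movement code simply polls tape_crossings.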


Code

Our code requires the following files from the CMUcam library: cmucamlib.ic, cmucam3.ic, and cmcsci2b.icb (the last is a binary assembled from cmcsci2b.asm). The robot's current program consists of the following script:
  1. Initialization
    1. Initialize sensors
    2. Spawn the bump sensor thread. For the front corner sensors, it reacts immediately when the bot is bumped: it backs away from the obstacle and turns to face it head-on (the obstacle is assumed to be our goal), then resumes the regular program. For the back sensors, it halts the current movement and continues with the program. The thread acts like an interrupt routine, calling hog_processor so that no other process runs until it is done.
    3. Spawn the IR sensor thread, which keeps track of how many times we have crossed tape.
  2. Implemented in dead_reckoning_sprint
    1. Using timed dead-reckoning, arc to the left
    2. Start heading straight forward
    3. Stop heading forward 0.25 seconds after crossing the tape line for the second time
  3. Implemented in move_to_nest
    1. Spin around to the right until we see something orange.
    2. Move to about 20 cm away from the obstacle in front of us using sonar (the function do_distance, which converges on the target distance at a speed proportional to the remaining error; see the sketch after this list)
    3. Turn left a little, then spin right until we see orange again
    4. Move to about 5 cm away from the object in front of us using do_distance
    5. Use camera_sweep to record the direction with the best orange measurement, then spin back until we duplicate that measurement (a sketch appears at the end of this section)
    6. Turn right a little to compensate for bias; do_distance to 5 cm
    7. Make a grab for the ball!
    8. Scan to see whether the ball is still visible, which would mean the grab failed. If we don't see it, proceed; if we do, go back to step 5 of this routine.
  4. Implemented in move_to_basket
    1. Spin around to the right until we see something green.
    2. Move to about 50 cm away from the obstacle in front of us using sonar.
    3. Turn left a little, then spin right until we see green again
    4. Move to about 8 cm away from the object in front of us using do_distance (this value is wrong in the code and should be much larger)
    5. Use camera_sweep_green to record the direction with the best green measurement, then spin back until we duplicate that measurement
    6. Turn right a little to compensate for bias; do_distance to 20 cm
    7. Drop the ball!
    8. Scan to see whether orange is still visible, which would mean the drop failed (the code wrongly checks for green here). If we don't see it, we're done! If we do, barf with an error (recovery code not written yet).
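Because the script above leans on do_distance at nearly every step, here is a minimal sketch of the idea rather than our actual routine. read_sonar_cm() is a stand-in for our sonar driver, and the motor ports, gain, and stopping tolerance are illustrative assumptions.

    /* Hypothetical sketch of do_distance: converge on a target sonar
       range with speed proportional to the remaining error.
       read_sonar_cm() is a placeholder for the real sonar driver;
       ports 0 and 2 are assumed to be the drive motors. */
    float read_sonar_cm()
    {
        return 20.0;    /* placeholder: substitute the real sonar call */
    }

    void do_distance(float target_cm)
    {
        float err = read_sonar_cm() - target_cm;

        while (err > 1.0 || err < -1.0) {    /* stop within ~1 cm */
            int speed = (int)(2.0 * err);    /* proportional control */
            if (speed > 100) speed = 100;    /* clamp to motor range */
            if (speed < -100) speed = -100;
            motor(0, speed);    /* too far: drive forward;  */
            motor(2, speed);    /* too close: back up       */
            msleep(50L);
            err = read_sonar_cm() - target_cm;
        }
        ao();                   /* all motors off */
    }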
The first problem noted above, the 8 cm in move_to_basket that should be much larger, probably accounts for that routine working so poorly. We adjusted the other distances upward, since the small round basket gives deceptively long sonar readings compared to the large square nest, but apparently forgot that one. In a way this is encouraging: it points to a concrete way to improve our ball-depositing, which clearly needs the help.
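For completeness, here is the camera_sweep idea in the same sketch style. orange_confidence() is a stand-in for whatever cmucamlib.ic call reports how strongly the camera currently sees the tracked color, and the sketch re-aims by counting steps, whereas our real code spins back until it re-sees the recorded measurement.

    /* Hypothetical sketch of camera_sweep: pivot through a number of
       small steps, sample the camera at each one, and remember which
       heading saw the most orange.  orange_confidence() is a
       placeholder for the real CMUcam library call. */
    int orange_confidence()
    {
        return 0;       /* placeholder: substitute the real camera call */
    }

    int camera_sweep(int steps)
    {
        int i;
        int best = -1;
        int best_step = 0;

        for (i = 0; i < steps; i++) {
            int c = orange_confidence();
            if (c > best) {         /* remember the best heading so far */
                best = c;
                best_step = i;
            }
            motor(0, 30);           /* pivot right one small step */
            motor(2, -30);
            msleep(100L);
            ao();
        }
        return best_step;           /* caller spins back to this step */
    }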

Performance Analysis

We bought a piece of semi-reflective white dry-erase board from Lowe's to finish our Botball arena. The result is a very accurate reconstruction of the official arena.


Figure: The testing grounds for our robot

Here the green baskets are placed in one of the four possible configurations.

To analyze performance, we had the robot start in the starting area (the small taped rectangle) and execute the code listed above.

The robot was always able to find the orange blob and the green basket; however, the claw would not reliably lower and open, nor would it consistently drop the ball into the green basket. One in three trials resulted in a successful acquisition of the orange ball, and one in five in a successful drop into the green basket.

These problems appear to be systematic errors: more tuning needs to be done on the parameters governing the distance from the robot to the green basket and the speed at which the claw is lowered.

The difficulty finding the green basket can be ascribed to the sonar giving misleading readings whenever its axis does not pass through the center of the basket: the pulse bounces off the curved side, so the reading comes back longer than the true distance. These errors should be accounted for.
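One cheap way to account for them, sketched here with the same read_sonar_cm() stand-in used in the Code section, is to take the minimum of several pings: since off-axis bounces only ever lengthen the reading, the shortest ping is the best estimate of the true range.

    /* Hypothetical sketch: the inflated off-axis readings are always
       too long, never too short, so the minimum of several pings is
       the most trustworthy range to a round basket. */
    float min_sonar_cm(int pings)
    {
        int i;
        float best = 1000.0;        /* larger than any real reading */

        for (i = 0; i < pings; i++) {
            float r = read_sonar_cm();
            if (r < best)
                best = r;
            msleep(30L);            /* let each echo die down */
        }
        return best;
    }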


Future Work

There are still a considerable number of improvements to be made before the robot is ready for competition. The most immediate work is on the code currently running on the robot: it needs to be made robust enough to recover when it misses putting the ball in the basket, and the depositing routine still requires more careful calibration so the bot does not accidentally throw the ball out of the arena. We also need to work out exactly how the robot should respond to the bump sensors during each of its tasks; one thing that will definitely need to change is the distance the robot currently backs up when it hits a front bump sensor. Eventually, code will also need to be added to start the robot automatically when a light is detected and to stop all of the robot's subroutines after 90 seconds; a sketch of that logic follows.
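Here is a minimal sketch of that start/stop logic. It assumes the light sensor ends up on analog port 3, that brighter light lowers the reading, and a placeholder threshold of 100; the routine names in the comment are our existing ones.

    /* Hypothetical sketch of competition start/stop: wait for the
       starting light, run the strategy as a child process, and kill
       it when the 90 seconds are up.  Port 3 and the threshold of
       100 are placeholder values. */
    void run_strategy()
    {
        /* dead_reckoning_sprint(); move_to_nest(); move_to_basket(); */
    }

    void main()
    {
        int pid;
        float start_time;

        while (analog(3) > 100)     /* wait: bright light reads low */
            msleep(10L);

        start_time = seconds();
        pid = start_process(run_strategy());

        while (seconds() - start_time < 90.0)
            msleep(100L);

        kill_process(pid);          /* time is up: stop the strategy */
        ao();                       /* and stop every motor */
    }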

Because the current static mounting of the CMUcam worked out well for the foam ball, we are reconsidering a front design for the robot that would provide enough clearance for the camera to be centered below the arm. This would greatly simplify our entire search routine and make the robot align itself on the ball and/or basket more consistently. Another structural change is the addition of a light sensor to detect the starting-signal light bulb. Finally, the rear bump sensors have proven a bit fickle over our many trials, so they need to be redesigned into a sturdier and more reliable configuration, like the front bump sensors.

If all of this is completed sufficiently before the conference date, we will also reconsider the strategy of going after toilet paper rolls and ping-pong balls once the ball is in the basket, and of putting the LEGO RCX back into play. Otherwise the RCX is simply a weight that balances the robot, and we may remove it completely in favor of a smaller and thus more agile bot.


Figure: Perhaps without the RCX and hopper, the robot would be more agile?


Video Clips of HMCHammer in Action

