1 - Constructing "Click"
In this lab I will be modifying and extending my extinguisher robot
into a new robot dubbed "Click." Click's purpose will be navigation and
localization in an office-like environment. He will be equipped with
two sonar range-finders atop a servo motor which will do the bulk of
the work, as well as bump sensors and possibly light sensors or IR
range-finders if necessary. My goal for this lab will be to get Click
working, which involves adding the new elements to the robot, getting
the sonars to give meaningful input, learning how to use the servo
motor successfully, and assembling and using the compass. I will
probably create a set of simple test goals for Click to perform in
order to demonstrate progress in these areas.
The general problem of the extinguisher project, while not excessively
difficult, was an important one, as search and navigation are central
problems of robotics. In particular the problem of finding and
extinguishing a flame has been solved many times before by many people.
But it was ideal for getting my robot built and tested, and did offer
several challenges. (Not to mention that my robot still doesn't handle
all situations perfectly.) The main problems to be solved are maze
searching, light following, and figuring out how to blow out the
candle, as well as how to know when you've blown it out. The problem is
limited in several ways: the environment is simple and enclosed, the
light and dark values are known ahead of time, the height of the candle
is known, and there is only one candle to find and blow out. All of
these make it easier to solve the
problem with the given hardware.
The localization project extends and focuses upon the searching aspect
of the extinguisher project. The central problem I will be attacking is
how, given a representation of the environment (a map) and real-time
sensor input, can the robot determine where it is. Once the robot is
constructed and working, the next lab will be implementing Markov
localization to achieve this goal.
I intend to implement a subsumption-based architecture (as described by
Rodney Brooks in his paper "Achieving Artificial Intelligence Through
Building Robots" at M.I.T. in 1986) using a HandyBoard-controlled lego
robot. The robot will at least have bump and light sensors which it can
use to traverse the maze. As I am using this project as a jumping-off
point for future projects, I may also build sonar and compass sensors
into the robot, and make use of them if I have time. While I have not
yet worked out the details, I envision the following behaviors
implemented as a subsumption architecture:
- Avoid obstacles (respond to bump sensors)
- Wander (search in darkness)
- Go towards light (when light is detected)
- Enable fan (when light reaches a certain threshold, until the
light decreases significantly)
In practice, my implementation turned out to be very similar to this.
My architecture had the four following levels, in order of highest
priority to lowest priority:
- Extinguish: Kill all other processes, because we've reached
the goal and don't want to be distracted. Slow down the motors, and run the
fan until the light decreases to the predefined darkness threshold. At
that point, turn off the fan and motors. If light is encountered again,
repeat the process.
- Escape: Respond to bump sensors by backing up, turning a
random amount, and going forward again. If "stuck" (the same command
repeated too many times), back up for longer and spin to a new heading.
- Follow: If light is above threshold for darkness, follow
the light by setting motors to full forward and turning towards light.
- Cruise: If light is below threshold for darkness, simply go
forward. If light is encountered or a bump sensor is triggered, one of
the above modes will take over automatically.
Ideally, I believe the robot should spend much of its time in
cruise mode, which is basically navigating the maze without any light
information. I can simulate this by setting the darkness threshold
higher, allowing the robot to think it is in total darkness when
there is some light. But the light provides valuable information and
naturally tends to lead the robot to correct paths, so the robot does
well in follow mode. Light coming over the top of an obstacle, however,
can be dangerous and cause the robot to get stuck against the obstacle,
though the random escape routines should be enough to get it out
eventually. (The robot's candle light and darkness thresholds can be
set interactively when it is started up, by pressing either of the
touch sensors rather than the start button and then following
instructions.) Any false alarms as to the location of the candle are
quite dangerous, as the robot kills the follow, cruise, and escape
routines in order to focus on putting out the fire. This was necessary
to make sure the fire is put out. If the robot mistakenly stops too
soon, and the flame comes back, it just starts up the fan again and
keeps working on it.
Just getting the robot built and functioning took a little bit of
engineering. I followed a tutorial step by step for the basis of the
robot, but then added my own fan tower. My solution for attaching the
fan (twist ties) is probably not the best choice, but seems to work
despite the awful racket the vibrating fan makes. It took a lot of
testing to find light sensors which reported similar values. I had done
some of the necessary coding for a previous light-finding assignment in
a different course, but had to rewrite much of it as that robot was not
successful at several of its tasks. I rewrote the escape routines
entirely. In addition, the extinguisher portion was all new to me.
As I mentioned above, the user has the option of providing the robot
with new light and dark threshold values interactively. I made this
design choice in order to allow for a higher degree of precision and
adaptability to different environments. The dark threshold determines
how much light the robot must see before it switches from cruise mode
to light following mode. The light threshold determines how much light
the robot must see to be convinced that it found the candle. It
surmises that it blew the candle out when light drops back to the dark
threshold. I could have opted for a relative approach, e.g. turn off
the fan if the light drops by x amount, but that seemed like it would
require a lot more tweaking of the code to get it to work properly. I
wanted to be able to take the robot to a new environment, set it in the
dark and get the dark threshold, set it next to the candle to get the
light threshold, and then have it work for that environment.
Original Localization 1
I have a pretty good base to work from in my
extinguisher robot, Trill. I added a pretty solid tower to hold the fan
and light sensors, and so it shouldn't be too difficult to mount the
sonar unit on top. The unit, consisting of two sonars mounted on a
servo motor, was constructed by a previous lab group. I need to figure
out how to get it working accurately and reliably, which may or may not
be a difficult task. I'll start by trying to get good readings from the
sonars, and then figure out the servo motor. I'll also need to add
sensors to the wheels, to get some sort of rough estimate of odometry.
By the end of the lab the robot should be able to perform some basic
functions which involve all of these sensors, such as orienting itself
in the midpoint between two walls.
Final Localization 1
A couple of changes, mostly simplifications, to the
design. After looking more into what the other lab had done with the
sonar tower, particularly the impressive work they did to get the
handyboard to work with two sonars, I decided that a simpler, more
solid approach is ideal for my situation. The servo can easily rotate
180 degrees, which means that just one of the two sonars is necessary
to look in three of the four necessary directions from the robot
without physically turning the robot. My new approach is to use an IR
rangefinder for the final direction. This IR rangefinder will look
directly to the right of the robot, and the robot will use it to
closely hug the wall on the right side. The sonar can then look left of
the robot, forward, and backwards as needed. A quick dropoff due to a
doorway will hopefully be differentiable by the IR rangefinder, which
can detect up to 80 cm away. It will also be easy to keep the robot
going forward, since presumably the wall is straight and it is trying
to stay a set distance away from that wall (unless a sudden dropoff
above some threshold occurs). Rotation sensors may not even be necessary.
I've been able to get good results from the sonar. It measures
distances pretty much as accurately as my tape measure, and is fairly
straightforward to use. The servo is also working well. It is now
capable of switching to predefined forward, sideways, and backwards
positions quickly and reliably. Most of the time the sonar will be
checking to the left for doors and trying to maintain a constant
distance from the left wall, just as the IR rangefinder does with the right.
The bump sensors on the front of the robot are also of course working
well. The IR rangefinder is more problematic. I'm using a Sharp GP2D02.
Frankly, I'm having trouble getting it to work at all, leading to
nagging suspicions that it might in fact be dead. More likely, though,
it's a problem of hooking it up correctly. I was following this guide
and using the provided code, but had little luck. It refers to port
PA5, which is also T03. This port seems to be trapped under the LCD,
requiring a shunt to get to. (This
guide uses T03 to hook up a sonar.) I tried to get a hold of
another one, but the others I borrowed were of a different type (Sharp GP2D12).
These analog IR rangefinders are less accurate, and require a physical
modification to the handyboard to work at all. I'm going to
continue looking into the issue and try modifying the code to work with
a different timer than T03 while at the same time checking to see if
any other IR rangefinders are available. But without it, I can simply
switch the sonar to check forward, left, and right, since behind
probably doesn't provide very much information, and ideally would
provide none after the first move.
Original Localization 2
Now that Click, albeit a simplified version due to
hardware problems, is built, it's time to implement Markov
localization. As per suggestion, I will divide this up into the
following 5 steps:
1) Using sonar to follow walls.
2) Creating an environment for testing.
3) Mapping the environment by hand.
4) Performing Markov localization by updating probabilities on the map.
5) Determining and printing the highest-probability location, and a
list of all probabilities on demand.
Note: I've decided to put my more detailed reports for Localization 2
in the Progress section below.
2/4/2004 - Progress has been severely limited this week
as I have been finishing up course work from the previous semester. I
have acquired all of the parts and pieces from multiple sources, set up
the Interactive C environment on my computer, and intend to have the
basic robot constructed by Friday.
2/13/2004 - Robot is built
and up and running, with basic light-following, maze-searching
behavior. Still doesn't have fan or code for controlling the fan.
2/19/2004 - It's alive!
I'm content with the performance of the robot. Fan and everything seem
to be working well. Updating final report today. Pictures and movies
will be available here by Friday evening.
2/25/04 - Updated site
to begin new project. I've acquired the necessary parts, and begun
construction. I also updated the old site with pictures and a movie.
2/25/04 - A little
research into the reliability and function of the compass, including
correspondence with its creator, makes me doubt its usefulness for my project. I think
I'd better focus on the sonars and wheel rotation info to get an idea
of orientation, unfortunately.
3/12/04 - Plans have
changed a bit after looking into the sonar setup in more detail, see
the Final Localization 1 Approach section for more information.
4/15/04 - Updated site
for Localization 2. My last lab ironically left Click somewhat
deconstructed, so I spent a while getting the tower set up again and
everything onboard and connected. I've been working on wall-following,
an important and somewhat time-consuming first step for the actual
implementation. I'm using just the sonar to follow the right wall, so
most of the time it will be pointed in that direction. It's not quite
working, but should only require maybe a half hour more of tweaking.
I'm working on adapting my light-following code from the first lab to
the task, as that did a pretty good job of making continuous, smooth
directional updates while always moving forward. It should work with
the sonar input. I'm still debating the use of odometry. I'm thinking
maybe I can have one of two things happen to trigger the robot thinking
it's entered a new area of the map. The first is if the right wall
suddenly falls away, implying an open doorway or hall, or if its bump
sensor is triggered. The second is some sort of regular update, where
the robot stops and looks forward, left, and right to see if anything's
changed and update probabilities. This could be triggered either by
odometry or by a time-based mechanism. I'm actually leaning a bit
towards the latter. If I can keep the robot moving at relatively the
same speed, which experience has dictated it's fairly good at doing,
this should be reliable enough. The timer gets reset whenever the robot
stops for either reason. I'm going to give this approach a try first,
as it should be quicker and easier to implement. I'll make the interval
user-specifiable via the menu wheel.
4/16/04 - I've got
wall following working fairly well. It's based on a couple of
parameters which the user can set interactively when the robot first
starts up. One is the distance it should try and maintain from the
wall. The other is the threshold, or the amount of change it needs to
see before it reacts to a variation in the input from the sonar. Right
now it's probably too sensitive (see the video of it following the wall
a little too well), but by tweaking the values I should be able to get
it to work pretty smoothly. I've got enough now that I should move on,
anyway, and tweak more as necessary once I get the maze built.
Video of Click following walls (5.55mb)
So far, it seems to be doing well. I am taking it into the lab for
another round of tests and some picture taking this morning. With the
thresholds properly set, it really does a nice job transitioning
between modes and blowing out the candle. I enjoy how the robot slows
to a crawl and employs its fan when it gets near the light. It is also
rewarding to see it start back up and try again if it fails to
completely blow out the candle on the first try. It could use a better
light-following algorithm, more specifically one that doesn't put as
much emphasis on the light, as it can easily be trapped by a malicious
maze designer if the light comes over the top of a v-shaped obstacle.
Its reflexes cause it to turn in order to try and get out, but then
it gets attracted by the light and heads back toward it. As I have
mentioned above, I feel like this was a good project in preparation for
a more challenging navigation problem. Most likely I will attempt
navigation with a map over the next three weeks, and then mapping for
the remainder of the course.
I'm pleased with the progress I've been able to make in
preparation for the final project. I spent a good deal of time
researching the sonars, the IR rangefinders, and the handyboard in
hopes of making them all work together, and learned quite a bit about
each along the way. I'm also pleased with the changes I made based on
this knowledge. I think my simpler, more straightforward design will
pay off in the long run. I do wish I could get the IR sensor to work
properly, as I'm interested in working with the wall-hugging robot. I
think it will be a lot faster and more solid, as the sonar-only version
will have to go for a bit, check left and right, then go for a bit
more, and will probably be more prone to bumping into things and
generally less graceful. Something about the asymmetry also appeals to me.