CS 154 Homework #5
Due Sunday, March 28, 2010
Readings for HW 5:
- Monte Carlo Localization: Efficient Position Estimation for Mobile Robots, by Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun. Proceedings of the 16th AAAI, Orlando, FL, 1999, pp. 343-349. (pdf)
- Coastal Navigation: Mobile Robot Navigation with Uncertainty in Dynamic Environments, by Nicholas Roy, Wolfram Burgard, Dieter Fox, and Sebastian Thrun. Proceedings of the International Conference on Robotics and Automation, 1999.
- Dervish: An office-navigating robot, by Illah Nourbakhsh, R. Powers, and S. Birchfield. AI Magazine, vol. 16 (1995), pp. 53-60. (pdf)
Part 1: Midterm project demos
The first part of this assignment is the mid-semester update and demo of your
robotics project. In the absence of another task, consider the default task to demonstrate
to be Monte Carlo or Markov Localization - this fits well with the second half
of this assignment (to implement MCL in simulation). Some details:
- Set up a time to demo your project -- most likely some time around Friday 3/26
through Sunday 3/28 or Monday 3/29. Regardless of the demo time, you should
have a wiki update on your project's progress by the 28th that includes these
pieces:
- One or more videos of the latest system behavior(s)
- One or more still images of the system, work on it, or other related scenes!
- A description of the features added since Hw #4.
The exact nature of these features will depend on the robot and
project, but in the absence of other plans, features that dovetail with this
assignment (Monte Carlo Localization or Markov Localization) are the default.
- Include a paragraph or two on your overall progress thus far, and include
a paragraph with additional detail on your goals for the project during the latter half of
the semester. You are welcome to change directions during the second half of the term
-- either slightly, in response to issues that have appeared, or drastically, switching
to a new platform -- if you would like to do so.
Part 2: Monte Carlo Localization
Overview
The basic objective of this piece of the assignment
is to implement Monte Carlo localization in a robot simulator, using a
known map (and culling particles that pass through walls), along with a
range sensor and a simple "vision" sensor.
The starting files for this assignment are available at this link.
MCL is a passive localization algorithm, in that the robot does not
need to be doing anything in particular for MCL to work. Thus, the robot can be
remote-controlled, wandering randomly, or trying to accomplish some task (such as
the Mini Grand Challenge tasks). For this hw, it is absolutely OK to control the
simulated robot via the keyboard -- indeed, this will certainly help with the
development of the algorithm.
MCL can take advantage of odometric sensing, i.e., the robot's estimate of how far it has
traveled. With a known map, it does not require any other sensing, but typically
at least one other sensor is used. For this project, you should use at least the simulated
range sensor provided.
The simulator can model a noisy robot - simply pass the fraction of noise you'd
like added to your commands as the third parameter to the setVels
function, e.g.,
r.setVels(50,0,0.1) # 10% noise
will send the robot forward at 50 cm/sec, give or take 10% (so 45 to 55 cm/sec).
I would suggest keeping error levels low at first - in fact, you might start with no error at
all.
Capabilities
In order to build MCL step-by-step, you should make sure these key presses correspond to the following behaviors:
- P (capital P) should place a number of particles representing pose hypotheses - this hw
doesn't require you to start them away from the robot's actual starting position. Such an approach thus
solves the robot-tracking problem, but not the robot-kidnapping problem.
As an extension, you might have an option that spreads
them out uniformly within the environment, as well. (A placement sketch appears after
the "Representing particles" section below.)
- M (capital M) should move all of the particles (with some noise!)
according to a differential-drive motion model. Be careful about how you update the particles here -- since
the robot is differential drive, you should model each displacement as a sequence of three pieces:
- a rotation from the initial orientation to the orientation of the displacement itself
- a straight-ahead (or back) motion from the initial to the final (x,y) location
- a rotation from that displacement orientation to the robot's final orientation
Each of those three pieces will likely have error associated with it, and the result will be
the familiar "crescent" distribution. Python's random module has uniform, Gaussian, and other
distributions from which to sample. The crescent is most obvious for large amounts of noise in the
rotations. However, to begin, I would suggest keeping the noise small.
In particular, in the absence of noise, you will want to be sure your particles all follow the
robot's odometry exactly! (A sketch of this update appears after the "Representing particles" section below.)
- S (capital S) should consider the sensory data (from the simulated map) and update
the probabilities of the particles according to the sensor error model being used. To facilitate
the development of this step, there is no noise in the range finder to begin with. However,
you should add noise (on your own, so that you can easily change its magnitude) in order to demonstrate
localization in the presence of noisy sensing as well as noisy odometry.
You should add color to each particle to check that this is working. You will not want a linear
map from probability to color, since the probabilities tend to cluster
at low values that are visually indistinguishable. One way to use color effectively is to
sort the particles by probability and to vary a color band (R, G, or B), or all three,
according to a particle's position in that sorted list. (A sketch of this step appears
after the "Getting particles' ranges" section below.)
- R (capital R) should resample the population of particles, based on the updated
probabilities that were computed in the S step, above. Ideally, your system should
resample fairly and broadly. That is, there should
be no particle with non-zero probability that is algorithmically
ruled out from surviving to the next generation (fair).
In addition, your resampling should make sure that
every particle whose probability (after sensing) is greater than 1/N is represented
in the subsequent generation, potentially multiple times, in proportion to its probability (broad).
(N is the number of particles being used; a resampling sketch appears after the
"Other provided facilities" section below.)
- G (capital G) should use a timer (using time.time(), not
time.clock(), since the latter is known to run differently on different OSes)
to run all three of the above steps every so often, e.g., every
second as you drive around.
Also, you're encouraged to add other features as you wish -- see the
extensions list below --
but please do implement these keypresses in order to make it easy to test the basic
functionality in a consistent way!
Representing particles
Though not required, I would suggest representing
each particle (each pose hypothesis) as a list of five elements:
[ x coordinate, y coordinate, orientation in radians, probability, range data ]
The probability should be set initially to 1.0, and the range data may be
initially set to 0.0. This last component represents the ray-traced range from the particle to the
map's walls.
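As a concrete illustration of the P and M steps, here is a minimal sketch that uses the
five-element representation above. It is only one possibility: the noise standard deviations
(sigma_rot, sigma_trans) are placeholder assumptions to tune, and the odometric displacement
(dx, dy, dtheta) is assumed to come from the robot's pose estimate since the last update.

import math
import random

def make_particles(n, x, y, theta):
    # P step: n pose hypotheses, here all at the robot's known starting pose
    # [ x, y, orientation in radians, probability, range data ]
    return [[x, y, theta, 1.0, 0.0] for _ in range(n)]

def move_particles(particles, dx, dy, dtheta, sigma_rot=0.05, sigma_trans=2.0):
    # M step: three-piece differential-drive update, with noise in each piece
    dist = math.hypot(dx, dy)
    for p in particles:
        x, y, theta = p[0], p[1], p[2]
        # direction of the displacement (for a pure rotation, keep the heading)
        heading = math.atan2(dy, dx) if dist > 1e-6 else theta
        # 1) rotation from the old orientation to the displacement direction
        rot1 = heading - theta + random.gauss(0.0, sigma_rot)
        # 2) straight-line travel, with translational noise
        d = dist + random.gauss(0.0, sigma_trans)
        # 3) rotation from the displacement direction to the final orientation
        rot2 = (theta + dtheta) - heading + random.gauss(0.0, sigma_rot)
        theta += rot1
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        p[0], p[1], p[2] = x, y, theta + rot2

With sigma_rot = sigma_trans = 0.0, the particles follow the robot's odometry exactly -- a
useful first test -- and large rotational noise is what produces the crescent.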
Getting particles' ranges
The getParticleData(x, y, thr, rangeSensorHeadingd=0) function takes
in a pose (and an optional direction in which the range sensor is pointing) and returns
a four-tuple: the distance to the environment and the r, g, and b components of the
color of that nearest obstacle.
One warning: the units are not consistent, in that the particle pose is expected in radians and
the sensor heading in degrees. Fortunately, math.radians and math.degrees
are available -- unfortunately, their availability leads to such inconsistencies...
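To make the S step concrete, here is one possible weighting scheme: a Gaussian error model
that compares each particle's ray-traced range to the robot's actual reading. The Gaussian
model and its sigma_range are assumptions to experiment with, and robot_range stands for the
robot's own (possibly noisy) range reading:

import math

def sense_update(particles, robot_range, sigma_range=10.0):
    # S step: weight each particle by how well its predicted range
    # matches the robot's actual range reading
    for p in particles:
        dist, r, g, b = getParticleData(p[0], p[1], p[2])  # sensor heading defaults to 0 degrees
        p[4] = dist                                        # remember the ray-traced range
        err = dist - robot_range
        p[3] *= math.exp(-err * err / (2.0 * sigma_range ** 2))
    # normalize so the probabilities sum to 1
    total = sum(p[3] for p in particles)
    if total > 0:
        for p in particles:
            p[3] /= total

For the rank-based coloring suggested above, sorted(particles, key=lambda p: p[3]) gives the
order, and a shade such as int(255 * i / len(particles)) for the i-th particle in that order
varies a color band evenly, no matter how the raw probabilities cluster.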
Other provided facilities
On the graphics side of the program, there is a function setParticles that
takes in a list of particles and places the
appropriate number of particles at those locations and orientations, with the range
data provided. Give ranges of 0.0, or omit the final component, to avoid
seeing the range data. You can access this through the queue that ferries graphics-updating
commands from the robot-controller (main) to the graphics thread (GuiPart
in ThreadedHandler.py). Here is an example call from within main:
self.queue.put([self.setParticles, particles])
where particles is the list of particles, represented as described above.
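One resampling scheme that satisfies both requirements is systematic (low-variance)
resampling: every particle with non-zero probability has a chance to survive (fair), and any
particle whose probability exceeds 1/N is guaranteed at least one copy (broad). Here is a
minimal sketch, assuming the probabilities were normalized to sum to 1 during the S step:

import random

def resample(particles):
    # R step: walk the cumulative probabilities with N evenly spaced
    # pointers that share a single random offset
    n = len(particles)
    step = 1.0 / n
    pointer = random.uniform(0.0, step)
    cumulative = particles[0][3]
    i = 0
    survivors = []
    for _ in range(n):
        while cumulative < pointer and i < n - 1:
            i += 1
            cumulative += particles[i][3]
        p = particles[i]
        survivors.append([p[0], p[1], p[2], 1.0, p[4]])  # probability resets to 1.0
        pointer += step
    return survivors

The G step can then call the three helpers in sequence whenever time.time() - last_update
exceeds a second or so, and hand the fresh population to
self.queue.put([self.setParticles, particles]) as shown above.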
What should you provide?
You should provide a .zip file with all of your code (most likely, posted to your
team's project site), and a write-up of ~1 page that describes
- the approach you took to implementing the motion model
- the approach you took to implementing the sensor model
- the approach you took to implementing particle resampling
- two or more screen shots showing your system in action, especially
if they highlight an added feature or some interesting facet of your implementation
In addition, you should describe any additional features you implemented (see the list below)
and how to test and see their effects.
Additional features
Completing the basic MCL algorithm,
with a cluster of particles tracking the robot even in the presence of
modest amounts of noise (5%, or 0.05, as the third parameter to setVels),
is the foundation of the MCL problem and would, alone, earn about 80% of the credit for it.
For full credit on the MCL piece of this hw, you should add an additional
feature of your own choosing - perhaps one that would be useful in your project, if
MCL will factor into it, or one that you simply want to experiment with. Here are some
possible ideas, but by no means all of them:
- Implement a Markov-localization variant, where you divide the space up into different cells
and adjust the probability of each cell holding the robot at each update.
- You might propose a subset of particles at each generation, based on the
last-sensed value and the map itself. This would require determining
which locations in the map would have been most likely to generate
that particular sensor value.
- For the graphically minded, you might choose to improve how the
particles are visualized throughout the process, e.g., you might make
less likely particles smaller. This would involve delving into the
code for specifying the characteristics of each visible object. Fortunately,
it is not that deeply buried. Keep in mind that you'll turn in all of the
files, so I'll be able to test things with whatever changes you make to any of them.
- Speaking of which, any other features or improvements to the code
as it stands may also be part of your work for this assignment. Simply
describe what you did and how to see the effects!
- You might offer a series of different ways of placing/resetting the particles, e.g.,
uniformly through the environment, on the robot but with random orientations,
or fully aligned with the robot. In addition, you might provide a way of
changing the number of particles on the fly, e.g., through the keyboard
or algorithmically.
- You might allow the user to change the noise model or sensor model
to be either "more forgiving" or "less forgiving" as the simulation runs.
In addition, you might allow the user to experiment with different forms
of motion or sensor models: uniform, triangular, or Gaussian distributions.
- You might determine a method for measuring the entropy of the
sets of particles -- researching and implementing (and reporting) the entropy
would suffice, but you could go even further to create an entropy-based
navigation system, as well. (A small entropy sketch appears below.)
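For the entropy idea just above, one standard measure is the Shannon entropy of the
normalized particle probabilities: near zero when the particles agree on a single pose, and
log N when they are spread evenly. A minimal sketch, computed after the S step (and before
resampling resets the probabilities):

import math

def particle_entropy(particles):
    # Shannon entropy of the normalized particle probabilities
    h = 0.0
    for p in particles:
        w = p[3]
        if w > 0:
            h -= w * math.log(w)
    return h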