New and Improved MapTool

The MapTool is at the heart of our Hide and Seek implementation. We will use the MapTool as a central repository for all of the sensory data taken in by the robots as well as the control center for each of their actions.


The Algorithms

We planned to implement a variety of algorithms for both the hider(s) and the seeker(s).

The algorithms below are listed roughly in the order in which we intended to implement them. Depending on how things went, algorithms could be added to or removed from these lists. The eventual idea was to try out various hider and seeker algorithms against each other and see which ones were more effective.

Seeker Algorithms

Hider Algorithms


Additions to MapTool

Most of the progress thus far has been in revamping the MapTool to accommodate the kind of functionality required for hide and seek. Below is a quick summary of the enhancements made to MapTool. These are in addition to the previous changes to MapTool.


rMCL Algorithm (Implementation Details)

Conceptually, the seeker starts out looking for one or more hiding robots. In practice, the seeker can start and stop looking for robots in real time as the rMCL algorithm is running. It can also look for any type of robot, even other seeking robots (even itself!). However, for the purposes of explaining the algorithm, let us consider one seeker trying to find two hiders.

At first the robot has no idea where the other two robots are, so it randomly distributes a number of sParticles (described above) around the map as possible locations of the hiders (separate distributions for each hider).
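As a rough illustration, this initialization step might be sketched as follows. This is not the actual MapTool code; the function name, particle count, and the choice to represent each sParticle as a plain (x, y) pair are our assumptions.

```python
import random

def init_particle_distributions(num_particles, width, height, num_hiders):
    """Scatter candidate (x, y) hider positions uniformly around the map,
    keeping a separate distribution (list of particles) for each hider."""
    return [
        [(random.uniform(0, width), random.uniform(0, height))
         for _ in range(num_particles)]
        for _ in range(num_hiders)
    ]
```

With two hiders, this yields two independent particle clouds, which the seeker refines separately as sensor data comes in.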

With the initial distribution in place, the seeker now needs somewhere to look, and starts a process called rMCLreasses, which consists of a number of steps:

  1. Finding the largest clump of sParticles

    As mentioned previously, the area the robot can move in is divided into a grid. Each grid square calculates the number of particles within it and its eight neighbors, a number which becomes the "count" for that grid square. The counts for all the grid squares are calculated separately for each target.

    Among all the grid square counts, among all the targets, the largest is chosen. This both determines which of the two hiders the seeker will be actively looking for as well as the grid square that it will target.

  2. Searching for positions from which it can see the target

    The robot does not actually have to arrive at the targeted grid square to check whether the hider is there (in fact, actually being there might make a nearby robot hard to see if the seeker is looking in the wrong direction); rather, it only needs to see the targeted grid square. Thus, given its knowledge of its camera's capabilities, the robot calculates all the grid squares from which it could see the targeted square (hereafter "spotting points").

    This is currently accomplished by a brute-force search through the grid squares, using the center of each square as the spotting point from which the robot checks whether it can see the targeted grid square. Of course, there are better algorithms for this. A simple flood fill from the targeted grid square that stopped at walls and checked (when expanding) whether each child square was a spotting point would be a fair bit more efficient than our current algorithm (perhaps an improvement for next year's class). However, the real computational bottleneck is the next step.

  3. Plotting a path to the nearest spotting point

    Although we learned all sorts of cool pathfinding algorithms in class (like random trees), none of them were feasible to implement given the time scale that we found ourselves faced with. As such, our method of finding a path from the seeker's current position to the nearest spotting point is a simple breadth-first search through the grid, stopping when we first hit a spotting point.

    If anyone wishes to optimize our code, this would be the place to do it: put in an algorithm with a little more theory behind it than our attempt.
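The clump finding of step 1 might be sketched as follows, assuming particles are (x, y) points and the map is a uniform grid (the cell size, grid dimensions, and function names are ours, not MapTool's):

```python
def clump_counts(particles, grid_w, grid_h, cell):
    """Particles per grid square, then each square's count plus its
    eight neighbors' counts, as described in step 1."""
    counts = [[0] * grid_w for _ in range(grid_h)]
    for x, y in particles:
        gx = min(int(x // cell), grid_w - 1)
        gy = min(int(y // cell), grid_h - 1)
        counts[gy][gx] += 1
    clumped = [[0] * grid_w for _ in range(grid_h)]
    for gy in range(grid_h):
        for gx in range(grid_w):
            total = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = gy + dy, gx + dx
                    if 0 <= ny < grid_h and 0 <= nx < grid_w:
                        total += counts[ny][nx]
            clumped[gy][gx] = total
    return clumped

def pick_target(distributions, grid_w, grid_h, cell):
    """Largest clump over all targets: returns (hider_index, (gx, gy))."""
    best = None
    for i, particles in enumerate(distributions):
        c = clump_counts(particles, grid_w, grid_h, cell)
        for gy in range(grid_h):
            for gx in range(grid_w):
                if best is None or c[gy][gx] > best[0]:
                    best = (c[gy][gx], i, (gx, gy))
    return best[1], best[2]
```

Because the maximum is taken over every target's grid, a tight cluster of particles for one hider will win out over a diffuse cloud for another, which is what steers the seeker toward its "best guess" hider.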
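The flood-fill improvement suggested in step 2 might look something like the sketch below. The walls grid and the can_see visibility predicate are hypothetical stand-ins for MapTool's map and camera model:

```python
from collections import deque

def find_spotting_points(target, walls, can_see):
    """Flood-fill outward from the targeted grid square, stopping at
    walls; each square reached is checked with can_see (a stand-in for
    the camera-capability test) to decide if it is a spotting point."""
    h, w = len(walls), len(walls[0])
    seen = {target}
    frontier = deque([target])
    spots = []
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < w and 0 <= ny < h
                    and (nx, ny) not in seen and not walls[ny][nx]):
                seen.add((nx, ny))
                if can_see((nx, ny), target):
                    spots.append((nx, ny))
                frontier.append((nx, ny))
    return spots
```

Unlike the brute-force scan, this visits each reachable square once and never crosses a wall, though as noted above the pathfinding step dominates the running time either way.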
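The breadth-first search of step 3 can be sketched roughly as follows (again, the grid representation and names are our assumptions):

```python
from collections import deque

def path_to_nearest_spot(start, spots, walls):
    """Breadth-first search over the grid from the seeker's square,
    stopping at the first spotting point reached; returns the path as
    a list of squares, or None if no spotting point is reachable."""
    h, w = len(walls), len(walls[0])
    spot_set = set(spots)
    prev = {start: None}           # also serves as the visited set
    frontier = deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur in spot_set:
            path = []
            while cur is not None:  # walk back to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < w and 0 <= nxt[1] < h
                    and nxt not in prev and not walls[nxt[1]][nxt[0]]):
                prev[nxt] = cur
                frontier.append(nxt)
    return None
```

Since every move between adjacent squares costs the same, BFS does find a shortest grid path; the inefficiency is that it explores outward in all directions rather than being guided toward the spotting points.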

Once this process is concluded, the robot has a path toward the nearest spotting point from which it will be able to see its "best guess" as to the location of a hider. The seeker proceeds along this path until one of the following occurs:

Whenever any of these occur, the robot reassesses using the previously explained algorithm.

The seeker continues this process of reassessing and moving until it finds all the hiders that it is looking for. (At that point we experience "undefined behavior", which, though it doesn't post naked pictures of you on the internet, does usually crash the program, unless Dodds is watching.)


Pictures


The Code!

Updated MapTool

New commands in MapTool:

Other notes:


The Setup

If we had gotten it set up, we would have controlled the remote robots from MapTool in the following way.