New and Improved MapTool
The MapTool is at the heart of our Hide and Seek implementation. We will use the MapTool as a central repository for all of the sensory data taken in by the robots as well as the control center for each of their actions.
We planned to implement a variety of algorithms for both the hider(s) and the seeker(s).
The algorithms below are listed roughly in the order in which we intended to implement them. Depending on how things went, algorithms could be added to or subtracted from these lists. The eventual idea was to pit various hider and seeker algorithms against each other and see which ones are more effective.
Seeker Algorithms
Wander Aimlessly
The seeker just wanders aimlessly, hoping to bump into the hider.
rMCL
The seeker puts particles everywhere, as if it were trying to localize itself without an initial position. However, the purpose of these "seeker particles" is not to localize the seeker but to give it an idea of where the hider might be. Whenever the seeker "sees" one of the seeker particles and does not see the hider, the seeker particle dies, because it is obviously not the location of the hiding robot.
Particles destroyed in this fashion are randomly redistributed about the map. The seeker's strategy comes directly out of its particles: all the seeker must do is head toward the largest clump of particles and, in doing so, eliminate the largest number of hypotheses as to the hider's location. This process continues until the seeker eventually finds the hider.
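In sketch form, the per-update culling rule is simple. The snippet below is only illustrative: `visible` and `random_free_point` are hypothetical callables standing in for MapTool's actual visibility test and particle placement code.

```python
def update_sparticles(seeker_pose, sparticles, visible, random_free_point):
    """One rMCL culling pass: any hypothesis the seeker can currently
    see (without seeing the hider there) is disproven, so it dies and
    is respawned at a random free spot on the map."""
    return [random_free_point() if visible(seeker_pose, p) else p
            for p in sparticles]
```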
Machine Learning (Not Implemented)
Why bother trying to figure out a good algorithm for finding hiding robots when we can just pass all of the sensor data into a machine learning engine and let it figure out how to find robots? Our learning algorithm category of choice would probably have been evolutionary algorithms, especially genetic algorithms (because Python has a beautiful module for them).
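We never got this far, but the skeleton of a genetic algorithm is small enough to sketch in plain Python. Everything here is illustrative: the genome encoding, the parameters, and the `fitness` function are made up, and a real run would score each genome by simulating a seeker controller and rewarding fast finds.

```python
import random

def evolve(fitness, genome_len=10, pop_size=50, generations=100,
           mutation_rate=0.1):
    """Generic genetic algorithm: truncation selection, one-point
    crossover, and per-gene Gaussian mutation over real-valued genomes."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):             # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, 0.2)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```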
Hider Algorithms
Wander Aimlessly
Same as seeking, except we now hope to stay hidden by wandering aimlessly.
Hide in a Corner (Not Implemented)
Every five-year-old knows that the best place to hide is in a corner. This algorithm would tell the hiding robot to go to a corner of the map and stay there, hoping to evade detection for as long as possible.
pKr2MCL (Not Implemented)
Winner of the longest-most-confusing-acronym prize, it stands for "perfect knowledge reverse reverse Monte Carlo Localization". This algorithm is the antithesis of rMCL. First, it "cheats" by knowing the starting location of the seeking robot (hence the "perfect knowledge" component). From there, it simply runs rMCL itself; from those calculations, it can figure out where the largest clump of seeker particles is going to be and try to get as far away from that clump as possible.
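A single pKr2MCL step might look like the following sketch. This was never implemented, so everything here is hypothetical: `clump_of` stands in for whatever clump-finding the seeker itself uses, and `free_cells` for the map's free grid squares.

```python
def pkr2mcl_step(seeker_particles, free_cells, clump_of):
    """One hider move under pKr2MCL: simulate the seeker's rMCL
    bookkeeping (possible because we know its starting pose), locate
    its largest particle clump, and head for the free cell farthest
    from it."""
    cx, cy = clump_of(seeker_particles)   # where the seeker will search next
    return max(free_cells,
               key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)
```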
Machine Learning (Not Implemented)
Same thing as for the seeker, except we reward the robot for staying hidden instead of finding the other robot.
Additions to MapTool
Most of the progress thus far has been in getting the MapTool revamped to accommodate the kind of functionality required for hide and seek. Below is a quick summary of the enhancements made to MapTool. These are in addition to the previously documented changes.
Greatly improved the speed of placing particles randomly on the map. Now uses a bounding box to get close and only then resorts to dart throwing (see the sketch after this list).
Improved the organization of Program and Map classes
Each now frees memory
More intelligent loading of maps and robots into the Program (including the loading of multiple robots)
In Map, normalizations and unit conversions are now both consistent and correct
Zoom buttons ('x','z') now zoom to center of screen instead of origin
Moving around the map now works in a more intuitive way
Movement of the map is now scaled by the zoom level
Retooled MapTool to deal with multiple robots
Can switch between them
All commands now only apply to current robot
Reworked robot and particle drawing to account for multiple robots
Currently selected robot is drawn with thicker lines
Particles are outlined with the color of their parent robot
Added Seeker and Hider robot types
Differentiated by drawn "H" and "S" on the screen
Different colors for each
General grid for any map (resizes automatically)
Allows for clustering by grid spaces
All calculations are done on the fly instead of through a lookup table, so the grid has no memory overhead.
rMCL algorithm (see below)
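As an illustration of the bounding-box speedup in the first item above, here is a minimal dart-throwing sampler. The `is_free` test and the bounding-box coordinates are placeholders for MapTool's real map queries, not its actual API.

```python
import random

def random_free_point(is_free, bbox, max_tries=1000):
    """Dart throwing inside a bounding box: sample uniformly from the
    box around the free space and reject points that land in walls."""
    xmin, ymin, xmax, ymax = bbox
    for _ in range(max_tries):
        x = random.uniform(xmin, xmax)
        y = random.uniform(ymin, ymax)
        if is_free(x, y):
            return (x, y)
    raise RuntimeError("free space too sparse for dart throwing")
```

Because the box hugs the free space, most darts hit on the first few tries, which is what makes this so much faster than throwing darts at the whole map.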
rMCL Algorithm (Implementation Details)
Conceptually, the seeker starts out looking for one or more hiding robots. In practice, the seeker can start and stop looking for robots in real time as the rMCL algorithm is running. It can also look for any type of robot, even other seeking robots (even itself!). However, for the purposes of explaining the algorithm, let us consider one seeker trying to find two hiders.
At first the robot has no idea where the other two robots are, so it randomly distributes a number of sParticles (described above) around the map as possible locations of the hiders (separate distributions for each hider).
With the initial distribution in place, the seeker now needs somewhere to look, and starts a process called rMCLreassess, which consists of a number of steps:
Finding the largest clump of sParticles
As mentioned previously, the area the robot can move in is divided into a grid. Each grid square calculates the number of particles within it and its eight neighbors, a number which becomes the "count" for that grid square. The counts for all the grid squares are calculated separately for each target.
The largest count, across all grid squares and all targets, is then chosen. This determines both which of the two hiders the seeker will actively look for and the grid square that it will target.
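A minimal sketch of this clump-finding step, assuming each target's particles are (x, y) tuples and the grid uses square cells (none of these names match MapTool's actual classes):

```python
from collections import Counter

def best_clump(particles_by_target, cell_size):
    """Return (target, cell) for the grid square whose 3x3 neighborhood
    holds the most particles, across all targets. Cell indices are
    computed on the fly from coordinates, so the grid stores nothing."""
    best = (-1, None, None)
    for target, particles in particles_by_target.items():
        cells = Counter((int(x // cell_size), int(y // cell_size))
                        for x, y in particles)
        # candidate centers: every occupied cell and its eight neighbors
        candidates = {(cx + dx, cy + dy) for (cx, cy) in cells
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
        for cell in candidates:
            cx, cy = cell
            count = sum(cells.get((cx + dx, cy + dy), 0)
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1))
            if count > best[0]:
                best = (count, target, cell)
    return best[1], best[2]
```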
Searching for positions from which it can see the target
The robot does not actually have to arrive at the grid square to check whether the hider is there (in fact, being there might make it hard to see a nearby robot if the camera is pointed the wrong way); rather, it only needs to see the targeted grid square. Thus, given the robot's knowledge of its camera's capabilities, it calculates all the grid squares from which the targeted square is visible (hereafter "spotting points").
This is currently accomplished by a brute-force search through each of the grid squares, using the center of each square as the spotting point from which the robot checks whether it can see the targeted grid square. Of course, there are better algorithms for this. A simple flood fill from the targeted grid square that stopped at walls and checked, when expanding, whether each of its children was a spotting point would be a fair bit more efficient than our current algorithm (perhaps an improvement for next year's class). However, the real computational bottleneck is the next step.
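The suggested flood fill might look like this sketch, where `passable` marks non-wall grid squares, `can_see(a, b)` is a visibility test between squares, and `max_range` bounds the camera's reach in cells (all hypothetical names):

```python
from collections import deque

def spotting_points(target, passable, can_see, max_range):
    """Flood fill outward from the targeted grid square, stopping at
    walls and at the camera's maximum range, and collect every reached
    square from which the target is visible."""
    seen, frontier, spots = {target}, deque([target]), []
    while frontier:
        cell = frontier.popleft()
        if can_see(cell, target):
            spots.append(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nxt not in seen and passable(nxt)
                    and abs(nxt[0] - target[0]) <= max_range
                    and abs(nxt[1] - target[1]) <= max_range):
                seen.add(nxt)
                frontier.append(nxt)
    return spots
```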
Plotting a path to the nearest spotting point
Although we learned all sorts of cool pathfinding algorithms in class (like random trees), none of them were feasible to implement given the time scale that we found ourselves faced with. As such, our method of finding a path from the seeker's current position to the nearest spotting point is a simple breadth-first search through the grid, stopping when we first hit a spotting point.
If anyone wished to optimize our code, this would be the place to do it. Put in some algorithm that has a little bit of theory behind it, as opposed to our attempt.
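Concretely, the search we use amounts to this sketch (placeholder names again; `is_spot` tests whether a grid square is a spotting point and `passable` marks non-wall squares):

```python
from collections import deque

def path_to_spotting_point(start, passable, is_spot):
    """Breadth-first search over grid squares from the seeker's cell,
    returning the shortest grid path to the first spotting point hit."""
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if is_spot(cell):
            path = []
            while cell is not None:       # walk the predecessor chain back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in prev and passable(nxt):
                prev[nxt] = cell
                frontier.append(nxt)
    return None                           # no spotting point reachable
```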
Once this process is concluded, the robot has a path toward the nearest spotting point from which it will be able to see its "best guess" as to the location of a hider. The seeker proceeds along this path until one of the following occurs:
The seeker arrives at the spotting point and looks at the targeted grid square.
The seeker finds the hider it had been looking for while en route to the spotting point.
The seeker travels a certain distance along the path (currently 10 moves of 50 distance units each).
Whenever any of these occur, the robot reassesses using the previously explained algorithm.
The seeker continues this process of reassessing and moving until it finds all the hiders that it is looking for. (At which point we experience "undefined behavior", which, though it doesn't post naked pictures of you on the internet, does usually crash the program, unless Dodds is watching.)
Multiple Robots - The MapTool now supports multiple robots. Not only that, there are now two new kinds of robots: Hiders and Seekers. (These are in addition to the normal robot class.) The non-standard robots are marked with an "H" and an "S", respectively.
The Grid - In order for the seeker to quickly find the largest clump of seeker particles, it must discretize the space. In this picture, a seeker is trying to find two hiders using the grid.
Zoom! - Zooming in and out now zooms you toward the center of the map, as you see it, rather than toward some arbitrary "origin" point. Try the animated gif, it's dizzying!
Manual Seeking (Animation!) - An animation of one seeker trying to find a hider. Although the seeker's movements are human-controlled, the deletion of any particles "seen" by the seeker is automated. This animation also highlights the new rotatable camera (all other instruments can also be rotated or even moved, relative to the robot). While the current robot hardware cannot actually rotate its camera, it's nice to know that it is nevertheless supported in the software. (The dotted red lines indicate the field of view of the seeker's camera; the display of all the other instruments has been turned off.)
Pathfinding - a few examples of how our seeker finds its way during rMCL
Smart Pathfinding - This is an example of a really smart path found by our seeker. It actually veers slightly away from the targeted grid square because it finds a closer spotting point by doing so.
Really Dumb Pathfinding (A B C) - These pictures detail a very silly path found by our algorithm. Due to a bug which is probably in the grid code somewhere, the seeker thinks that it can run off one end of the grid and end up on the other side of the map. This bug has been fixed in the most recent version of the code by adding a sanity check to make sure that each step of the path doesn't go more than a certain distance. (This would be another great speed-up: if you fix the bug in the grid, this check would no longer be necessary, and its absence would greatly increase the speed of the pathfinding algorithm.)
rMCL Animations! - A few animations of the seeker doing what it does best!
New commands in MapTool:
'g' - toggle grid on/off
'y' - deletes the current robot(s) and spawns one hider and two seekers.
'Y' - Like 'y' except the hiders' positions are random.
'M' - Run one step in the seeker's search for hiders.
'!' - Run entire rMCL simulation (like constantly hitting 'M' and 'W'). Also resets the window from the default so you can see everything.
'?' - Kills off any sParticles that a seeker can see. (Subset of 'M').
'[' ']' - Select between the available robots.
'C' - Change the color of the current robot randomly (note the capitalization)
'R' - Randomly respawn the current robot somewhere else on the map
'W' - Wander all robots a little ways
'E' - Wander all robots a long ways
'S' - Print to console whether each robot can see each other robot
'\' - Center on current robot
'|' - Center on the current target of the seeker (crashes if there does not exist such a thing)
'<' '>' - Rotate camera and sonar of current robot (also affects its particles)
'3' - Spawn a few particles randomly on the map (for current robot)
'4' - Spawn a LOT of particles randomly
The usual commands for a robot apply only to the current robot. For example, turning on sonars ('a') will only display the sonar of the currently selected robot. Turning on the sonars of the particles ('A') will only turn on the sonars of the particles of the currently selected robot.
Contrary to the usual behavior, particle drawing is off by default. If you wish to see the particles that are trying to localize a given robot, select the robot and press 'p'. Updating the particles can be accomplished with '.'.
The camera is on by default to show off the cool "cone of vision" dotted line effect. 'c' toggles camera drawing.
If we had gotten it set up, we would have controlled the remote robots from MapTool in the following way.
The Server Side
Our current model puts MapTool on a separate machine from the robots. This host will run a Python "master host" program which will act as an intermediary between each of the robots and the MapTool itself. Essentially, it will forward all the messages from the robots to the MapTool; as for messages from the MapTool, it will figure out which robot they are destined for and send them there. The main need for this "master host" program is that MapTool and the evolution clients want their input and output in slightly different formats. It will also handle all of the synchronization of the asynchronous messages being passed between the various robots and the server, allowing the host MapTool code to concentrate on algorithms rather than network interactions.
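We never wired this up, but a minimal relay in this spirit might look like the sketch below. The line protocol (an id announcement, then "robot_id:payload" lines), the ports, and all the names here are invented for illustration; only the forwarding idea comes from our design.

```python
import asyncio

robots = {}        # robot_id -> writer for that robot's client
maptool = [None]   # writer for the single MapTool connection

async def handle_robot(reader, writer):
    rid = (await reader.readline()).decode().strip()  # client announces its id
    robots[rid] = writer
    while line := await reader.readline():            # forward sensor data inward
        if maptool[0] is not None:
            maptool[0].write(rid.encode() + b":" + line)
            await maptool[0].drain()

async def handle_maptool(reader, writer):
    maptool[0] = writer
    while line := await reader.readline():            # route commands outward
        rid, _, cmd = line.partition(b":")
        target = robots.get(rid.decode())
        if target is not None:
            target.write(cmd)
            await target.drain()

async def main():
    robot_srv = await asyncio.start_server(handle_robot, "0.0.0.0", 9000)
    tool_srv = await asyncio.start_server(handle_maptool, "127.0.0.1", 9001)
    async with robot_srv, tool_srv:
        await asyncio.gather(robot_srv.serve_forever(),
                             tool_srv.serve_forever())

asyncio.run(main())
```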
The Client Side
The clients will run on the individual laptops on each of the robots. These laptops will run the main Python client as well as the VisionTool for the camera and the servoServer for the sonar. The client will gather the odometry and sensor data and send it to the server. In a perfect world, this data will be sent to the MapTool via the master host Python program, where there will be much calculating. Eventually, the MapTool will decide on a course of action for the robot and send the corresponding command, which will journey back through the master host to the robot's Python client.