Project 2, Algorithm 1 results
For this project, we implemented the Bug 1 algorithm described in the problem statement. When the robot encounters an obstacle, it runs a wall-following routine to find the point on the obstacle's boundary closest to the destination. It then loops back around to that point and drives in a straight line toward the goal. If the robot encounters another obstacle, it re-enters the wall-following routine and handles that obstacle the same way. This approach is robust, but it is also very slow; the robot will only be fooled by terrain that confuses the wall follower itself.
In more detail, the algorithm puts the robot in one of three states: follow, seek, or blitz. The robot begins in the follow state: it locks on to the nearest wall and begins to map it out, looking for the point on the wall-following path closest to the destination. Once that point has been found and one full circuit of the wall has been completed, the robot enters seek mode. In this mode, the robot continues to wall follow, but stops once it comes within 2.5 inches of the recorded closest point. The robot then switches into blitz mode, where it rotates to face the destination and drives forward rapidly. If another obstacle is encountered, the robot returns to the follow state; if the destination is reached, the algorithm terminates.
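The three-state logic above can be sketched as a pure transition function. The state names match the report; the boolean conditions (circuit complete, within tolerance of the closest point, obstacle ahead, goal reached) are hypothetical stand-ins for whatever sensor checks the actual source file performs:

```python
# States from the report; DONE added to mark termination.
FOLLOW, SEEK, BLITZ, DONE = range(4)

def next_state(state, *, circuit_done=False, at_closest=False,
               obstacle=False, at_goal=False):
    """One transition of the hypothetical follow/seek/blitz state machine.

    The keyword flags are assumed sensor predicates, not names from the
    actual implementation.
    """
    if state == FOLLOW and circuit_done:
        return SEEK            # full circuit mapped: head for the closest point
    if state == SEEK and at_closest:
        return BLITZ           # within tolerance of the closest point
    if state == BLITZ:
        if at_goal:
            return DONE        # destination reached: terminate
        if obstacle:
            return FOLLOW      # new obstacle: start wall following again
    return state               # otherwise, stay in the current state
```

Keeping the transition logic separate from the motion commands makes the state machine easy to test without a robot or simulator.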
Since the robot will never loop back to precisely the same place, the source file defines a tolerance for how close the robot must come to a point before it changes states. For the runs below, the tolerance was set to 2.5 inches. Since this is a relatively loose tolerance, the speed could probably be increased; in several test runs, the robot was never forced to make a second loop because it missed a target point. The only other difficulty in this project came from overlooking the properties of the arctangent in quadrants other than the first. When the robot enters the blitz state, it first orients itself using the arctangent of the offset between the destination and its current position. For angles outside the first quadrant, the signs of the arguments must be taken into account to recover the correct angle.
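The quadrant pitfall can be shown in a few lines. A plain arctangent of the ratio dy/dx discards the signs of the two offsets, while `math.atan2` takes them separately and returns the correct heading (the specific offsets here are illustrative, not from the test runs):

```python
import math

# Suppose the goal is behind and to the left of the robot:
# dx < 0, dy > 0, so the true heading lies in the second quadrant.
dx, dy = -3.0, 3.0

# atan of the ratio loses the signs and lands in the wrong quadrant.
naive = math.degrees(math.atan(dy / dx))      # -45 degrees

# atan2 keeps the signs of both arguments and recovers the true angle.
correct = math.degrees(math.atan2(dy, dx))    # 135 degrees
```

Using `atan2(dy, dx)` instead of `atan(dy / dx)` also avoids the division-by-zero case when the goal lies straight up or down from the robot.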
One optimization to this implementation would be to change the seek state so that it takes the shorter path back to the target point. This is partially implemented: the robot counts iterations of the wall-following code while in the follow state. Once the state changes, one only needs to check whether the iteration at which the target point was found is greater than half the total iteration count. If so, the robot should follow the right wall rather than the left to reach the point quickly. The only piece missing from the code is a generalization of the wall-following routine that supports both left and right wall following.
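The direction check described above amounts to a one-line comparison. The helper below is a hypothetical sketch of that missing piece, assuming the iteration counts are recorded as stated:

```python
def follow_direction(target_iteration, total_iterations):
    """Choose the wall-following direction that reaches the recorded
    closest point sooner (hypothetical helper, not in the source).

    target_iteration: iteration at which the closest point was found.
    total_iterations: iterations needed for one full circuit of the wall.
    """
    if target_iteration > total_iterations / 2:
        # The closest point lies past the halfway mark of the circuit,
        # so backtracking along the right wall is shorter.
        return "right"
    return "left"
```

With this in place, the worst-case backtrack during the seek state drops from a nearly full circuit to half a circuit.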
Below are test runs showing the robot navigating bugmapA and bugmapB.
The code can be found here: