The goal of this project was two-fold: to design and implement two modules for the Nomad 200 simulator, one that statelessly navigates an environment without crashing for as long as possible, and another that follows the walls in its environment to circumnavigate obstacles or simply explore.
As suggested by the assignment, we used data from the Nomad's on-board sonar to establish its immediate surroundings in the simulator for the survivor program. Fairly early on we decided that though the Nomad has 360 degrees of sonar coverage, we would only need to pay attention to the front half, since those sensors depict the immediate future of the robot if it continues on its current heading. For the wall-follower, we initially attempted to use the infrared sensors, since they provide better information about the robot's close-range surroundings. As it turned out, the required tolerances were within the capabilities of the sonar sensors, so we used those instead.
To keep our robot from being voted off the island, we had to direct it such that it could navigate its environment at a constant speed for as long as possible without colliding with an obstacle.
The survivor program uses three primary pieces of information to decide its next rotational velocity: a scaled reading of the three sonars on the front of the robot, and two more readings from the sets of four sonars on either side of those front three. Essentially, the crv() function determines which side poses the more imminent threat and turns away from it. To prevent oscillatory wandering into corners, the robot only changes direction if the difference between the two sides is above a tuned threshold. Once the robot has determined which direction it will turn, it computes a rotational speed based on the frontal readings combined with the side it's turning away from. In addition, the program gets panicky when it is in particularly tight corners. If the frontal readings are stronger than the zeroBarrier parameter (which I tuned to improve the robot's performance), it determines which side the closer readings are coming from and turns very strongly away from that side.
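The decision logic above can be sketched in C++. This is a hypothetical reconstruction, not the original code: the crv() name and the zeroBarrier parameter come from the report, but the sign convention (positive = left turn), the MAX_TURN constant, and the exact scaling are assumptions. Readings are scaled so that larger means closer, as the zeroBarrier comparison implies.

```cpp
#include <cmath>

const double MAX_TURN = 100.0;  // illustrative "panic" turn rate

// Decide the next rotational velocity from scaled sonar readings.
// turnDir persists between calls: -1 = turning right, +1 = turning left.
double crv(double front, double left, double right,
           int &turnDir, double hysteresis, double zeroBarrier) {
    // Panic mode: strong frontal readings mean a tight corner --
    // turn very strongly away from whichever side reads closer.
    if (front > zeroBarrier) {
        turnDir = (left > right) ? -1 : 1;
        return turnDir * MAX_TURN;
    }
    // Only change direction if one side is clearly closer than the
    // other; this hysteresis prevents oscillatory wandering into corners.
    double diff = left - right;
    if (std::fabs(diff) > hysteresis)
        turnDir = (diff > 0) ? -1 : 1;
    // Rotational speed combines the frontal reading with the side the
    // robot is turning away from.
    return turnDir * (front + ((turnDir < 0) ? left : right));
}
```

For example, with a moderate frontal reading and the left side clearly closer, the function commits to a right turn and keeps that direction on subsequent calls until the side difference again exceeds the threshold.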
Our instruction code for the Nomad simulator is written in C++. The speed was set at compile time for simplicity.
Initially, we developed an elaborate system to find sensor readings that indicated a wall by checking the relative angles of adjacent sensors; it worked beautifully after much fumbling with the Law of Cosines and the Law of Sines. Then we realized that this was unnecessary. Trigonometry is easy when all the angles are 90 degrees!
With this issue resolved, a fairly straightforward strategy soon materialized. The robot would start off by finding a wall and turning so that the wall was on its left side. Once it was tracking a wall (indicated by one of the left sonar sensor readings being under a certain threshold), it would go straight. If it lost track of a wall, the robot would make a left turn, and go straight until it found a wall again. If the robot sensed something in front of it, it would turn so the impending obstacle was on its left, and track it. Since all the walls are at 90 degree angles, all the turns are at 90 degree angles.
Obviously this approach is not terribly robust. While we haven't tested the robot in non-right angle environments, we imagine that it will at best lurch along very slowly. However, in more orthogonal environments, things work quite nicely.
In the survivor program, most of our problems resulted from cases of robot and obstacle orientation (such as the sharp, protruding corner in survivor_map_2 upon which the third run eventually crashed) which were not taken into account by the program. Once the robot stopped falling for the same obstacles over and over again, it was just a matter of properly tuning the parameters.
For the wall-following program, most of the problems were due to what is often termed gross stupidity on the part of myself, Patrick Vinograd. I spent a good amount of time debugging code that applied the Law of Cosines, while all along I was forgetting to take a square root. More insidious was the fact that the robot mysteriously neglected to make the expected turns from a south heading to an east heading. I had based the turning code on modular arithmetic, but I missed one crucial mod operation, which caused me to spend two hours debugging everything but the line in question. With that problem solved, things were pretty easy.
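Both bugs are easy to state in code. These helpers are illustrative sketches, not the original code:

```cpp
#include <cmath>

// The forgotten square root: the Law of Cosines gives the SQUARE of the
// third side, so the distance is the square root of the right-hand side.
double thirdSide(double a, double b, double gammaRad) {
    return std::sqrt(a * a + b * b - 2.0 * a * b * std::cos(gammaRad));
}

// The missing mod: headings live in [0, 360), so every turn must be
// reduced modulo 360 or a turn can yield a heading like 450 that never
// matches its target. Adding 270 instead of subtracting 90 keeps the
// result non-negative under C++'s % operator.
int turnLeft90(int heading)  { return (heading + 90) % 360; }
int turnRight90(int heading) { return (heading + 270) % 360; }
```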
With the relatively straightforward sampling techniques used, the robot didn't have many problems keeping track of its environment. Probably the biggest flaw in its behavior occurs when the robot needs to make a left turn because the wall it was tracking has come to an end. Oftentimes, the robot stops before it has fully cleared the wall, so when it tries to turn and go forward, it is impeded. It then has to turn back to its original heading, scoot forward, and try again. While this performance is not optimal, the robot very rarely collided with the wall, so the code was left as-is for simplicity's sake.
The survivor program performs admirably when it can find a space to circle indefinitely. In tight quarters, it can get stuck trying to circle if its speed is too high. While the robot does not typically explore much of its environment, it seemed to do well avoiding collisions with obstacles in most cases. Even the bus in Speed eventually crashed...
In right-angle universes, the wall-follower works well. It tracks walls, detects when those walls disappear, and avoids crashing into most sizable obstacles. Some tuning could certainly be done to improve the behavior, particularly the thresholds used for obstacle detection. The required-left-turn behavior mentioned above was sub-optimal, but not a terrible problem. All in all, we feel that the wall following is sound.
One of the hardest parts of this project was the math involved in our solution to the wall-follower section. As Patrick will attest, square roots are important when solving quadratic equations. He also assures me that modular math is impossible. <it is! - P>
Despite all the frustration we experienced, we feel that nothing compares to coercing a physical robot of one's own creation to perform its task. The Nomad simulator does what it's told when it's told to do it. Our extinguisher model, on the other hand, seemed to have a deranged intellect all its own.
Though there was no physical robot to try to kill me this time, I'm sure that if we ran our code on the actual Nomad, behavior similar to the Extinguisher robot's would ensue.