LARS - Laser-guided Autonomous Robotic System

Team Other
Jacob Creed
Brandt Erickson
CS 154 Website
Prof. Dodds

LARS is being developed as a fully autonomous robot that can create a two-dimensional floor map of an unknown environment. From this, LARS can evolve toward building a full three-dimensional model of the environment that an outside user could navigate. One possible use of LARS is in a search-and-rescue operation. For instance, it could explore a building damaged by an earthquake before humans (or other robots) are sent in. The rescue workers could then explore the virtual environment and learn where potential risks, survivors, or the safest routes lie.

The key points of our initial approach to LARS are as follows. First, we plan to use binocular vision to gather distance information about the world. The main sensor will consist of two cameras with a laser mounted between them. This will allow LARS to see the laser with each camera and triangulate the distance to the object on which the laser is shining. Concurrently, infrared sensors will handle short-range collision avoidance, while another process compiles and updates the map.
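The triangulation step above can be sketched with the standard stereo-disparity relation: the laser dot appears at slightly different horizontal pixel positions in the two cameras, and the shift is inversely proportional to distance. This is a minimal illustration, not the project's actual code; the function name and the particular parameter values are hypothetical.

```python
def laser_distance(px_left, px_right, focal_px, baseline_m):
    """Estimate distance to the laser dot via stereo disparity.

    px_left / px_right: horizontal pixel coordinate of the laser dot
    in each image, measured from the image center (same scan row).
    focal_px: camera focal length expressed in pixels (cameras assumed
    identical and parallel).
    baseline_m: separation between the two cameras in meters.

    Depth follows the pinhole-stereo relation Z = f * B / disparity.
    """
    disparity = px_left - px_right  # pixels; shrinks as distance grows
    if disparity <= 0:
        raise ValueError("laser dot must be nearer than infinity "
                         "(disparity must be positive)")
    return focal_px * baseline_m / disparity
```

For example, with a hypothetical 800-pixel focal length and a 10 cm baseline, a 40-pixel disparity corresponds to a dot 2 m away: `laser_distance(120, 80, 800, 0.1)` returns `2.0`.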

This approach follows the Three-Layer Architecture model described in Gat's paper. Doing this allows us to separate our functionality into the three typical classes of robotic algorithms: "fast, mostly stateless reactive algorithms with hard real-time bounds on execution time [i.e. avoiding walls], slow deliberative algorithms like planning [or mapping], and intermediate algorithms which are fairly fast, but cannot provide hard real-time guarantees [such as distance finding]." (Gat 5) This follows along the lines of Brooks' argument that an intelligent system "can not be designed and built as a single amorphous lump. It must have components." (Brooks 5)

LARS will be based on an event-driven approach. For example, if the IR sensors detect an obstacle, they will raise an event, which can then be reacted to by some other procedure - say turning left by 45 degrees. In this way, we can break each behavior into a separate, reactive task: an algorithm that simply waits for its triggering event, gains priority when that event occurs, executes quickly, and returns to waiting for the next trigger. Meanwhile, we can be taking distance measurements or plotting out the map in other processes. Simply put, the event-driven approach that we are taking mirrors the design guidelines that both Gat and Brooks recommend.
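One way to realize this event-driven scheme is a small priority dispatcher: reactive events (like an IR obstacle) outrank slower ones (like logging a distance reading), so the fast layer is always serviced first. The sketch below is illustrative only; the class and event names are assumptions, not part of the project's code.

```python
import heapq

class EventQueue:
    """Minimal priority-based event dispatcher.

    Handlers are registered with a numeric priority (lower number =
    higher priority). Raised events wait in a heap; run_once() pops
    and executes the highest-priority pending handler.
    """
    def __init__(self):
        self._heap = []       # (priority, seq, handler)
        self._handlers = {}   # event name -> (priority, handler)
        self._seq = 0         # tie-breaker preserving FIFO order

    def register(self, name, handler, priority):
        self._handlers[name] = (priority, handler)

    def raise_event(self, name):
        priority, handler = self._handlers[name]
        heapq.heappush(self._heap, (priority, self._seq, handler))
        self._seq += 1

    def run_once(self):
        """Handle the single highest-priority pending event, if any."""
        if not self._heap:
            return False
        _, _, handler = heapq.heappop(self._heap)
        handler()
        return True

# Hypothetical usage: an obstacle event preempts a queued map update.
handled = []
q = EventQueue()
q.register("ir_obstacle", lambda: handled.append("turn left 45"), priority=0)
q.register("distance_reading", lambda: handled.append("update map"), priority=1)
q.raise_event("distance_reading")
q.raise_event("ir_obstacle")
q.run_once()  # handles "ir_obstacle" first despite arriving second
```

The heap keeps the reactive layer's hard-real-time tasks ahead of deliberative work without dedicating a thread to each event type.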

Progress and Performance Results

Future Work
Obviously, there is quite a bit of room for future work on this project. First, we could incorporate the y-value into the distance-measurement calculation. This should be relatively easy to do: simply raise or lower the laser relative to the camera and add a second lookup table. Second, we could add a second camera, as well as fix the image capturing so that it is automatic rather than manual.
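A calibration lookup table of the kind mentioned above can be queried with simple linear interpolation between measured calibration points. This is a generic sketch under the assumption that the table maps a laser-dot pixel coordinate to a measured distance; the table values shown are made up for illustration.

```python
from bisect import bisect_left

def interp_distance(table, pixel):
    """Interpolate a distance from a calibration lookup table.

    `table` is a list of (pixel, distance_cm) pairs, sorted by pixel,
    built by measuring the laser dot's image position at known
    distances. Queries outside the table clamp to the nearest entry.
    """
    pixels = [p for p, _ in table]
    i = bisect_left(pixels, pixel)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (p0, d0), (p1, d1) = table[i - 1], table[i]
    t = (pixel - p0) / (p1 - p0)          # fractional position
    return d0 + t * (d1 - d0)             # linear interpolation

# Hypothetical calibration data: dot at pixel 100 -> 50 cm, etc.
calib = [(100, 50.0), (200, 100.0), (260, 200.0)]
```

Adding the y-value would mean a second table of the same shape, indexed by the dot's vertical pixel coordinate.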

Once automatic binocular vision is added to the project, we could use the distance measurements to construct a distance-accurate floor map of our environment. Some version of localization could be employed as well, although we would probably need to add at least one more camera to the robot to get a worthwhile sensor reading of the current position, especially if we wanted to use Monte Carlo Localization.

Another option for future work is to attempt some of the original goals that we abandoned early on. For instance, we could build a system that takes the pictures and the map and builds a mosaic of the images. We could also place the camera on a servo and tilt it up and down to construct a three-dimensional environment map.

Yet another option would be to add one or more additional robots and pursue any number of vision-based distributed-robotics projects. As can be seen, there are many possible directions in which to expand the LARS project further.