Harvey Mudd College
Computer Science 154 - Robotics
Assignment H

Behavior-based Control

Lab #3 write-up
Alvin Kou, Brian Ji Ho Shin, Fernando Matto
4/01/01

Introduction

A robot does not need range sensors (such as sonar) in order to execute a wall-following algorithm. The Rug Warrior has three touch sensors, two infrared emitters and one infrared detector, shaft encoders, and two light sensors. The purpose of this project is to build a behavior-based controller that allows the Rug Warrior to roam the halls of a building and remember where it has been.

 

Part 1 - Testing the waters

Start by familiarizing yourself with the robot and its capabilities by programming it to perform a series of tasks in Interactive C. Interactive C is the programming environment available for the Rug Warrior (as well as for the Handy Boards used in the Extinguisher project). An online reference is available, as well as documentation in the Rug Warrior's assembly guide.

Checking the sensors

As a starting point, you should write Interactive C programs to use and test the sensitivity of the Rug Warrior's subsystems: the touch (bump) sensors, the infrared emitters and detector, the shaft encoders, the light sensors, and the microphone.

Chapter 6 in the Rug Warrior Pro Assembly Guide describes a number of example programs, and Chapter 7 in Jones & Flynn's book suggests a method for implementing a behavior-based architecture using the multitasking capabilities of Interactive C. Use these resources to get a sense of what the robot can do.
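A sensor test program in Interactive C can be as small as a loop that prints a reading to the LCD a couple of times a second. The sketch below is only an illustration of that idea; it assumes the Rug Warrior library provides a bumper() call that returns the current bump-switch reading (the exact sensor-access calls are documented in the assembly guide).

    /* Minimal sensor test sketch (bumper() is assumed from the library). */
    void main()
    {
        while (1) {
            printf("\nbumper = %d", bumper());  /* show the raw reading on the LCD */
            sleep(0.5);                         /* update twice per second */
        }
    }

The same loop, with the call swapped out, works for checking the photocells, microphone, or shaft encoders.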

 

Part 2 - Wall following

The basic idea is to build a program that will follow walls in an indoor environment.

 

Part 3 - Returning home

Once the wall follower is working, the final task is to keep track of "landmarks," i.e., large right or left turns. Then, add the capability of returning to the robot's approximate starting point upon hearing a signal (a clap or another loud noise). Basically, the Rug Warrior should reverse direction, start following the wall on its other side, and stop when it has passed the "landmarks" it encountered on the trip out.

You might also consider creating landmarks (bright lights that would be picked up by the photodetectors, for example) to help this homing procedure.
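One way to realize the landmark bookkeeping described above is to keep a counter of large turns on the trip out and count it back down on the return trip, stopping when it reaches zero. This is only a sketch of the specification, not the implementation described in the Results section below (which ultimately did not use landmarks); noteLandmark() and the returnHome flag are illustrative names, and stopRobot() stands for whatever routine halts the robot.

    int landmarks = 0;     /* large turns seen on the trip out         */
    int returnHome = 0;    /* set to 1 once the return signal is heard */

    /* Call whenever the wall follower makes a large right or left turn. */
    void noteLandmark()
    {
        if (!returnHome) {
            landmarks++;           /* outbound: remember the turn   */
        } else {
            landmarks--;           /* inbound: one landmark passed  */
            if (landmarks <= 0)
                stopRobot();       /* passed them all: we are home  */
        }
    }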

I apologize that I haven't written a more detailed description of this; I'll let you know when the specification is revised.


Results

You can download the code here: rug.cc

PART 2:
To start using the Rug Warrior, we created several functions to monitor the sensors. These functions are independent processes started by the main function and killed after the Rug Warrior reaches home. The sensor-monitoring functions are Bumper(), Infrared(), Microphone(), and Photocell(). Each of them maintains a right/left variable to indicate which side the signal comes from (where appropriate) and a _Detected variable to indicate whether a signal is present. These variables are used by the wallFollow() function to control the robot's motions.
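The sketch below shows roughly what one of these monitoring processes might look like. It is only a sketch: the global names (bumper_Detected, rightBumper, leftBumper), the bumper() library call, and its bit layout are assumptions, not copied from rug.cc.

    int bumper_Detected = 0;   /* 1 while any bump switch is pressed    */
    int rightBumper = 0;       /* 1 if the contact is on the right side */
    int leftBumper = 0;        /* 1 if the contact is on the left side  */

    /* Runs forever as its own process; wallFollow() only reads the globals. */
    void Bumper()
    {
        int b;
        while (1) {
            b = bumper();                   /* assumed library call: mask of switches */
            bumper_Detected = (b != 0);
            rightBumper = ((b & 1) != 0);   /* assumed bit assignments */
            leftBumper  = ((b & 2) != 0);
            defer();                        /* give the other processes a turn */
        }
    }

In main(), such a process would be started with start_process(Bumper()) and later stopped by passing the returned process id to kill_process().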

We also created short functions that set the speed and rotation of the robot according to the action required. These functions are stopRobot(), driveStraight(), turnLeft(), turnRight(), rotateLeft(), etc. The rotate functions set a rotational velocity for a certain amount of time; this time was calibrated by testing the robot repeatedly until we reached the desired results.
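Since the setSpeed() process described below consumes the global "speed" and "Rotation" variables, a natural way to write these helpers is to have them simply set those globals. The sketch below assumes that design; the velocity constants, the timed-rotation signature, and the function bodies are guesses rather than the actual rug.cc code.

    int speed = 0;       /* forward velocity consumed by setSpeed()    */
    int Rotation = 0;    /* rotational velocity consumed by setSpeed() */

    void stopRobot()     { speed = 0;   Rotation = 0;   }
    void driveStraight() { speed = 70;  Rotation = 0;   }  /* speeds are illustrative     */
    void turnLeft()      { speed = 70;  Rotation = 30;  }  /* arc left while moving ahead */
    void turnRight()     { speed = 70;  Rotation = -30; }

    /* Rotate in place for a calibrated amount of time. */
    void rotateLeft(float seconds)
    {
        speed = 0;
        Rotation = 60;
        sleep(seconds);   /* duration found by trial and error on the hall surface */
        stopRobot();
    }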

Two other supporting functions remain: LCD() and setSpeed(). Both of these run as independent processes. The LCD() function prints out the strings LCD_line1 and LCD_line2, which are set by the wallFollow() function; it also has an option to produce a beep, but this option was not used. The setSpeed() function simply sets the speed and rotation of the robot to the variables "speed" and "Rotation". Note that we multiply both variables by -1, since we wanted the face of the robot to be its front.
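The sketch below is a guess at the shape of setSpeed(). It assumes the Rug Warrior library's drive(translate, rotate) velocity call; the actual rug.cc may drive the motors differently.

    /* Runs as its own process.  Whatever wallFollow() puts in the speed and
       Rotation globals (declared in the sketch above) is pushed to the motors.
       The -1 factors flip the drive direction so the side we treat as the
       robot's face becomes its front. */
    void setSpeed()
    {
        while (1) {
            drive(-1 * speed, -1 * Rotation);   /* assumed Rug Warrior velocity call */
            defer();
        }
    }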

The most important function is wallFollow(). It constantly checks the states of the sensor variables and acts accordingly. It first checks the returnHome variable, which is set when you flash a light at the robot. When that happens, the robot turns 180 degrees (calibrated by testing) and we swap the values of the lastrightWall and lastleftWall variables, so that on the way back the robot follows the opposite wall. If you flash the light at the robot again, it kills all processes and quits the program. As for the sensors, wallFollow() checks the infrared first (the code makes it easy to follow what it does there) and then the bumpers. If no sensor is active, it simply turns toward the side of the last wall it found.
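A rough outline of this priority structure is shown below. It is a paraphrase, not the literal rug.cc loop: the flag names (returnHome, IR_Detected, bumper_Detected), the 1.5-second turn time, and the one-shot turnedAround guard are illustrative, and the second-flash shutdown is omitted.

    /* Globals set by the sensor processes: returnHome, IR_Detected,
       bumper_Detected, rightWall, leftWall (names are illustrative). */
    int lastrightWall = 1;   /* side of the most recently seen wall */
    int lastleftWall = 0;

    void wallFollow()
    {
        int tmp;
        int turnedAround = 0;                 /* do the 180-degree turn only once   */
        while (1) {
            if (returnHome && !turnedAround) {
                rotateLeft(1.5);              /* calibrated 180-degree turn         */
                tmp = lastrightWall;          /* hug the opposite wall going back   */
                lastrightWall = lastleftWall;
                lastleftWall = tmp;
                turnedAround = 1;
            } else if (IR_Detected) {
                /* zig-zag along the wall; see the fragment after the next paragraph */
            } else if (bumper_Detected) {
                /* back off and turn away from the contact */
            } else if (lastrightWall) {
                turnRight();                  /* drift back toward the last wall    */
            } else {
                turnLeft();
            }
            defer();
        }
    }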

Basic use of rightWall/leftWall and lastrightWall/lastleftWall: the former variables are controlled by the Infrared() function and reflect the current reading of the IR sensor, while the latter record the last wall the robot encountered. This is necessary because the IR detections are simply on or off, so it is hard to tell how far the robot is from the wall. When the IR turns on (say, on the right), the robot steers slightly to the left, so the detection soon turns off; when it does, the robot steers back toward the right to meet the wall again.
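In code, this bang-bang steering amounts to a couple of branches inside the wall-following loop. The fragment below, building on the sketches above, paraphrases the idea for the right-hand wall; the left-hand case mirrors it, and the body is not the exact rug.cc logic.

    /* Zig-zag steering for the right-hand wall.  rightWall is set by
       Infrared(); lastrightWall/lastleftWall remember the most recent wall. */
    void followRightWall()
    {
        if (rightWall) {            /* IR sees the wall: ease away from it */
            lastrightWall = 1;
            lastleftWall = 0;
            turnLeft();
        } else if (lastrightWall) { /* lost the wall: ease back toward it  */
            turnRight();
        }
    }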

PART 3:
The requirement to remember landmarks for this part of the project was not very clear to us, and it seems unnecessary: since the robot is always following walls, it will end up back at the same place. To make the robot return home, simply flash a light at it. It will turn 180 degrees (this works best on the hall surface, since the turn was calibrated there) and start heading back toward its original position. Then place the flashlight at home, and the robot will stop once it reaches it.
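The light-driven part of this behavior lives in the Photocell() process. The sketch below only illustrates how such a process could raise the returnHome flag when it sees a bright flash; the analog() port number and the brightness threshold are assumptions, and the second detection that stops the robot at home is not shown.

    int returnHome = 0;   /* same flag read by wallFollow() in the sketches above */

    /* Watches a photocell and raises returnHome when the light level jumps. */
    void Photocell()
    {
        int level;
        while (1) {
            level = analog(1);        /* assumed photocell port            */
            if (level > 200)          /* illustrative "bright" threshold   */
                returnHome = 1;       /* wallFollow() reacts to this flag  */
            defer();
        }
    }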