Team AIBO

Brian Bentow - Yu-min Kim - Joshua Lewis

Introduction

AIBO ERS-7M2

Team AIBO used Sony's ERS-7M2 robotic dog as a platform for learning robotics. We incorporated several of the AIBO's sensors (camera, IR, microphone, and touch sensors) into behaviors for object discovery and object tracking. Over the course of the project, our goals changed based on our experiences. During the first half of the semester, we spent most of our time researching prior work with the AIBO, setting up our development environment, and completing a few of the CMU RoboBits labs. During the second half, we created a behavior that relies primarily on camera input to find the ball and kick it into the goal. We used the CMPACK development framework for the majority of our implementation. We also evaluated the OPEN-R SDK and other accessory tools from Sony, such as the Remote Framework and R-CODE (also available from the OPEN-R site).
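At the lowest level, the ball-oriented behaviors come down to picking ball-colored pixels out of the camera image. The following is a minimal, hypothetical C++ sketch of that idea (color thresholding plus a centroid); the struct, function name, and threshold values are our own illustration and are not the CMPACK or OPEN-R API.

    // Hypothetical sketch (not CMPACK/OPEN-R API): find the centroid of
    // ball-colored pixels in a YUV camera frame by simple color thresholding.
    #include <cstdint>
    #include <cstddef>

    struct BlobResult {
        bool   found;      // true if enough ball-colored pixels were seen
        double cx, cy;     // centroid in pixel coordinates
        int    pixelCount; // number of matching pixels
    };

    // frame holds width*height pixels stored as interleaved Y, U, V bytes.
    BlobResult findBallBlob(const uint8_t* frame, int width, int height) {
        // Illustrative thresholds for an orange ball under indoor light;
        // in practice these would be calibrated on the actual field.
        const int Y_MIN = 60, U_MAX = 120, V_MIN = 150;

        long sumX = 0, sumY = 0;
        int count = 0;
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const uint8_t* px =
                    frame + 3 * (static_cast<std::size_t>(y) * width + x);
                if (px[0] > Y_MIN && px[1] < U_MAX && px[2] > V_MIN) {
                    sumX += x;
                    sumY += y;
                    ++count;
                }
            }
        }

        BlobResult r;
        r.pixelCount = count;
        r.found = count > 30;   // ignore tiny noise blobs
        r.cx = r.found ? static_cast<double>(sumX) / count : 0.0;
        r.cy = r.found ? static_cast<double>(sumY) / count : 0.0;
        return r;
    }

The framework's vision pipeline is considerably more sophisticated than this, but the underlying idea of classifying pixels by color and tracking the resulting region is the same.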

Approach

In the first seven weeks, we read papers from the CMU RoboBits website, watched RoboCup videos, and completed a few of the labs from the RoboBits course. During this time we also set up our development environment and worked through many hardware and software issues. In the second half of the semester, we took on the non-trivial task of creating the Follow Ball and Score Goal behaviors. As promised, we implemented a working Score Goal behavior without relying on computationally intensive algorithms such as Monte Carlo localization.
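To make the structure of a Score Goal behavior concrete, here is a hedged C++ sketch of how such a behavior can be written as a small reactive state machine that needs no localization, only the ball and goal bearings reported by vision. The types, names, and gains (Percepts, MotionCommand, the thresholds) are illustrative assumptions, not our actual CMPACK code.

    // Hypothetical sketch of a Score Goal behavior as a small state machine.
    #include <cmath>

    enum class State { SearchBall, ApproachBall, AlignWithGoal, Kick };

    struct Percepts {              // filled in each frame by the vision system
        bool   ballVisible;
        double ballBearing;        // radians, 0 = straight ahead
        double ballDistance;       // meters (estimated from blob size)
        bool   goalVisible;
        double goalBearing;        // radians
    };

    struct MotionCommand {
        double forward;            // m/s
        double turn;               // rad/s
        bool   kick;
    };

    // One step of the behavior: no map, no localization, just react to
    // what the camera currently sees.
    MotionCommand scoreGoalStep(State& state, const Percepts& p) {
        MotionCommand cmd{0.0, 0.0, false};

        switch (state) {
        case State::SearchBall:
            cmd.turn = 0.6;                                // spin until the ball appears
            if (p.ballVisible) state = State::ApproachBall;
            break;

        case State::ApproachBall:
            if (!p.ballVisible) { state = State::SearchBall; break; }
            cmd.turn    = 1.5 * p.ballBearing;             // servo onto the ball
            cmd.forward = 0.15;
            if (p.ballDistance < 0.10) state = State::AlignWithGoal;
            break;

        case State::AlignWithGoal:
            if (!p.goalVisible) { cmd.turn = 0.4; break; } // turn until the goal is seen;
                                                           // a fuller behavior would circle
                                                           // the ball to keep it close
            cmd.turn = 1.0 * p.goalBearing;                // line up with the goal
            if (std::fabs(p.goalBearing) < 0.1) state = State::Kick;
            break;

        case State::Kick:
            cmd.kick = true;
            state = State::SearchBall;                     // reacquire the ball afterwards
            break;
        }
        return cmd;
    }

Each vision frame, the behavior would be stepped once and the returned command handed to the walk and kick engines.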

Notes On Readings

Touretzky (8) implemented a neural-network-based learning algorithm on the AIBO that was able to generalize over time and solve the negative patterning problem. Our AIBO implements lower-level functions such as visual servoing in a style similar to Brooks' (1) methods: there is little state interposed in the sense-act loop. Unlike Horswill (3), we localize relative to the ball rather than to known architectural features. Unlike Vaughan (9), we did not use a potential-field method to guide our robot. Though the AIBO has a few repeated components (foot sensors, front legs, back legs), it does not have a modular design like the robots described by Yim (10). Since our field contains no obstacles, we did not use Moravec's (5) evidence grids for obstacle avoidance, and we did not use Fox's (2) MCL algorithm to localize within the field.
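As a concrete illustration of what we mean by a low-state sense-act loop, below is a hypothetical C++ sketch of proportional visual servoing: the ball's offset from the image center is mapped directly to head pan and tilt adjustments every frame, with no model or memory. The names and gains are assumptions for illustration only, not CMPACK or OPEN-R calls.

    // Hypothetical sense-act step in the spirit of Brooks: no map, almost no
    // state, just a proportional mapping from image error to head commands.
    struct HeadCommand { double pan; double tilt; };   // radians (increments)

    // imageX/imageY: ball centroid in pixels; width/height: image size.
    // Returns pan/tilt adjustments that move the ball toward the image center.
    HeadCommand trackBall(double imageX, double imageY, int width, int height) {
        const double kPan  = 0.002;   // illustrative gains (rad per pixel of error)
        const double kTilt = 0.002;

        double errX = imageX - width  / 2.0;   // positive: ball is to the right
        double errY = imageY - height / 2.0;   // positive: ball is low in the image

        HeadCommand cmd;
        cmd.pan  = -kPan  * errX;   // pan toward the ball
        cmd.tilt = -kTilt * errY;   // tilt toward the ball
        return cmd;
    }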

References

  1. R. Brooks. Achieving artificial intelligence through building robots. 1986.
  2. D. Fox, W. Burgard, F. Dellaert, S. Thrun. Monte Carlo Localization: Efficient Position Estimation for Mobile Robots. 1999.
  3. I. Horswill. The Polly System.
  4. S. Lenser, M. Veloso. Automatic Detection and Response to Environmental Change.
  5. H. Moravec. Robot Evidence Grids.
  6. H. Moravec. Robots, After All.
  7. I. Nourbakhsh, R. Powers, and S. Birchfield. DERVISH: An Office-Navigating Robot. 1995.
  8. D. Touretzky, N. Daw, and E. Tira-Thompson. Combining configural and TD learning on a robot. 2002.
  9. R. Vaughan, N. Sumpter, A. Frost, and S. Cameron. Experiments in automatic flock control.
  10. M. Yim, D. Duff, and K. Roufas. PolyBot: a Modular Reconfigurable Robot. 2000.