Pixel-powered robots
This project will investigate algorithms that learn environmental representations (maps) from input images (pixels). Building on 2016's RCs to Robots project, this research will seek to reproducibly quantify a 3D environment from multiple images of it. The work will most likely start with machine-learning algorithms and will include several hands-on conversions of remote-control vehicles into autonomous robots.

We will start by following in the footsteps of others who have converted RC devices into autonomous ones, looking for ways to make that process straightforward, accessible, and low-cost. From there, we will add sensing, mostly wireless cameras, in order to make the resulting platforms as capable as possible. Most of the sensing effort will be computer vision: experimenting with flexible and efficient software to interpret the incoming images. Because RC vehicles and wireless cameras are so inexpensive, we hope that this project will make it possible for more institutions and groups to take on sophisticated robotics challenges, even without substantial resources. Join in!
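To give a flavor of the kind of vision processing involved, here is a minimal sketch (not the project's actual pipeline) of a first step toward quantifying a 3D environment from multiple images: grab two frames from an inexpensive wireless camera's video stream, match features between them, and estimate the relative camera motion. The stream URL and camera intrinsics below are placeholders, and the opencv-python and numpy packages are assumed.

```python
import cv2
import numpy as np

STREAM_URL = "http://192.168.1.42:8080/video"  # placeholder wireless-camera URL

# Read two frames from the camera stream.
cap = cv2.VideoCapture(STREAM_URL)
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
cap.release()
if not (ok1 and ok2):
    raise RuntimeError("could not read two frames from the camera stream")

# ORB features are patent-free and fast -- in the spirit of the low-cost goal.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = orb.detectAndCompute(cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY), None)

# Match descriptors between the two frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# With (roughly) known intrinsics, the essential matrix recovers the
# rotation and translation direction between the two viewpoints.
K = np.array([[700.0, 0.0, frame1.shape[1] / 2],
              [0.0, 700.0, frame1.shape[0] / 2],
              [0.0, 0.0, 1.0]])  # guessed intrinsics; calibrate for real use
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("estimated rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Repeating this across many frames, and triangulating the matched points, is the standard route from raw pixels toward a 3D map of the environment.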
Mentor: Professor Zach Dodds
Zach has been a professor at Harvey Mudd since 1999. His general research interests are in computer vision and robotics. In addition to research and teaching, he likes to play in foam pits with his children.
Required Background
While background in computer vision and robotics is wonderful, it's not required. This project will be enjoyed most by students who are willing to take on unfamiliar areas and make them their own. Students interested in working on this project should feel confident applying their programming ability to new frameworks and new problems (e.g., vision and robotics). We will use Python as much as possible, a language we consider accessible enough that it isn't worth worrying about learning beforehand; if necessary, we'll drop into C or C++, but it's almost never necessary!


