Projects

The Gopher Grounds

What do natural selection, laser beams, statistical models, and gophers have to do with each other? When the pesky gophers in your backyard learn how to use a statistical model to avoid the laser beams you want to zap them with, you need to fight back. You decide to use a genetic algorithm, an optimization technique inspired by the biological process of natural selection, to generate traps that will trick even the smartest gophers into walking into their demise. Will this plan put your gopher problem to rest? Or will the gophers have the last laugh?
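The genetic algorithm itself is simple to sketch. The snippet below is a minimal, self-contained illustration of the technique — not the project's actual trap encoding. Here a candidate trap is a hypothetical bit string, and fitness counts how many cells match an (invented) ideal configuration; the real project evolves far richer trap designs.

```python
import random

random.seed(0)

# Hypothetical "ideal trap" encoding; the real traps are more complex.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(trap):
    # Count cells that match the target configuration.
    return sum(t == g for t, g in zip(trap, TARGET))

def mutate(trap, rate=0.1):
    # Flip each cell independently with probability `rate`.
    return [1 - c if random.random() < rate else c for c in trap]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto
    # a suffix of the other.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Selection, crossover, and mutation together mirror natural selection: fit designs survive, recombine, and occasionally change at random, so the population drifts toward ever-better traps.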


What Makes Neural Architecture Search Work?

Imagine you are a data scientist designing a neural network. After weeks of tinkering, you finally create a neural network that performs pretty well on your dataset. However, it fails in some very important cases. How can you edit your model so that it’s more robust? Maybe you just need some tiny tweaks here and there? Or will the changes you have to make be so profound that you might as well scrap your idea entirely and start anew? When all hope seems lost, you recall the framework of Neural Architecture Search, and a metric called reach...


Identifying Bias in Data

How can you tell if a coin is fair? How about a die? How about a company’s hiring process? Statistical hypothesis tests can help us answer these questions by analyzing the fairness of a given dataset. Our team is interested in using novel hypothesis tests as tools to identify bias in machine learning training data and to gain insight into the processes that generated it. We illustrate methods for testing null hypotheses on real-world datasets, highlighting potential industry applications of this technology.
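The coin question gives the flavor of how such a test works. As a sketch (not the team's novel tests), here is an exact two-sided binomial test written from scratch: under the null hypothesis that the coin is fair, it computes the probability of seeing an outcome at least as extreme as the one observed.

```python
from math import comb

def binomial_pvalue(heads, flips, p=0.5):
    """Exact two-sided binomial test: the probability, assuming a
    fair coin (null hypothesis), of any outcome no more likely
    than the one observed."""
    probs = [comb(flips, k) * p**k * (1 - p)**(flips - k)
             for k in range(flips + 1)]
    observed = probs[heads]
    # Sum the probabilities of all outcomes at least as "surprising"
    # as the observed count (small tolerance handles float ties).
    return sum(pr for pr in probs if pr <= observed + 1e-12)

borderline = binomial_pvalue(60, 100)  # ~0.057: fails a 5% significance test
damning = binomial_pvalue(80, 100)     # far below 0.001: strong evidence of bias
```

A small p-value means the data would be very surprising if the coin were fair, so we reject fairness; the same logic carries over from coins to hiring records and training datasets.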


Geometrization of Bias

Suppose a new machine learning algorithm has just been invented, claimed to be unlike any other before. How can we determine if this is true? There is another problem: like many modern machine learning algorithms, this new one is a black box. How can we learn more about it? We investigate a way to capture bias in algorithms using the inductive orientation vector. Given a classification problem where we want to classify a test set, an algorithm assigns more probability mass to some solutions and less to others. The inductive orientation vector captures the distribution of probability mass over the solution space and provides us with a way to compare and evaluate algorithms.
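The idea can be made concrete with a toy example. Below is a minimal sketch (not the project's actual implementation): we estimate an algorithm's inductive orientation vector by running it repeatedly on a fixed problem and recording how much probability mass it places on each possible labeling of a tiny test set. All names and weights here are invented for illustration.

```python
import random
from collections import Counter

def inductive_orientation(algorithm, solution_space, trials=2000, seed=0):
    """Estimate the inductive orientation vector: the probability
    mass an algorithm places on each candidate solution, estimated
    by repeated (possibly stochastic) runs."""
    rng = random.Random(seed)
    counts = Counter(algorithm(rng) for _ in range(trials))
    return [counts[s] / trials for s in solution_space]

# Toy solution space: the four possible labelings of a two-item test set.
space = ["00", "01", "10", "11"]

uniform_guesser = lambda rng: rng.choice(space)  # no preference: unbiased
biased_guesser = lambda rng: rng.choices(space, weights=[1, 1, 1, 7])[0]

u = inductive_orientation(uniform_guesser, space)
b = inductive_orientation(biased_guesser, space)

# Compare the two black boxes by the total-variation distance
# between their orientation vectors.
tv = 0.5 * sum(abs(x - y) for x, y in zip(u, b))
```

Even though both algorithms are black boxes, their orientation vectors reveal that one spreads its mass evenly while the other concentrates it on a single solution — exactly the kind of comparison the vector makes possible.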


The Predator's Purpose

You're surrounded by predators, who want to eat you before you eat lunch. If they're gathered near a food pile, should you risk your life for a quick dash to eat, or steer clear and potentially starve to death? You watch to see if they've noticed you. This ability to perceive the intentions of others could save your life. Through a series of experiments, we measure how intention perception improves a software agent's likelihood of survival in a multi-agent predator-prey simulation. Does intention perception make any noticeable difference to survival?


The Gopher's Gambit

Hey Admiral Ackbar, is it a trap? Being able to tell the difference between traps intended to harm you and random collections of debris could be life-saving. We test whether an agent’s ability to perceive intention through the examination of environmental artifacts provides measurable survival advantages. We move a squad of artificial gophers through a series of room-like environments, each of which could be a dangerous trap designed to kill them, and equip some of the gophers with statistical hypothesis tests that allow them to distinguish designed traps from randomly generated, most likely safe, configurations. Gophers with this ability tend to survive much longer than those without it.


The Futility of Bias-Free Learning and Search

Building on the view of machine learning as search, we prove the need for bias in learning, quantifying its role in increasing the chances of success. To be biased in favor of some targets implies being biased against others; we demonstrate that bias is a conserved quantity, so no algorithm can be favorably biased towards a large proportion of targets simultaneously. We also show that finding a favorably biasing distribution over a fixed set of datasets is hard, unless the collection of datasets itself is already favorable. Our results apply to machine learning, AI, evolutionary algorithms, and many other subdisciplines of artificial learning.


Probabilistic Abduction

Imagine you walk outside one morning and find that your car window is smashed and all your valuables are missing. You decide that the most likely explanation for these observations is that a thief robbed your car. But how did you arrive at this conclusion? This process of identifying the most likely explanation for a set of observations is called abductive inference. Although we often use abduction to explain surprising events, it remains a largely heuristic, distinctly human form of inference. We justify the use of abduction by presenting a formalization of abduction in the context of machine learning, information theory, and probability.
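One simple way to cast the car-window story in probabilistic terms — a sketch only, not the paper's formalization — is as maximum a posteriori inference: among competing hypotheses, pick the one maximizing P(H | E) ∝ P(E | H) · P(H). All the hypotheses and numbers below are invented for illustration.

```python
# Hypothetical priors and likelihoods for the smashed-window example.
priors = {"thief": 0.01, "hailstorm": 0.001, "stray baseball": 0.05}
# P(window smashed AND valuables gone | hypothesis)
likelihoods = {"thief": 0.9, "hailstorm": 0.3, "stray baseball": 0.01}

def best_explanation(priors, likelihoods):
    """Abduction as MAP inference: score each hypothesis H by
    P(E | H) * P(H), normalize, and return the winner."""
    posterior = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}
    return max(posterior, key=posterior.get), posterior

best, post = best_explanation(priors, likelihoods)
```

Even though a stray baseball is a priori more common than a thief, the thief hypothesis explains *both* observations — the broken window and the missing valuables — so it dominates the posterior, matching the intuitive abductive leap.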


The Hero's Dilemma

You and your crew suddenly find yourselves near a droid that hasn't noticed you yet. The droid is powerful, so a fight isn't in your best interests, but a surprise attack could potentially disable it before it knows what hit it. With luck, you might also escape detection, avoiding a potentially deadly firefight. If you're detected, you lose the element of surprise, but attacking when you might escape discovery seems foolish. What should you do? Given our previous research showing that an agent's ability to perceive the intentions of others increases its chances of survival, we introduce a simple two-player adversarial game, the Hero's Dilemma, that allows us to measure this increase in survival rate. We find this ability makes a huge difference.
