In the broadest sense, I have two research interests: building intelligent agents that make music with people, and using computation to understand human musical perception.
My long-term goal is to develop intelligent agents that can collaborate in real time, as first-class citizens, with musicians who are casually jamming. For instance, I'd like my agents to be able to play in the kind of environments that emerge when several acoustic musicians pull out their guitars, fiddles, flutes, etc. and start jamming on top of old-time, jazz, or blues tunes.
One aspect of my research that sets it apart from many other computer music efforts is its preoccupation with improvisation: as opposed to performing precomposed music, musicians in this domain spontaneously generate their own material. I call these agents Improvisational Music Companions (IMCs).
A primary tool in this research is machine learning, which provides a computational means of configuring music models to specific environments: a model's parameters are tuned automatically to explain, and generalize from, a given training dataset. With this technology, musical agents can be adapted as needed---trained on specific musicians, genres, songs, etc. by constructing datasets from what is played in those situations.
Computation is an ideal medium for understanding human musical perception because it forces vague hunches to become viable procedures, producing software that can be used both to probe musical processes and to spark creative new ideas about how to model them. To a large extent, learning algorithms are only as good as the data they receive---as they say: garbage in, garbage out---so successful machine learning begins with adequate representations of the musical phenomena being encoded, be they harmonic, melodic, rhythmic, or what have you. Computer simulation then allows systematic exploration of the trained models and of the features used to construct them. This systematic exploration lies at the core of my current research effort.
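To make the point about representations concrete, here is a toy sketch (the pitch numbers are arbitrary MIDI values, not drawn from any real dataset) contrasting an absolute-pitch encoding of a melodic fragment with a relative, interval-based one. The interval encoding is transposition-invariant, so a learner using it can recognize the same melodic pattern in any key:

```python
# Toy illustration: two encodings of the same melodic fragment.
# Pitches are MIDI note numbers (60 = middle C); the values are illustrative only.
melody = [60, 62, 64, 62, 60]                            # absolute-pitch encoding
intervals = [b - a for a, b in zip(melody, melody[1:])]  # interval encoding

# The interval encoding [2, 2, -2, -2] survives transposition:
transposed = [p + 5 for p in melody]                     # same fragment, up a fourth
transposed_intervals = [b - a for a, b in zip(transposed, transposed[1:])]
assert intervals == transposed_intervals  # same pattern, different key
```

Which encoding is "adequate" depends on what the model is meant to generalize over, which is exactly the kind of question systematic simulation can answer.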
As systematic exploration requires data, my immediate task is to develop a comprehensive database of melodic improvisations and their accompanying underlying harmonic contexts. This database will be symbolic: improvised melody will be stored as a sequence of semi-tone pitches occurring at specific metrical locations. Harmony will be similarly represented as a sequence of chord symbols. From this data, sequential learning algorithms will be used to construct probabilistic models. Probabilistic sequential models are ideal because they can be sampled, providing a means for the computer to generate new material. They can also assign a likelihood to existing material, providing a means for the computer to perceive what a musician plays.
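As a minimal sketch of the two operations described above---sampling to generate and likelihood to perceive---consider a first-order (bigram) Markov model over semitone pitch sequences. The class name, the add-one smoothing, and any training melodies are illustrative assumptions, not the actual models or data from the database:

```python
import random
from collections import defaultdict

class BigramPitchModel:
    """Toy first-order Markov model over symbolic pitch sequences."""

    def __init__(self):
        # counts[prev][next] = number of observed prev -> next transitions
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, melodies):
        # Count pitch-to-pitch transitions in the training melodies.
        for melody in melodies:
            for prev, nxt in zip(melody, melody[1:]):
                self.counts[prev][nxt] += 1

    def prob(self, prev, nxt):
        # Transition probability with add-one smoothing over the observed vocabulary.
        following = self.counts[prev]
        vocab = {p for row in self.counts.values() for p in row} | set(self.counts)
        total = sum(following.values()) + len(vocab)
        return (following[nxt] + 1) / total

    def likelihood(self, melody):
        # Perceive: how probable is an existing sequence under the model?
        p = 1.0
        for prev, nxt in zip(melody, melody[1:]):
            p *= self.prob(prev, nxt)
        return p

    def sample(self, start, length, rng=random):
        # Generate: produce new material by walking the transition table.
        melody = [start]
        for _ in range(length - 1):
            following = self.counts[melody[-1]]
            if not following:
                break  # dead end: no observed continuation
            pitches = list(following)
            weights = [following[p] for p in pitches]
            melody.append(rng.choices(pitches, weights=weights)[0])
        return melody
```

In practice one would score in log space and condition on the harmonic context as well, but `sample` and `likelihood` are precisely the generation and perception operations the text describes.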
For more information, please refer to my detailed research plan. Note: the linked file is a browsable PDF. For its internal links to work properly, you need Adobe Reader (I've tested it on version 7.0); if you have a Mac and view the file with Preview, the links won't work correctly.