No classes are held today.
Fall semester classes begin at 8:10 AM. Convocation occurs at 11:00 AM.
In our early days at Mudd, how many of us took a turn in Keck - or is it Jacobs? - or Parsons? - only to find we were somewhere totally unexpected and unfamiliar? It may take a while to build useful mental models — maps — of the corridors of the Libra Complex, but it’s those internal maps that make ITR games, HvZ, and on-time arrival to class possible. Maps are even more important for robot tasks, because those systems lack other sources of environmental context; for example, accurate maps are the fundamental enabler of Google’s driverless cars. In this talk we present several types of maps that support robot navigation, drawing examples from the group’s work in creating and using them on a variety of robot platforms.
In On the Origin of Species (1859), Darwin wrote that he could imagine a pair of species, such as flowers and bees, evolving to adapt to one another. Darwin was imagining co-evolution. Over the next 150 years, scientists found considerable evidence for the co-evolution of species. However, only with the advent of computational methods have researchers been able to gain deep insights into co-evolutionary processes. In this talk, we describe an ongoing research and development project at Harvey Mudd that has resulted in a widely used software tool called “Jane.” We show some of the features that the 2012 Jane team developed and implemented, and we describe some of the ways that Jane has been used by researchers, including a recent (2011) study of co-evolution in the Galapagos.
Full semester and first half-semester courses must be added by this day.
What is graduate school like? Why would someone want to go? How does one apply? How do you decide where to apply? Is it true that you generally actually get paid to get a Ph.D. and there’s no tuition charged? (The answer to one of these questions is “yes”!) This informal colloquium talk (with plenty of time for Q&A) will address these questions and more. If you’re curious about graduate school or considering applying (this year or in the future), we encourage you to join us.
The concept of “determinism” of parallel programs and parallel systems has received a lot of attention since the dawn of computing, with multiple formal and informal definitions of deterministic execution. In this talk, we start by reviewing two related properties of task-parallel programs: determinacy and repeatability. Our focus is on structured task parallelism as exemplified by the Habanero-Java (HJ) programming language, and we observe that data races play a central role in establishing the determinacy of programs written in such languages.
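To make the link between data races and determinacy concrete, here is a minimal Java sketch, using plain java.util.concurrent threads rather than HJ’s async/finish constructs (the class and field names are illustrative only). Two unsynchronized tasks update the same counter, so the printed result can change from run to run:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of the kind of data race that breaks determinacy: two tasks
// write the same shared field with no synchronization, so the final value
// depends on the schedule and can differ from run to run.
public class RacyCounter {
    static int counter = 0;  // shared, unsynchronized state

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++;  // read-modify-write: races with the other task
            }
        };
        pool.submit(increment);
        pool.submit(increment);
        pool.shutdown();
        pool.awaitTermination(10, java.util.concurrent.TimeUnit.SECONDS);
        // A race-free version would always print 200000; this one usually does not.
        System.out.println("counter = " + counter);
    }
}

A race-free, structured version of the same computation would print the same value on every run, which is exactly the determinacy property discussed above.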
Type systems that prevent data races are a powerful tool for parallel programming, eliminating whole classes of bugs that are both hard to find and hard to fix. Unfortunately, it is difficult to apply previous work on such type systems to general task-parallel programs, as each such type system is designed around a specific synchronization primitive or parallel pattern, such as locks or disjoint heaps. In contrast, real-world task-parallel programs often have to combine multiple synchronization primitives and parallel patterns. We introduce a permissions-based type system for HJ called “Habanero Java with permissions” (HJp), which supports multiple patterns of parallelism, synchronization, and data access (e.g., task parallelism, object isolation, array-based parallelism). To demonstrate the practicality of this approach, we have ported 15 benchmarks from HJ to HJp, totaling almost 14,000 lines of code and covering a range of parallel patterns. The port required modifying only 5% of the source code on average. Further, HJp is a gradual type system, meaning that some or all of the permission annotations can be omitted, and the compiler will insert dynamic type-casts where necessary to ensure race-freedom at run time. We present a simple and effective algorithm for inserting these type-casts, as well as an efficient runtime approach for checking permissions on entry to specific regions of code.
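As a rough, hypothetical illustration of the gradual-typing idea (not the actual HJp API; the GuardedBox class and its methods are invented for this sketch), the Java code below shows the shape of a dynamic permission check that a compiler could fall back on where static permission annotations are omitted: reads and writes fail at run time unless the accessing task currently holds the corresponding permission.

// Hypothetical illustration, not the HJp API: each guarded object tracks
// whether the current thread holds read or write permission, and an access
// without the needed permission fails at run time instead of racing silently.
public class GuardedBox<T> {
    public enum Permission { NONE, READ, WRITE }

    private T value;
    // Per-thread view of the permission held on this object (illustrative only).
    private final ThreadLocal<Permission> held =
            ThreadLocal.withInitial(() -> Permission.NONE);

    public GuardedBox(T initial) { this.value = initial; }

    public void acquire(Permission p) { held.set(p); }
    public void release()            { held.set(Permission.NONE); }

    public T read() {
        if (held.get() == Permission.NONE) {
            throw new IllegalStateException("read without permission");
        }
        return value;
    }

    public void write(T newValue) {
        if (held.get() != Permission.WRITE) {
            throw new IllegalStateException("write without write permission");
        }
        value = newValue;
    }
}

This sketch only shows the shape of a runtime check; it does not show how permissions are transferred between tasks or how exclusivity is enforced. In HJp itself, as the abstract notes, the compiler inserts dynamic type-casts where necessary to preserve race-freedom in unannotated code.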
For HJ programs that do not use the HJp type system, we introduce a dynamic data race detection algorithm for structured parallel programs that overcomes limitations of past work on dynamic data race detection, such as worst-case linear space overhead per memory location, worst-case linear time overhead per memory access, dependence on sequential execution, dependence on a specific task scheduling technique, and generation of false positives and false negatives in their data race reports. We refer to our algorithm as SPD3 (Scalable Precise Dynamic Datarace Detection). The SPD3 algorithm supports a rich set of parallel constructs (including async, finish, future, and isolated). For async, finish, and future, SPD3 is guaranteed to be precise and sound for a given input. In the presence of isolated, SPD3 is precise but not sound. An experimental evaluation of SPD3 on programs that use async, finish, and isolated constructs shows that SPD3 performs well in practice, incurring an average (geometric mean) slowdown of 2.78× on a 16-core SMP system for a suite of 15 benchmarks. In contrast, past approaches such as Eraser and FastTrack incurred average overheads that exceeded 10× for the same benchmarks and platform. We believe that SPD3 brings us closer to the goal of building dynamic data race detectors that can be “always-on” when developing parallel applications.
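As a much-simplified, hypothetical sketch of the general shape of such a detector (not the SPD3 algorithm itself; all class and method names are invented for this illustration), the Java code below combines per-location shadow state with a may-happen-in-parallel query over a dynamic tree of finish/async scopes.

import java.util.HashSet;
import java.util.Set;

// Simplified sketch of a structured-parallelism race detector: a dynamic tree
// of FINISH/ASYNC scopes plus per-location shadow state (last writer, one
// reader). This illustrates the overall shape only, not the SPD3 algorithm.
public class RaceCheckSketch {
    enum Kind { FINISH, ASYNC, STEP }

    static final class Node {
        final Kind kind;
        final Node parent;
        Node(Kind kind, Node parent) { this.kind = kind; this.parent = parent; }
    }

    // Returns true if 'earlier' (a previously recorded access) and 'later'
    // (the current step) may execute in parallel: here, when earlier's
    // ancestor that is a child of their lowest common ancestor is an ASYNC scope.
    static boolean mayHappenInParallel(Node earlier, Node later) {
        if (earlier == later) return false;
        Set<Node> laterAncestors = new HashSet<>();
        for (Node n = later; n != null; n = n.parent) laterAncestors.add(n);
        // Walk up from 'earlier' to the lowest common ancestor, remembering
        // the child of that ancestor on the path from 'earlier'.
        Node child = null, n = earlier;
        while (!laterAncestors.contains(n)) { child = n; n = n.parent; }
        return child != null && child.kind == Kind.ASYNC;
    }

    // Per-location shadow state: a constant number of access records.
    static final class Shadow {
        Node lastWriter;
        Node aReader;

        void onWrite(Node step) {
            if ((lastWriter != null && mayHappenInParallel(lastWriter, step))
                    || (aReader != null && mayHappenInParallel(aReader, step))) {
                throw new IllegalStateException("data race detected on write");
            }
            lastWriter = step;
        }

        void onRead(Node step) {
            if (lastWriter != null && mayHappenInParallel(lastWriter, step)) {
                throw new IllegalStateException("data race detected on read");
            }
            aReader = step;  // real detectors keep a small, carefully chosen reader set
        }
    }

    // Tiny demo: finish { async { stepA }; stepB } -- stepA and stepB may run
    // in parallel, so two writes to the same location from them form a race.
    public static void main(String[] args) {
        Node finish = new Node(Kind.FINISH, null);
        Node async  = new Node(Kind.ASYNC, finish);
        Node stepA  = new Node(Kind.STEP, async);
        Node stepB  = new Node(Kind.STEP, finish);
        Shadow x = new Shadow();
        x.onWrite(stepA);
        x.onWrite(stepB);  // throws: the two writes may happen in parallel
    }
}

Keeping only a constant number of access records per location, as in this sketch, echoes how SPD3 avoids the worst-case linear space overhead per memory location mentioned above.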