I'm very excited to be co-teaching CS 70: Data Structures and Program Development with my new colleague, Prof. Julie Medero. We are holding joint office hours (with at least one of us present) on Mondays 3–5pm, Tuesdays 4–5:30pm, Wednesdays 3–5pm, and Fridays 1:30–3:30pm. You can also drop by at other times to see if our office doors are open.
Multicore processors are ubiquitous in desktops, laptops, and even cell phones, yet they are still too hard to program. The most common approach uses “locks,” which are tricky and error-prone: too few locks lead to race conditions, while too many locks, or locks acquired out of order, lead to deadlock. Newer techniques like Transactional Memory can be awkward to use.
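To make the lock-ordering pitfall concrete, here is a minimal Python sketch (the `Account` class and `transfer` function are hypothetical examples, not from any project described here). If two threads acquired the same pair of locks in opposite orders, each could wait forever on the lock the other holds; acquiring both locks in one fixed global order avoids that deadlock.

```python
import threading

class Account:
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance
        self.lock = threading.Lock()

def transfer(src, dst, amount):
    # Acquire both locks in a fixed global order (here, by account name),
    # so two concurrent opposite-direction transfers cannot deadlock.
    first, second = sorted((src, dst), key=lambda acct: acct.name)
    with first.lock:
        with second.lock:
            src.balance -= amount
            dst.balance += amount

x = Account("x", 100)
y = Account("y", 100)

t1 = threading.Thread(target=transfer, args=(x, y, 30))
t2 = threading.Thread(target=transfer, args=(y, x, 10))
t1.start(); t2.start()
t1.join(); t2.join()
print(x.balance, y.balance)  # 80 120
```

Note that the discipline lives entirely in the programmer's head: nothing stops someone from writing `with dst.lock: with src.lock: ...` elsewhere, which is exactly why locks are error-prone.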
Observationally Cooperative Multithreading (OCM) is an approach to parallel programming being investigated by Professor Melissa O'Neill, me, and an impressive series of undergraduates (27 and counting).
We start with classical Cooperative Multithreading (CM), where only one thread runs at a time, and each thread gets the whole machine to itself until it explicitly yields control to the next. CM is a very convenient programming model; if threads are only interrupted when they choose to be, then locks and the problems that go with them can largely be avoided.
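The run-until-you-yield discipline of classical CM can be sketched with Python generators (a hypothetical `run_cooperatively` scheduler for illustration only, not our actual implementation). Each "thread" runs atomically until it executes an explicit `yield`, so no locks are needed around the work between yields.

```python
from collections import deque

def run_cooperatively(threads):
    """Round-robin scheduler: each 'thread' (a generator) runs
    uninterrupted until it explicitly yields control."""
    ready = deque(threads)
    while ready:
        thread = ready.popleft()
        try:
            next(thread)          # run until the next explicit yield
            ready.append(thread)  # not finished: reschedule it
        except StopIteration:
            pass                  # thread finished

log = []

def worker(name, steps):
    for i in range(steps):
        log.append((name, i))  # runs atomically between yields
        yield                  # explicitly hand control to the next thread

run_cooperatively([worker("A", 2), worker("B", 2)])
print(log)  # [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

Because interruption points are exactly the `yield`s the programmer wrote, the interleaving is deterministic and easy to reason about.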
OCM lets us write programmer-friendly CM code yet still take advantage of multiple processors. Code runs “as if” only one thread were running at a time. But when two threads are independent, an OCM system can run them simultaneously to get the same answer, but faster!
To learn more about OCM, I recommend starting with this (unpublished) introduction to Observationally Cooperative Multithreading.
Errors in software become more expensive and dangerous each year. Testing is helpful, but proofs are the only way to truly guarantee that an algorithm is correct, or that a cryptographic protocol is secure, or that a programming language is safe.
Unfortunately, it’s hard to get all the details of a big proof right; even careful reasoners make mistakes, and even careful readers miss them. Automation seems a natural solution, and there exist “proof assistant” systems to help people construct and verify proofs, using special-purpose computer languages. In the end, though, we have a proof that we are sure is correct but that is almost unreadable if you’re not a computer!
I want a system that can verify proofs that were written for humans to read and understand, the kind of proofs you see in textbooks and research papers. Essentially, I want to go beyond spell-checking and grammar-checking to logic-checking.
I plan to start investigating this (with the help of an undergraduate research student) in Summer 2014.
Type Theory blurs the lines between mathematical proofs and executable programs, and is the basis of most systems for creating computer-checked proofs. Such systems distinguish between definitional/judgmental equality—two ways of writing what is obviously the same thing—and provable/propositional equality—equalities that must be justified by a proof. The dividing line is not always clear. For example, if rev is a function for reversing lists, then obviously rev(rev(l)) == l; are rev(rev(l)) and l just two ways of describing the same list, or is simplifying rev(rev(l)) to l a step worth noting?
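As a concrete illustration (a sketch in Lean 4, chosen here only as a familiar example of a type-theory-based proof assistant): in Lean, `l.reverse.reverse = l` is propositional rather than definitional, so `rfl` alone does not prove it and an inductive argument is required.

```lean
-- The two sides do not reduce to a common normal form, so this
-- equality must be *proved* rather than accepted by computation.
theorem rev_rev {α : Type} (l : List α) : l.reverse.reverse = l := by
  induction l with
  | nil => rfl            -- [].reverse.reverse reduces to [] definitionally
  | cons x xs ih =>
      -- (x :: xs).reverse.reverse = (xs.reverse ++ [x]).reverse
      --                           = x :: xs.reverse.reverse
      --                           = x :: xs   (by the induction hypothesis)
      simp [ih]
```

A system with a more flexible notion of definitional equality might instead let this simplification happen silently, which is exactly the design space described above.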
Most computer-assisted proof systems have some fixed definition of definitional equality built in; everything else requires proof. Professor Andrej Bauer and I are looking at more flexible systems where the definition of definitional equality can change as needed, in the context of a particular system called Homotopy Type Theory (HoTT).
The RZ system translates specifications for mathematical structures into specifications for code. It answers questions like, "What would a programmer need to implement in order to get a complete and correct implementation of the mathematical (“real”) real numbers [ or of a compact metric space, or a space of smooth functions, ...]?"
From an educational perspective, it also provides explanations (in terms a programmer can understand) of the non-classical distinctions that arise in constructive mathematics. For example, it can be used to explain the difference between a finite set and a set that is not infinite, and why there is a "size" function only for the former.
I'm glad you asked! I put together an implementation as an offshoot of the OCM research project.
Octet locks are interesting for concurrent programming because threads aren't required to unlock when they're done; this can be a big speed advantage for resources that are accessed mostly by a single thread. Further, Octet locks can be acquired in any order without fear of deadlock (though while acquiring one lock you might lose others).
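The ownership-retention idea can be illustrated with a toy Python sketch. This is emphatically *not* the real Octet protocol (which coordinates ownership transfers without any global mutex); the `OwnershipLock` class below is a hypothetical simplification that only shows why same-thread reacquisition is cheap.

```python
import threading

class OwnershipLock:
    """Toy illustration of Octet-style ownership: the acquiring thread
    becomes the owner and *keeps* ownership after it finishes, so
    repeated acquisitions by the same thread need no handoff at all.
    (A real Octet implementation avoids this internal mutex.)"""
    def __init__(self):
        self._state = threading.Lock()  # protects the ownership metadata
        self.owner = None

    def acquire(self):
        """Returns True if an ownership transfer was needed."""
        me = threading.get_ident()
        with self._state:
            if self.owner == me:
                return False       # fast path: already owned by this thread
            self.owner = me        # slow path: transfer ownership
            return True

lock = OwnershipLock()
first = lock.acquire()    # this thread takes ownership (a transfer)
second = lock.acquire()   # reacquisition: fast path, no transfer
print(first, second)      # True False
```

For data touched almost exclusively by one thread, nearly every acquisition takes the fast path, which is the source of the speed advantage mentioned above.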
Teaching occasionally requires some tedious tasks, so I've thrown together small utility programs (for Mac OS X).
Some of my favorites appear on the official SIGCSE Nifty Assignments web page for Random Art. I've also put together a PDF poster containing nearly all the student-generated pictures that I've collected while teaching CS 131: Programming Languages.
Of course there's also the original Random Art webpage, which inspired the CS 131 assignment.