------------------------

Harvey Mudd College
Computer Science 131
Programming Languages
Spring Semester 1999

Lecture 14 (3/22/99)

------------------------

------------------------

Before we get started with what I want to cover today, let me take a couple of minutes to run down a rough outline of what we'll be doing for the rest of the course.

------------------------

First, for the next couple of classes we'll talk about another use of higher-order programming: the notion of passing around what are called ``continuations.''

After that we'll spend a couple of classes looking at the idea of name scope in a variety of languages: in particular C, Pascal, Lisp, Scheme, and ML. All but Lisp use the same basic mechanism, lexical scope, but we'll see that it comes to its fullest use only in a higher-order language like Scheme or ML.

I'll also show you why, on the one hand, dynamic scope can be pretty useful in certain types of programming, but, on the other hand, it limits the usefulness of function arguments and the understandability of code.

We'll also talk briefly about Lazy Evaluation --- delaying the evaluation of arguments to functions so they are evaluated as late as possible, rather than before the function is called --- and streams, which are a way of adding an aspect of Lazy Evaluation to an otherwise eager language.

We'll spend the remaining lectures looking at the lambda calculus, which is a formal system developed in the 1930s as an attempt to model the notion of a computable function. It is, as I've said, the underlying model for all modern functional languages. It was by formalizing and studying variations of the lambda calculus that all these ideas, like lazy versus eager evaluation, were originally understood. A great deal of programming language research today is still described in terms of variations of the lambda calculus.

If time allows, we'll spend a couple of classes on exactly how you infer types for programs.

------------------------

Now, I want to spend today and next class talking about what are called continuations. This material is really not covered at all in the text, so these notes will be your only source on it.

Let's start with a simple (highly artificial) motivating example: Suppose you want to write a function that multiplies the elements of a list together. That's a simple recursion that we've seen a million times before:

fun product nil = 1
  | product (h::t) = h * (product t)

But suppose you know that the lists will be long, and further that there is a high probability that the list contains a 0 somewhere. You decide you want to write the code to deal efficiently with this special case, but you don't want to make the ordinary case much more costly.

Now in ML, we could just use an exception to handle this as in:

exception Zero;
local
   fun prod_aux nil = 1
     | prod_aux (0::_) = raise Zero
     | prod_aux (h::t) = h * (prod_aux t)
in
   fun product l = (prod_aux l) handle Zero => 0
end;

So if no zeroes are encountered, this function will behave in the usual way, but if the function ever sees a zero it will immediately exit with the proper result, without any extraneous computation.

This is a nice example of something we never really talked about: using an exception as a proper part of a computation, rather than as an error.

But let's suppose we were in a higher-order language that looks just like ML but doesn't have exceptions. Can we accomplish the same thing?

A first pass might be something like:

local
   fun prod_aux nil = 1
     | prod_aux (0::_) = 0
     | prod_aux (h::t) = h * (prod_aux t)
in
   fun product l = prod_aux l
end;

While this will stop when it reaches a zero, in the process of exiting it will compute all the useless intermediate products. So, if the list is [3,2,1,0,-1,-2,-3], on the way out it will compute the product 3*2*1*0.

Another attempt might be:

local
   fun prod_aux nil    result = result
     | prod_aux (0::_) result = 0
     | prod_aux (h::t) result = prod_aux t (h * result)
in
   fun product l = prod_aux l 1
end;

While this does manage to exit immediately, without additional computation, when it sees a zero (and has the additional benefit of being tail-recursive), it still builds exactly the same product on the way down to the zero. What we need is a way to delay building the result product on the way down.

What if, instead of actually doing the multiply, we just make a note to ourselves --- a promise to do the multiplication later? How can we represent such a promise? As a function waiting to be applied. The code looks like this:

local
   fun prod_aux nil    promise = promise 1
     | prod_aux (0::_) promise = 0
     | prod_aux (h::t) promise = 
          prod_aux t (fn result => 
                           h * (promise result))
in
   fun product l = prod_aux l (fn x => x)
end;

How do you read this? In the last case, instead of doing the multiplication by h now, we pass down a new promise: given the eventual result for the tail of the list, it will fulfill the old promise on that result and then multiply by h. If we reach the end of the list, we fulfill the accumulated promise, starting it off with the value 1. If we ever see a zero, we simply discard the promise and return 0 --- none of the delayed multiplications are ever performed.

Finally, the initial promise is just to return whatever we are given.

Try tracing the call product [5,4,3] to see how the promise grows.
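Here is a rough sketch of such a trace (the names k0, k1, k2, k3 for the constructed promises are mine, for readability; they don't appear in the code):

   product [5,4,3]
   = prod_aux [5,4,3] k0            (* k0 = fn x => x *)
   = prod_aux [4,3]   k1            (* k1 = fn r => 5 * (k0 r) *)
   = prod_aux [3]     k2            (* k2 = fn r => 4 * (k1 r) *)
   = prod_aux nil     k3            (* k3 = fn r => 3 * (k2 r) *)
   = k3 1
   = 3 * (k2 1)
   = 3 * (4 * (k1 1))
   = 3 * (4 * (5 * (k0 1)))
   = 3 * (4 * (5 * 1))
   = 60

Notice that no multiplication happens until the list is exhausted, so hitting a zero at any point would discard all of this pending work at once.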

Notice that prod_aux is tail-recursive, but that the constructed promise is not. We can make them both tail-recursive simply by changing the last case of prod_aux so the function reads:

local
   fun prod_aux nil    promise = promise 1
     | prod_aux (0::_) promise = 0
     | prod_aux (h::t) promise = 
          prod_aux t (fn result => promise (h * result))
in
   fun product l = prod_aux l (fn x => x)
end;

Try tracing the call product [5,4,3] again with this version.
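A sketch of the trace for this version (again with my own names k0, k1, k2, k3 for the promises):

   product [5,4,3]
   = prod_aux [5,4,3] k0            (* k0 = fn x => x *)
   = prod_aux [4,3]   k1            (* k1 = fn r => k0 (5 * r) *)
   = prod_aux [3]     k2            (* k2 = fn r => k1 (4 * r) *)
   = prod_aux nil     k3            (* k3 = fn r => k2 (3 * r) *)
   = k3 1
   = k2 3
   = k1 12
   = k0 60
   = 60

This time each promise does its multiplication before invoking the previous promise, so every call --- in prod_aux and in the promises alike --- is a tail call.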

In technical terms the ``promise'' we have constructed is called a continuation, because it represents the future of the current computation. It is often abbreviated k. Now, even if we are not interested in quick aborts of a function, Continuation Passing Style (CPS) conversion is useful, as it can turn virtually any function into a tail-recursive one. Modern functional-language compilers generally rely on an initial CPS conversion of all the code. This idea has reached its current pinnacle in the SML-NJ compiler, which relies heavily on CPS conversion. Here are the CPS versions of a couple of familiar functions:

local
   fun fact_cps 0 k = k 1
     | fact_cps n k = 
          fact_cps (n - 1) (fn res => k (n * res))
in
   fun fact n = fact_cps n (fn x => x)
end;
 
local
   fun app_cps nil    l2 k = k l2
     | app_cps (h::t) l2 k = 
          app_cps t l2 (fn res => k (h::res))
in
   fun append l1 l2 = app_cps l1 l2 (fn x => x)
end;

You should try tracing a couple of calls to these on your own.

Now, so far we have talked only about explicit continuations that we construct and pass around. But every value in SML has an implicit continuation, which is a function that represents the future plans the evaluator has for that value.

Consider the expression (3 - 4). Let's look at each of the component values and figure out what its continuation is.

3:
Well, the system plans to take the 3 and subtract 4 from it and return the result. Thus the continuation of 3 is fn x => x - 4.

4:
Similarly, the system plans to take the 4 and subtract it from 3. So its continuation is fn x => 3 - x.

-:
Now, what is the plan for the minus operator? Well, the evaluator is just going to apply it to 3 and 4. So, its continuation is fn f => f (3,4).

(3 - 4):
Finally, what are the plans for the whole expression? Let's assume it's just going to be returned; then its continuation is just fn x => x.

Now, if the continuation for the whole expression were more complex, then the continuation of each of the parts would reflect that. So, if the whole result were going to be printed, its continuation would be fn x => print x, while the continuation of the 3 would be fn x => print (x - 4).
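One way to check your understanding: applying each subexpression's continuation to that subexpression's value should reproduce the value of the whole expression. Assuming the whole expression (3 - 4) is just being returned, we can write the continuations out in SML (the names k3, k4, kop are mine):

   val k3  = fn x => x - 4;      (* continuation of 3 *)
   val k4  = fn x => 3 - x;      (* continuation of 4 *)
   val kop = fn f => f (3, 4);   (* continuation of - *)

   k3 3;         (* ~1 *)
   k4 4;         (* ~1 *)
   kop (op -);   (* ~1 *)

All three applications yield ~1, the value of (3 - 4).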

SML-NJ (as with many modern functional languages) allows you to access this implicit continuation by using the function callcc, which stands for ``Call with Current Continuation.'' The function is contained in the structure SMLofNJ.Cont. It allows you to use continuations for breakpoints and the like in a very simple manner, and it can be seen as a generalization of exceptions. For instance, the original list product example would be written using callcc as:

 
local
   fun prod_aux nil    exit_k = 1
     | prod_aux (0::_) exit_k = SMLofNJ.Cont.throw exit_k 0
     | prod_aux (h::t) exit_k = h * (prod_aux t exit_k)
in
   fun product l = SMLofNJ.Cont.callcc (fn exit_k => prod_aux l exit_k)
end;

The function throw is used to invoke a continuation and give it its argument.

We could also rewrite this using let, to avoid having to pass around the exit continuation, making the code look more normal:

 
open SMLofNJ.Cont;

fun product l =
   callcc (fn exit_k =>
      let
         fun prod_aux nil    = 1
           | prod_aux (0::_) = throw exit_k 0
           | prod_aux (h::t) = h * (prod_aux t)
      in
         prod_aux l
      end)
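A sketch of how this behaves in an SML-NJ session (exact output formatting may vary):

   - product [5,4,3];
   val it = 60 : int
   - product [5,0,3];
   val it = 0 : int

In the second call, prod_aux throws 0 directly to exit_k, so control returns from callcc immediately, without performing any of the pending multiplications.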

------------------------

This page copyright ©1999 by Joshua S. Hodas. It was built on a Macintosh. Last rebuilt on Monday, March 22, 1999 at 1:00 PM.
http://cs.hmc.edu/~hodas/courses/cs131/lectures/lecture14.html