Harvey Mudd College
Computer Science 60
Assignment 10
Due Tuesday, December 10, 2002

Program Analysis for Power Computing

This problem asks you to establish the correctness of two different power-computing methods, to measure their empirical running times under two different data assumptions, and to compare these measured times with the big-O running times that you obtain analytically.

 The algorithms we are going to contrast are the straightforward ("regular") power-computing algorithm:
 

        static double regDoublePower(long base, long N)
          {
          double result = 1.0;

          while (N > 0)
            {
            result = result * base;
            --N;
            }
          return result;
          }
 

vs. a power-computing algorithm called the "Russian Peasants" (rp) method:
 

        static double rpDoublePower(long base, long N)
          {
          double sq = base;
          double result = 1.0;

          while (N > 0)
            {
            result = (N%2 == 1 ? result * sq : result);
            N = N/2;
            sq = sq * sq;
            }
          return result;
          }
 

The Russian Peasants algorithm originally concerned multiplying two numbers together rather than raising one to the power of another, but the underlying principle is the same.
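To illustrate that underlying principle, here is a sketch of the original multiplication form, which computes a product using only halving, doubling, and addition. (The helper rpMultiply and its enclosing class are illustrative, not part of the assignment code.)

```java
// Sketch of Russian Peasants *multiplication*: halve one factor,
// double the other, and accumulate the doubled factor whenever the
// halved factor is odd.
public class RpMultiplyDemo
  {
  static long rpMultiply(long a, long b)
    {
    long result = 0;

    while (b > 0)
      {
      if (b % 2 == 1) result = result + a;  // keep a copy of a when b is odd
      b = b / 2;                            // halve one factor
      a = a + a;                            // double the other
      }
    return result;
    }

  public static void main(String[] args)
    {
    System.out.println(rpMultiply(13, 12));  // prints 156
    }
  }
```

Replacing "add a" with "multiply by sq" and "double a" with "square sq" turns this loop into rpDoublePower above.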

 

Problem 1:

 

Prove, using a loop invariant, that regDoublePower is partially correct with respect to:

Input assertion: N == N0 ∧ N0 > 0

Output assertion: result == b^N0.

 

Problem 2:

 

Prove, using a loop invariant, that rpDoublePower is partially correct with respect to:

Input assertion: N == N0 ∧ N0 > 0

Output assertion: result == b^N0.

 

In this activity, you may wish to make use of the fact that:

                  N%2 == 1 implies N/2 == (N-1)/2 because / is integer (truncating) division.
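Before writing the proof, it can help to test a candidate invariant empirically. The sketch below (class and method names are my own, and the candidate invariant result * sq^N == base^N0 is one natural choice, not the assignment's stated answer) re-runs the Russian Peasants loop in exact BigInteger arithmetic and checks the candidate at the top of every iteration:

```java
import java.math.BigInteger;

// Empirically check the candidate loop invariant
//   result * sq^N == base^N0
// at the top of each iteration, using exact arithmetic so floating-point
// rounding cannot cloud the check. Exponents are cast to int for pow().
public class InvariantCheck
  {
  static BigInteger rpPowerChecked(long b, long N)
    {
    BigInteger base = BigInteger.valueOf(b);
    long N0 = N;
    BigInteger sq = base;
    BigInteger result = BigInteger.ONE;

    while (N > 0)
      {
      BigInteger lhs = result.multiply(sq.pow((int) N));
      if (!lhs.equals(base.pow((int) N0)))
        throw new AssertionError("invariant broken at N = " + N);

      if (N % 2 == 1) result = result.multiply(sq);
      N = N / 2;
      sq = sq.multiply(sq);
      }
    return result;  // at exit N == 0, so the invariant gives result == base^N0
    }

  public static void main(String[] args)
    {
    System.out.println(rpPowerChecked(7, 13));  // checks pass; prints 7^13
    }
  }
```

An empirical check like this is not a proof, but a candidate that survives it is a good starting point for the inductive argument.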

 

Problem 3:

 

We will examine four methods for finding powers of some integer base (b). The running time of each will be expressed in terms of the size of the exponent, i.e., the power, N. For simplicity, we will always use the same base, 7, and we will always use values of the power N that are 2^i for some i. That is, the powers will themselves be powers of 2.

 

The two methods above use doubles; they will be compared to the same methods using exact arithmetic with Java's BigInteger class. First, the ordinary method for computing powers -- this time with BigIntegers:

 

        static BigInteger regBigIntPower(long b, long N)
          {
          BigInteger base = BigInteger.valueOf(b);
          BigInteger result = BigInteger.ONE;

          while (N > 0)
            {
            result = result.multiply(base);
            --N;
            }
          return result;
          }
 

and the Russian Peasants version with BigInteger:

        static BigInteger rpBigIntPower(long b, long N)
          {
          BigInteger sq = BigInteger.valueOf(b);
          BigInteger result = BigInteger.ONE;

          while (N > 0)
            {
            if (N%2 == 1) result = result.multiply(sq);
            N = N/2;
            sq = sq.multiply(sq);
            }
          return result;
          }
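As a quick sanity check before timing, the exact methods can be cross-checked against the library's BigInteger.pow. (This sketch inlines a copy of rpBigIntPower; the class name and test range are my own, not part of Power.java.)

```java
import java.math.BigInteger;

// Cross-check the Russian Peasants exact-power method against the
// library's BigInteger.pow for base 7 and a range of exponents.
public class PowerCheck
  {
  static BigInteger rpBigIntPower(long b, long N)
    {
    BigInteger sq = BigInteger.valueOf(b);
    BigInteger result = BigInteger.ONE;

    while (N > 0)
      {
      if (N % 2 == 1) result = result.multiply(sq);
      N = N / 2;
      sq = sq.multiply(sq);
      }
    return result;
    }

  public static void main(String[] args)
    {
    for (long N = 0; N <= 64; ++N)
      if (!rpBigIntPower(7, N).equals(BigInteger.valueOf(7).pow((int) N)))
        throw new AssertionError("mismatch at N = " + N);
    System.out.println("all powers agree");
    }
  }
```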

What to do:

  1. Analyze the two basic algorithms (the regular and the Russian Peasants method) and find their big-O running times in terms of N (the power to which the base is raised), using the number 7 as the value of b in both cases.

  2. The file /cs/cs60/a/10/Power.java already contains these four methods. It also contains a main method that creates a table of running times for each method. Describe how well the empirical evidence supports the big-O running times you found in step 1. Justify based on reasonable assumptions that you make about the individual operation times. For example, it is reasonable to regard floating point (double) multiplication as taking constant time. What is a corresponding reasonable assumption about BigInteger multiplication?
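For reference, a timing loop of the kind the main in Power.java builds its table from might look like the sketch below. (The class and method names here are my own; the real harness is in /cs/cs60/a/10/Power.java, and System.nanoTime is one way to take the measurements.)

```java
import java.math.BigInteger;

// Sketch of a timing loop over exponents N = 2^i. Each row reports the
// wall-clock time for one call to the ordinary BigInteger power method.
public class TimingSketch
  {
  static BigInteger regBigIntPower(long b, long N)
    {
    BigInteger base = BigInteger.valueOf(b);
    BigInteger result = BigInteger.ONE;

    while (N > 0)
      {
      result = result.multiply(base);
      --N;
      }
    return result;
    }

  public static void main(String[] args)
    {
    for (int i = 0; i <= 14; ++i)
      {
      long N = 1L << i;                       // N = 2^i
      long start = System.nanoTime();
      regBigIntPower(7, N);
      long elapsed = System.nanoTime() - start;
      System.out.println("N = " + N + "  time = " + elapsed + " ns");
      }
    }
  }
```

When reading such a table, remember that the cost of a BigInteger multiply grows with the number of digits in its operands, so the per-iteration cost is not constant the way it is for doubles.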

What to hand in:

Everything is on paper: proofs, tables of runtime, and analysis.

Reading

Proving correctness is covered in Chapter 10 of the text. Complexity and its analysis is covered in Chapter 11 of the text.