Assignment 10 Solution
Program Analysis for Power Computing

This problem asks you to establish the correctness of two different methods for finding powers, to measure their empirical running times under two different data representations, and to compare these measured times with the big-O running times that you obtain analytically.

 The algorithms we are going to contrast are the straightforward ("regular") power-computing algorithm:
 

        static double regDoublePower(long base, long N)
          {
          double result = 1.0;

          while (N > 0)
            {
            result = result * base;
            --N;
            }
          return result;
          }

 

vs. a power-computing algorithm called the "Russian Peasants" (rp) method:
 

        static double rpDoublePower(long base, long N)
          {
          double sq = base;
          double result = 1.0;

          while (N > 0)
            {
            result = (N%2 == 1 ? result * sq : result);
            N = N/2;
            sq = sq * sq;
            }
          return result;
          }
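For concreteness, here is a trace of rpDoublePower(7, 13); since 13 is 1101 in binary, result picks up the current value of sq exactly at the 1 bits. Each line shows N at the loop test, followed by the state after that iteration's body runs:

        N = 13 (odd):   result = 7             sq = 49          N becomes 6
        N = 6  (even):  result = 7             sq = 2401        N becomes 3
        N = 3  (odd):   result = 16807         sq = 5764801     N becomes 1
        N = 1  (odd):   result = 96889010407   sq = 5764801^2   N becomes 0

The loop exits with result == 7^13 after only four iterations, roughly log2(13) + 1.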

 

 

Problem 1:

 

Prove, using a loop invariant, that regDoublePower is partially correct with respect to:

Input assertion: N == N0 ∧ N0 ≥ 0

Output assertion: result == b^N0 (throughout, b denotes base).

 

Solution 1:

 

As the loop invariant, try: result == b^(N0 - N) ∧ N ≥ 0

 

Use transition induction to establish the invariant:

 

Basis: The first time the loop test is reached, result == 1 and N == N0 by initialization. Hence we have to show 1 == b^(N0 - N0) ∧ N0 ≥ 0. But b^(N0 - N0) == b^0, which is 1, and N0 ≥ 0 by the given pre-condition.

 

Induction step: Assume that result == b^(N0 - N) ∧ N ≥ 0 when the loop test is reached in general, to show that result′ == b^(N0 - N′) ∧ N′ ≥ 0, where primes designate the next value, if there is a next time.

 

For there to be a next time, the loop test must be satisfied, so we know N > 0. We also know the loop semantics: result′ == result * b and N′ == N - 1. Therefore, it is sufficient to show result * b == b^(N0 - (N-1)) ∧ N-1 ≥ 0. By the inductive assumption, the left-hand side of the equation is b^(N0 - N) * b, which is the same as b^(N0 - N + 1), which is algebraically the same as the right-hand side. The inequality N-1 ≥ 0 follows from N > 0 and the fact that N must be an integer.

 

To conclude, we show that the invariant together with the negation of the loop test imply the overall post-condition. The negation of the test N > 0 is N ≤ 0. When we couple the latter with the invariant conjunct N ≥ 0, we get N == 0 on exit. Substituting 0 for N in the invariant result == b^(N0 - N), we get result == b^N0 as desired.
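As a sanity check (not part of the proof itself), the invariant can also be monitored at run time with Java assert statements. The sketch below uses Math.pow to stand in for b^(N0 - N), so it is only reliable for small operands where double arithmetic is exact; enable assertions with java -ea.

        // Sketch: regDoublePower instrumented with the invariant from Solution 1.
        static double regDoublePowerChecked(long base, long N)
          {
          final long N0 = N;            // remember the initial exponent
          double result = 1.0;

          while (N > 0)
            {
            // invariant at the loop test: result == b^(N0 - N)  ∧  N ≥ 0
            assert result == Math.pow(base, N0 - N) && N >= 0;
            result = result * base;
            --N;
            }
          assert N == 0 && result == Math.pow(base, N0);   // post-condition
          return result;
          }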

 

Problem 2:

 

Prove, using a loop invariant, that rpDoublePower is partially correct with respect to:

Input assertion: N == N0 ∧ N0 ≥ 0

Output assertion: result == b^N0.

 

In this activity, you may wish to make use of the fact that:

                  N%2 == 1 implies N/2 == (N-1)/2, because / is integer (truncating) division (e.g., 7/2 == 3 == (7-1)/2).

 

Solution 2:

 

As the loop invariant, try: result * sq^N == b^N0 ∧ N ≥ 0

 

Use transition induction to establish the invariant:

 

Basis: The first time the loop test is reached, result == 1, N == N0, and sq == b, by initialization. Hence we have to show 1 * b^N0 == b^N0 ∧ N0 ≥ 0. But the equality is an algebraic identity, and N0 ≥ 0 by the given pre-condition.

 

Induction step: Assume that result * sq^N == b^N0 ∧ N ≥ 0 when the loop test is reached in general, to show that result′ * sq′^N′ == b^N0 ∧ N′ ≥ 0, where, as before, primes designate the next value, if there is a next time.

 

For there to be a next time, the loop test must be satisfied, so we know N > 0. We also know the loop semantics: result′ == (N%2 == 1 ? result * sq : result), sq′ == sq^2, and N′ == N/2. Therefore, it is sufficient to show (N%2 == 1 ? result * sq : result) * (sq^2)^(N/2) == b^N0 ∧ N/2 ≥ 0. The inequality follows from N > 0. To show the equality, consider the cases N even vs. N odd separately:

 

N even:

                  N%2 == 1 is false, so we need to show result * (sq^2)^(N/2) == b^N0. But since N is even, (sq^2)^(N/2) == sq^N, so this is result * sq^N == b^N0, which is the inductive assumption.

N odd:

                  N%2 == 1 is true, so we need to show result * sq * (sq^2)^(N/2) == b^N0. But for N odd, N/2 == (N-1)/2 under integer division (by the hint given), so we need to show result * sq * (sq^2)^((N-1)/2) == b^N0. This is algebraically equivalent to result * sq * sq^(N-1) == b^N0, which is equivalent to result * sq^N == b^N0, the inductive assumption again.

 

To conclude, we show that the invariant together with the negation of the loop test imply the overall post-condition. The negation of the test N > 0 is N ≤ 0. When we couple the latter with the invariant conjunct N ≥ 0, we get N == 0 on exit. Substituting 0 for N in the invariant result * sq^N == b^N0, and noting that sq^0 == 1, we get result == b^N0 as desired.
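The same kind of run-time check works here. Again, this is only a sketch, exact only for small operands where double arithmetic is exact (run with java -ea):

        // Sketch: rpDoublePower instrumented with the invariant from Solution 2.
        static double rpDoublePowerChecked(long base, long N)
          {
          final long N0 = N;            // remember the initial exponent
          double sq = base;
          double result = 1.0;

          while (N > 0)
            {
            // invariant at the loop test: result * sq^N == b^N0  ∧  N ≥ 0
            assert result * Math.pow(sq, N) == Math.pow(base, N0) && N >= 0;
            result = (N%2 == 1 ? result * sq : result);
            N = N/2;
            sq = sq * sq;
            }
          assert N == 0 && result == Math.pow(base, N0);   // post-condition
          return result;
          }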

 

Problem 3:

 

We will examine four methods for finding powers of some integer base (b): each of the two algorithms above, once using double and once using BigInteger. The running time of each will be expressed in terms of the size of the exponent, i.e., the power, N. For simplicity, we will always use the same base, 7, and we will always use values of the power N that are 2^i for some i. That is, the powers will themselves be powers of 2.

 

What to do:

  1. Analyze the two basic algorithms (regular and Russian Peasants methods) and find their big-O running times in terms of N (the power to which the base is raised), using the number 7 as the value of b in both cases.

Solution:

         For double-precision floating point representation, we assume that the
         multiplication time is constant regardless of the operands. In this case, we have:

                  Regular algorithm: O(N), since there are N iterations and the
                  loop body is constant-time.

                  Russian Peasants algorithm: O(log N), since there are log N
                  iterations and the loop body is constant-time.


         For BigInteger representation, we assume that the multiplication time is
         proportional to the product of the lengths (numbers of digits) of the operands.
         This corresponds to the obvious way to multiply large integers, as we did it
         in grade school. In this case, we have:

                  Regular algorithm: There are N iterations, and the loop body itself
                  requires time O(N). This is because we are multiplying b in each
                  iteration by increasingly longer multipliers, namely the value of
                  result, whose length increases linearly as the algorithm progresses.
                  So the overall time taken is proportional to (1 + 2 + 3 + … + N),
                  which we know to be O(N^2).

                  Russian Peasants algorithm: There are log N iterations. The run-time
                  of the loop body is dominated by the multiplication sq = sq*sq;. The
                  number of digits in sq doubles each iteration. Thus the overall time
                  taken is bounded by some constant c times
                  (1*1 + 2*2 + 4*4 + 8*8 + … + N*N), which is also O(N^2), since the
                  last term dominates the sum of the preceding terms.

                  Note that we could cut the execution time down substantially by not
                  computing the final sq = sq*sq;, since once N becomes 0, the value
                  of sq is no longer needed. A loop break could be used for this
                  purpose, for example (see the sketch below).
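For reference, a BigInteger version of the Russian Peasants method might look like the following sketch (the actual code in Power.java may differ); it includes the early exit just described, so the final squaring is skipped:

        // Sketch only; requires: import java.math.BigInteger;
        static BigInteger rpBigPower(long base, long N)
          {
          BigInteger sq = BigInteger.valueOf(base);
          BigInteger result = BigInteger.ONE;

          while (N > 0)
            {
            if (N%2 == 1)
              result = result.multiply(sq);
            N = N/2;
            if (N == 0)
              break;                    // skip the final, unused sq*sq
            sq = sq.multiply(sq);
            }
          return result;
          }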

Discussion

The surprising result for BigInteger is that Russian Peasants doesn't improve the asymptotic performance over the regular method: both are O(N^2). However, it may still be better in terms of the multiplicative constant.

It is also possible to multiply in less than time proportional to the product of the operand lengths. One such method is the Karatsuba method, which uses divide-and-conquer to achieve O(N^log2(3)), where log2(3) is approximately 1.585. There are still faster methods that work for very large numbers of digits.
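To give the flavor of Karatsuba's method, here is a minimal sketch for nonnegative BigInteger operands (illustrative only; modern JDKs' own BigInteger.multiply already switches to Karatsuba for large operands). It splits each operand at m bits and uses the identity x*y == ac*2^(2m) + ((a+b)(c+d) - ac - bd)*2^m + bd, so only three recursive multiplications are needed instead of four:

        // Sketch of Karatsuba multiplication, assuming nonnegative operands.
        static BigInteger karatsuba(BigInteger x, BigInteger y)
          {
          int n = Math.max(x.bitLength(), y.bitLength());
          if (n <= 64)
            return x.multiply(y);       // small enough: multiply directly
          int m = n/2;
          BigInteger a = x.shiftRight(m);              // high half of x
          BigInteger b = x.subtract(a.shiftLeft(m));   // low half of x
          BigInteger c = y.shiftRight(m);              // high half of y
          BigInteger d = y.subtract(c.shiftLeft(m));   // low half of y
          BigInteger ac  = karatsuba(a, c);
          BigInteger bd  = karatsuba(b, d);
          BigInteger mid = karatsuba(a.add(b), c.add(d)).subtract(ac).subtract(bd);
          return ac.shiftLeft(2*m).add(mid.shiftLeft(m)).add(bd);
          }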

  2. The file /cs/cs60/a/10/Power.java already contains these four methods. It also contains a main method that creates a table of running times for each method. Describe how well the empirical evidence supports the big-O running times you found in step 1. Justify based on reasonable assumptions that you make about the individual operation times. For example, it is reasonable to regard floating point (double) multiplication as taking constant time. What is a corresponding reasonable assumption about BigInteger multiplication?
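Power.java is not reproduced here, but a harness in its spirit might look like the sketch below (class and loop bounds are hypothetical). It times one method for N = 2^i and prints the ratios T/log N and T/N so the tightest bound can be read off; the BigInteger methods would be timed the same way, adding a T/N^2 column.

        // Hypothetical timing harness; the real Power.java may differ.
        public class PowerTiming
          {
          static double rpDoublePower(long base, long N)
            {
            double sq = base;
            double result = 1.0;
            while (N > 0)
              {
              result = (N%2 == 1 ? result * sq : result);
              N = N/2;
              sq = sq * sq;
              }
            return result;
            }

          public static void main(String[] args)
            {
            System.out.printf("%4s %12s %12s %12s%n", "i", "T (ns)", "T/log N", "T/N");
            for (int i = 10; i <= 24; i++)
              {
              long N = 1L << i;                   // N = 2^i, so log2 N == i
              long t0 = System.nanoTime();
              rpDoublePower(7, N);
              long t = System.nanoTime() - t0;
              System.out.printf("%4d %12d %12.1f %12.6f%n",
                                i, t, t/(double)i, t/(double)N);
              }
            }
          }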

Solution:

For Regular Double, the tightest upper bound appears to be O(N): all values of T/N are around 6.875E-02 or less. However, O(log N) is clearly not also an upper bound.

For Russian Peasants Double, O(N) is an upper bound, although O(log N) is too, and is tighter.

Both of the above are consistent with our white-box analysis.

For Regular BigInteger, the tightest upper bound appears to be O(N^2). This is the only option that clearly upper-bounds the run-time; all the other ratios are increasing.

For Russian Peasants BigInteger, the tightest upper bound appears to be O(N^2). This is the only option that clearly upper-bounds the run-time; all the other ratios are gradually increasing.

Both of the above are also consistent with our white-box analysis.