Everything is on paper: proofs and analyses. Accompany your answers with a problem summary; don't just write down answers.
Establish the correctness of, and measure the empirical running time of, two different methods for finding powers under two different data assumptions. Then compare these measured times with the big-O running times that you obtain analytically. No code needs to be written. Please type your solution and turn it in on paper.
The algorithms we are going to
contrast are the straightforward ("regular") power-computing
algorithm:
static double regDoublePower(long b, long N)
{
    double result = 1.0;
    while (N > 0)
    {
        result = result * b;
        --N;
    }
    return result;
}
vs. a power-computing algorithm
called the "Russian Peasants" (rp) method:
static double rpDoublePower(long b, long N)
{
    double s = b;
    double result = 1.0;
    while (N > 0)
    {
        result = (N%2 == 1 ? result * s : result);
        N = N/2;
        s = s * s;
    }
    return result;
}
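As a quick sanity check (illustrative only, not part of the assignment), the two methods can be run side by side. The class name and test values below are assumptions chosen for demonstration; 7^16 is small enough that every intermediate product is exactly representable in a double, so both methods must agree exactly.

```java
public class PowerCheck {
    static double regDoublePower(long b, long N) {
        double result = 1.0;
        while (N > 0) {
            result = result * b;
            --N;
        }
        return result;
    }

    static double rpDoublePower(long b, long N) {
        double s = b;
        double result = 1.0;
        while (N > 0) {
            result = (N % 2 == 1 ? result * s : result);
            N = N / 2;
            s = s * s;
        }
        return result;
    }

    public static void main(String[] args) {
        // 7^16 = 33232930569601 is below 2^53, so every intermediate
        // value is exact in a double and the methods must agree exactly.
        double reg = regDoublePower(7, 16);
        double rp  = rpDoublePower(7, 16);
        if (reg != rp || reg != 33232930569601.0)
            throw new AssertionError("methods disagree");
        System.out.println("7^16 = " + reg);
    }
}
```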
The Russian Peasants algorithm originally concerned
multiplying two numbers
together rather than raising one to the power of another, but the underlying digital
principle is the same:
exploit the binary representation of one of the numbers.
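For context, the original multiplication version can be sketched as follows. This helper is hypothetical, not part of the assignment code: it halves one factor while doubling the other, adding the doubled value whenever the halved factor is odd, that is, whenever the corresponding binary digit is 1.

```java
public class RpMultiply {
    // Sketch of the original Russian Peasants multiplication, shown only
    // to illustrate the shared idea of walking the binary digits of one
    // operand. Not part of the assignment code.
    static long rpMultiply(long a, long b) {
        long result = 0;
        while (b > 0) {
            if (b % 2 == 1) result += a; // low bit of b is 1: add a * 2^k
            b = b / 2;                   // shift b right one bit
            a = a + a;                   // double a for the next bit
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(rpMultiply(13, 11)); // prints 143
    }
}
```

Note how the loop structure mirrors rpDoublePower: halving one operand, "doubling" the other (squaring, in the power version), and conditionally accumulating on odd values.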
Problem 1:
Prove, using a loop invariant, that
regDoublePower is partially correct with respect to:
Input assertion: N == N0 and N0 > 0
Output assertion: result == b^N0
Problem 2:
Prove, using a loop invariant, that
rpDoublePower is partially correct with respect to:
Input assertion: N == N0 and N0 > 0
Output assertion: result == b^N0
There is a loop invariant that is simple and elegant, but finding it will be a little tricky. When you get it, do not share it with others; that would deprive them of the joy of discovering it. I am happy to provide a hint or two if needed.
In proving the invariant, you may wish to make use of the fact that N%2 == 1 implies N/2 == (N-1)/2, because / is integer (truncating) division.
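That fact is easy to confirm mechanically; this throwaway check (not part of the assignment, class name invented) verifies it over a range of odd values:

```java
public class DivFact {
    public static void main(String[] args) {
        // For odd N, truncating division discards a remainder of exactly 1,
        // so N/2 and (N-1)/2 name the same integer.
        for (long N = 1; N < 1_000_001; N += 2) {
            if (N / 2 != (N - 1) / 2)
                throw new AssertionError("fails at N = " + N);
        }
        System.out.println("N/2 == (N-1)/2 holds for all odd N tested");
    }
}
```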
Problems 3 and 4:
Perform a complexity analysis of two implementations of each technique for finding powers of some integer base, b. Express the running time of each in terms of the size of the power, N. For simplicity, we will always use the same base, 7, and we will always use values of the power N that are 2^i for some i. That is, the powers will themselves be powers of 2. Because the base is a single digit, the length of the result in decimal will be linearly proportional to N, so N is a reasonable argument for a complexity measure.
The two methods shown earlier use doubles; they will be compared to the same methods using exact arithmetic with Java's BigInteger class. First, the ordinary method for computing powers, this time with BigIntegers:
static BigInteger regBigIntPower(long _b, long N)
{
    BigInteger b = BigInteger.valueOf(_b);   // b = _b;
    BigInteger result = BigInteger.ONE;      // result = 1;
    while (N > 0)
    {
        result = result.multiply(b);
        --N;
    }
    return result;
}
and the Russian Peasant's version with BigInteger:
static BigInteger rpBigIntPower(long _b, long N)
{
    BigInteger s = BigInteger.valueOf(_b);   // s = _b;
    BigInteger result = BigInteger.ONE;      // result = 1;
    while (N > 0)
    {
        if (N%2 == 1) result = result.multiply(s);
        N = N/2;
        s = s.multiply(s);
    }
    return result;
}
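As an illustrative aside, the BigInteger version stays exact where double arithmetic must round. The class name and the choice of 7^64 below are assumptions for demonstration only: 7^64 has 55 decimal digits, while a double keeps only about 16 significant digits.

```java
import java.math.BigInteger;

public class ExactVsDouble {
    // rpBigIntPower as given in the assignment text.
    static BigInteger rpBigIntPower(long _b, long N) {
        BigInteger s = BigInteger.valueOf(_b);
        BigInteger result = BigInteger.ONE;
        while (N > 0) {
            if (N % 2 == 1) result = result.multiply(s);
            N = N / 2;
            s = s.multiply(s);
        }
        return result;
    }

    public static void main(String[] args) {
        // Cross-check against the library's own exponentiation,
        // then show the rounded double value for contrast.
        BigInteger exact = rpBigIntPower(7, 64);
        if (!exact.equals(BigInteger.valueOf(7).pow(64)))
            throw new AssertionError("disagrees with BigInteger.pow");
        System.out.println("exact : " + exact);
        System.out.println("double: " + Math.pow(7, 64));
    }
}
```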
Problem 3:
Using asymptotic analysis, find the big-O
running time of the two basic algorithms (regular and Russian Peasants method) in
terms of N (the power to which the base is raised),
using the number 7 as the value of b in both cases.
Assumptions:
For double, assume that the multiplication time is
constant. For BigInteger, assume that the multiplication time is
proportional to the product of the lengths of numbers being multiplied (as it
would be for elementary school multiplication).
Problem 4:
Describe how well the empirical evidence supports the big-O running times you found in Problem 3. Justify your answer based on the assumptions made about the times of the individual operations.
The file /cs/cs60/a/11/Power.java already
contains these four methods. It also contains a main
method that creates a table of running times for each method. The result of
running this program is found in Power.out.
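For readers without access to that file, a timing table can be produced with a sketch along these lines. The particular values of N, the output format, and the class name are assumptions for illustration, not necessarily what Power.java actually does:

```java
public class TimingSketch {
    static double regDoublePower(long b, long N) {
        double result = 1.0;
        while (N > 0) {
            result = result * b;
            --N;
        }
        return result;
    }

    public static void main(String[] args) {
        // Wall-clock each call for N = 2^16 .. 2^24. A single nanoTime
        // sample is rough, since it includes JIT warmup and OS noise.
        for (int i = 16; i <= 24; i++) {
            long N = 1L << i;
            long t0 = System.nanoTime();
            regDoublePower(7, N);
            long t1 = System.nanoTime();
            System.out.printf("N = 2^%-2d : %10.3f ms%n", i, (t1 - t0) / 1e6);
        }
    }
}
```

(For these large N the double result overflows to infinity, but the loop still executes all N iterations, which is what the timing measures.)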