1. (Complexity) [45 points total]

(A) 25 points    

Derive a tight "O" upper bound on the computational complexity of the following program, as a function of N:

for( int i = N; i >= 1; i = i/2 )
  for( int j = 1; j <= i; j = j*2 )
    for( int k = 1; k <= j; k++ )
    {
    … some computation using O(1) time …
    }


Solution:

  For a given value of j, the innermost loop does j steps
  of work. The middle loop therefore does j steps of work
  for each of these values of j:
    
    1, 2, 4, 8, ... i/4, i/2, i
    
  That is the same as doing a total of
  
    1 + 2 + 4 + 8 + ... + i/4 + i/2 + i
    
  steps of work.
  This sum is 2i - 1.
  
  To see this, add 1 to the above expression; the terms
  repeatedly combine (1 + 1 = 2, 2 + 2 = 4, ...) until you
  obtain 2i. (Strictly, this assumes i is a power of 2;
  for general i the last term is the largest power of 2
  not exceeding i, and the sum is still less than 2i.)
  
  Thus, the inner two loops do O(i) work.
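  The identity can be checked directly by summing the powers
  of 2 up to i; a short C sketch (the function name is ours,
  not part of the problem):

```c
/* Sum 1 + 2 + 4 + ... over powers of 2 up to and including i.
   When i is a power of 2, the result is 2i - 1. */
long geometric_sum(long i) {
    long sum = 0;
    for (long j = 1; j <= i; j *= 2)
        sum += j;
    return sum;
}
```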
  
  Finally, the outermost loop does that O(i) work
  for i =
  
    N, N/2, N/4, ... , 4, 2, 1
    
  That is, it does 
  
    N + N/2 + N/4 + ... + 4 + 2 + 1
  
    work in total, which by the same reasoning as above is
    less than 2N, i.e., O(N).
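  As a sanity check on the bound, this C sketch (the counter
  and function name are ours) tallies the O(1) steps the
  triple loop actually executes; the count stays between N
  and 4N, confirming O(N):

```c
/* Count the constant-time steps executed by the triple loop
   from the problem, to check the O(N) bound empirically. */
long count_steps(long n) {
    long steps = 0;
    for (long i = n; i >= 1; i /= 2)
        for (long j = 1; j <= i; j *= 2)
            for (long k = 1; k <= j; k++)
                steps++;   /* stands in for the O(1) computation */
    return steps;
}
```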



(B) 20 points     Three different classes of sorting algorithms we have seen have run times of O(N^2), O(N log N), and O(N). What are some reasons one might choose to use one of the "inferior" algorithms, rather than an O(N) algorithm?

Solution:

  An algorithm that is "inferior" asymptotically might have
  better performance than a "superior" algorithm for small
  input sizes. If the algorithm is going to handle only
  inputs of such small size, it makes sense to use the
  "inferior" one in those situations.
  
  Also, O(N) sorting algorithms require the data to satisfy
  certain assumptions (basically, that the keys come from a
  known, bounded range). Handling more general sets of data
  requires a comparison sort, i.e., an algorithm with
  O(N log N) running time at best.
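  For concreteness, here is a minimal counting-sort sketch in
  C illustrating the bounded-key assumption; MAX_KEY and the
  function name are our own choices, and the sort is O(N)
  only because MAX_KEY is a known constant:

```c
/* Counting sort for int keys in the known range [0, MAX_KEY].
   Runs in O(N + MAX_KEY) time, i.e. O(N) for constant MAX_KEY. */
#define MAX_KEY 255

void counting_sort(int *a, int n) {
    int count[MAX_KEY + 1] = {0};
    for (int idx = 0; idx < n; idx++)
        count[a[idx]]++;             /* tally each key */
    int pos = 0;
    for (int key = 0; key <= MAX_KEY; key++)
        while (count[key]-- > 0)     /* write keys back in order */
            a[pos++] = key;
}
```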