Chapter 7: How Hard is the Problem?

It is a mistake to think you can solve any major problems just with potatoes.

—Douglas Adams

7.1 The Never-ending Program

Chances are you’ve written a program that didn’t work as intended. Many of us have had the particularly frustrating experience of running our new program and observing that it seems to be running for a very long time without producing the output that we expected. “Hmmm, it’s been running now for nearly a minute. Should I stop the program? Or, perhaps, since I’ve invested this much time already I should give the program another minute to see if it’s going to finish,” you say to yourself. Another minute passes and you ask yourself if you should let it run longer or bail out now. Ultimately, you might decide that the program is probably stuck in some sort of loop and it isn’t going to halt, so you press Control-C and the program stops. You wonder though if perhaps the program was just a moment away from giving you the right answer. After all, the problem might just inherently require a lot of computing time.

Wouldn’t it be nice to have some way to check if your program is eventually going to halt?


The answer to that question is “yes!” Of course that would be nice!

Then, if you knew that your program wasn’t going to halt, you could work on debugging it rather than wasting your time watching it run and run and run.

In fact, perhaps we could even write a program, let’s call it a “halt checker,” that would take any program as input and simply return a Boolean: True if the input program eventually halts and False otherwise. Recall from Chapter 3 that functions can take other functions as input. So there is nothing really strange about giving the hypothetical halt checker function another function as input. However, if you’re not totally comfortable with that idea, another alternative is that we would give the halt checker a string as input and that string would contain the code for the Python function that we want to check. Then, the halt checker would somehow determine if the function in that string eventually halts.

For example, consider the following string which we’ve named myProgram:

myProgram = '''def foo():
        return foo()'''

This string contains the Python code for a function called foo. Clearly, that function runs forever because the function calls itself recursively and there is no base case to make it stop. Therefore, if we were to run our hypothetical haltChecker function it should return False:

>>> haltChecker(myProgram)
False

How would we write such a halt checker? It would be easy to check for certain kinds of obvious problems, like the problem in the program foo in the example above. It seems though that it might be quite difficult to write a halt checker that could reliably evaluate any possible program that we would give it. After all, some programs are very complicated with all kinds of recursion, for loops, while loops, etc. Is it even possible to write such a halt checker program? In fact, in this chapter we’ll show that this problem is impossible to solve on a computer. That is, it is not possible to write a halt checker program that would tell us whether any other program eventually halts or not.

How can we say it’s impossible!? That seems like an irresponsible statement to make. After all, just because nobody has succeeded in writing such a program yet doesn’t allow us to conclude that it’s impossible. You’ve got a good point there – we agree! However, the task of writing a halt checker truly is impossible – it doesn’t exist now and it will never exist. In this chapter we’ll be able to prove that beyond a shadow of a doubt.

7.2 Three Kinds of Problems: Easy, Hard, and Impossible.

Computer scientists are interested in measuring the “hardness” of computational problems in order to understand how much time (or memory, or some other precious resource) is required for their solution. Generally speaking, time is the most precious resource so we want to know how much time it will take to solve a given problem. Roughly speaking, we can categorize problems into three groups: “easy”, “hard”, or “impossible.”


Recall that an “algorithm” is a computational recipe. It is more general than a “program” because it isn’t specific to Python or any other language. However, a program implements an algorithm.

An “easy” problem is one for which there exists a program – or algorithm – that is fast enough that we can solve the problem in a reasonable amount of time. All of the problems that we’ve considered so far in this book have been of that type. No program that we’ve written here has taken days or years of computer time to run. That’s not to say that coming up with the program was always easy for us, the computer scientist. That’s not what we mean by “easy” in this context. Rather, we mean that there exists an algorithm that runs fast enough that the problem can be solved in a reasonable amount of time. You might ask, “what do you mean by reasonable?” That’s a reasonable question and we’ll come back to that shortly.

In contrast, a “hard” problem is one for which we can find an algorithm to solve it, but every algorithm that we can find is so slow as to make the program practically useless. And the “impossible” ones, as we suggested in the introduction to this chapter, are indeed just that - absolutely, provably, impossible to solve no matter how much time we’re willing to let our computers crank away on them!

7.2.1 Easy Problems

Consider the problem of taking a list of \(N\) numbers and finding the smallest number in that list. One simple algorithm just “cruises” through the list, keeping track of the smallest number seen so far. We’ll end up looking at each number in the list once, and thus the number of steps required to find the smallest number is roughly \(N\). A computer scientist would say that the running time of this algorithm “grows proportionally to \(N\).”
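
As a concrete illustration, here is a minimal sketch of that “cruising” algorithm in Python (the name findMin is simply our own choice):

def findMin(numbers):
    """Return the smallest number in a nonempty list by scanning it once."""
    smallest = numbers[0]            # smallest value seen so far
    for value in numbers[1:]:        # look at each remaining number once
        if value < smallest:
            smallest = value
    return smallest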

In Chapter 5 we developed an algorithm, called selection sort, for sorting items in ascending order. If you don’t remember it, don’t worry. Here’s how it works in a nutshell: Imagine that we have \(N\) items in a list – we’ll assume that they are numbers for simplicity – and they are given to us in no particular order. Our goal is to sort them from smallest to largest. In Chapter 5, we needed that sorting algorithm as one step in our music recommendation system.

Here’s what the selection sort algorithm does. It first “cruises” through the list, looking for the smallest element. As we observed a moment ago, this takes roughly \(N\) steps. Now the algorithm knows the smallest element and it puts that element in the first position in the list by simply swapping the first element in the list with the smallest element in the list. (Of course, it might be that the first element in the list is the smallest element in the list, in which case that swap effectively does nothing - but in any case we are guaranteed that the first element in the list now is the correct smallest element in the list.)

In the next phase of the algorithm, we’re looking for the second smallest element in the list. In other words, we’re looking for the smallest element in the list excluding the element in the first position, which is now known to be the smallest element in the list. Thus, we can start “cruising” from the second element in the list and this second phase will take \(N-1\) steps. The next phase will take \(N-2\) steps, then \(N-3\), and so on, all the way down to 1. So, the total number of steps taken by this algorithm is \(N + (N-1) + (N-2) + \dots + 1\). There are \(N\) terms in that sum, each of which is at most \(N\). Therefore, the total sum is certainly less than \(N^2\). It turns out, and it’s not too hard to show, that the sum is actually \(\frac{N(N+1)}{2}\), which is approximately \(\frac{N^2}{2}\).
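
For concreteness, here is one way the selection sort algorithm just described might be written in Python. This is a sketch of the idea, not necessarily the exact code from Chapter 5:

def selectionSort(myList):
    """Sort myList in place from smallest to largest using selection sort."""
    N = len(myList)
    for i in range(N):
        # Cruise through the rest of the list to find the smallest remaining element.
        smallestIndex = i
        for j in range(i + 1, N):
            if myList[j] < myList[smallestIndex]:
                smallestIndex = j
        # Swap that smallest element into position i.
        myList[i], myList[smallestIndex] = myList[smallestIndex], myList[i]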


You might hear a computer scientist say: “The running time is big-Oh of \(N^2\)” - written \(O(N^2)\). That’s CS-talk for what we’re talking about here.

A computer scientist would say that the running time of this algorithm “grows proportionally to \(N^2\).” It’s not that the running time is necessarily exactly \(N^2\), but whether it’s \(\frac{1}{2} N^2\) or \(42 N^2\), the \(N^2\) term dictates the shape of the curve that we would get if we plotted running time as a function of \(N\).

As another example, the problem of multiplying two \(N \times N\) matrices using the method that you may have seen in an earlier math course takes time proportional to \(N^3\). All three of these algorithms - finding the minimum element, sorting, and multiplying matrices – have running times of the form \(N^k\) where \(k\) is some number: We saw that \(k =1\) for finding the minimum, \(k = 2\) for selection sorting, and \(k=3\) for our matrix multiplication algorithm. Algorithms that run in time proportional to \(N^k\) are said to run in polynomial time because \(N^k\) is a polynomial of degree \(k\). In practice, polynomial time is a reasonable amount of time. Although you might argue that an algorithm that runs in \(N^{42}\) steps is nothing to brag about, in fact using polynomial time as our definition of “reasonable” time is, um, er, reasonable – for reasons that will be apparent shortly.
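
For instance, here is a sketch of that matrix multiplication method in Python, with matrices represented as lists of lists. Each of the three nested loops runs \(N\) times, giving roughly \(N^3\) steps in all:

def matrixMultiply(A, B):
    """Return the product of two N x N matrices given as lists of lists."""
    N = len(A)
    C = [[0] * N for row in range(N)]    # the N x N result, initially all zeros
    for i in range(N):
        for j in range(N):
            for k in range(N):           # one multiplication and addition per step
                C[i][j] += A[i][k] * B[k][j]
    return C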

7.2.2 Hard Problems

Imagine now that you are a salesperson who needs to travel to a set of cities to show your products to potential customers. The good news is that there is a direct flight between every pair of cities and, for each pair, you are given the cost of flying between those two cities. Your objective is to start in your home city, visit each city exactly once, and return back home at lowest total cost. For example, consider the set of cities and flights shown in Figure 7.1 and imagine that your start city is Aville.


Figure 7.1: Cities and flight costs.

A tempting way to solve this problem is the following: Starting at our home city, Aville, fly on the cheapest flight. That’s the flight of cost 1 to Beesburg. From Beesburg, we could fly on the least expensive flight to a city that we have not yet visited, in this case Ceefield. From Ceefield we would then fly on the cheapest flight to a city that we have not yet visited. (Remember, the problem stipulates that you only fly to a city once, presumably because you’re busy and you don’t want to fly to any city more than once - even if it might be cheaper to do so.) So now, we fly from Ceefield to Deesdale and from there to Eetown. Uh oh! Now, the constraint that we don’t fly to a city twice means that we are forced to fly from Eetown to Aville at a cost of 42. The total cost of this “tour” of the cities is \(1 + 1 + 1 + 1 + 42 = 46\). This approach is called a “greedy algorithm” because at each step it tries to do what looks best at the moment, without considering the long-term implications of that decision. This greedy algorithm didn’t do so well here. For example, a much better solution goes from Aville to Beesburg to Deesdale to Eetown to Ceefield to Aville with a total cost of \(1 + 2 + 1 + 2 + 3 = 9\). In general, greedy algorithms are fast but often fail to find optimal or even particularly good solutions.
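
Here is a sketch of that greedy algorithm in Python. It assumes, purely for illustration, that the flight costs are stored in a dictionary named cost with an entry cost[(X, Y)] for every ordered pair of distinct cities; the function name greedyTour is also our own invention:

def greedyTour(cities, cost, home):
    """Build a tour by always taking the cheapest flight to an unvisited city."""
    tour = [home]
    unvisited = set(cities) - {home}
    while unvisited:
        here = tour[-1]
        # Greedy choice: the cheapest unvisited city reachable from here.
        nextCity = min(unvisited, key=lambda city: cost[(here, city)])
        tour.append(nextCity)
        unvisited.remove(nextCity)
    tour.append(home)                    # finally, fly back home
    total = sum(cost[(tour[i], tour[i + 1])] for i in range(len(tour) - 1))
    return tour, total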

It turns out that finding the optimal tour for the Traveling Salesperson Problem is very difficult. Of course, we could simply enumerate every one of the possible different tours, evaluate the cost of each one, and then find the one of least cost. If we were to take this approach in a situation with \(N\) cities, how many different tours would we have to explore?


Table 7.1: Time to solve problems of various sizes using algorithms with different running times on a computer capable of performing one billion operations per second. Times are in seconds unless indicated otherwise.


We’re using \(N!\) rather than \((N-1)!\) here for simplicity. They only differ by a factor of \(N\), which is negligible in the grand scheme of things.

Notice that there are \(N-1\) choices for the first city to visit. From there, there are \(N-2\) choices for the next city, then \(N-3\) for the third, and so on. Therefore, there are a total of \((N-1) \times (N-2) \times (N-3) \times \dots \times 1\) possible tours. That product is called “\(N-1\) factorial” and is denoted \((N-1)!\). The exclamation point is appropriate because that quantity grows very rapidly. While \(5!\) is a modest \(120\), \(10!\) is over 3 million and \(15!\) is over one trillion. Computers are fast, but examining one trillion different tours would take a long time on even the fastest computer.


That’s a lot of exclamation marks!
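
Before looking at the numbers, here is a sketch of what this brute-force enumeration might look like in Python, using the same hypothetical cost dictionary as in the greedy sketch above. It simply tries every ordering of the other cities and remembers the cheapest tour found:

from itertools import permutations

def bestTour(cities, cost, home):
    """Try all (N-1)! tours that start and end at home; return the cheapest one."""
    others = [city for city in cities if city != home]
    bestCost, bestOrder = None, None
    for order in permutations(others):   # every possible ordering of the other cities
        tour = [home] + list(order) + [home]
        total = sum(cost[(tour[i], tour[i + 1])] for i in range(len(tour) - 1))
        if bestCost is None or total < bestCost:
            bestCost, bestOrder = total, tour
    return bestOrder, bestCost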

Table 7.1 demonstrates just how bad a function like \(N!\) is in comparison to polynomial functions like \(N\), \(N^2\), \(N^3\), and \(N^5\). In this table, we are imagining a computer that is capable of performing one billion “operations” per second so that our algorithm for finding the minimum element in a list in time proportional to \(N\) can find the minimum in a list of one billion items in one second. We’re being pretty generous here, and assuming that this computer can also enumerate one billion traveling salesperson tours in one second, but this will only make our case against the brute-force enumeration of traveling salesperson tours even stronger!

Ouch! \(N!\) is clearly very bad! One might be tempted to believe that in a few years, when computers get faster, this will no longer be a problem. Sadly, functions like \(N!\) grow so fast that a computer that is 10 or 100 or 1000 times faster provides us with very little relief. For example, while our \(N!\) algorithm takes \(8400\) trillion years for a problem of size \(30\) on our current one billion operations per second computer, it will take \(8.4\) trillion years on a computer that is 1000 times faster. That’s hardly good news!

The Traveling Salesperson Problem is just one of many problems for which algorithms exist but are much too slow to be useful - regardless of how fast computers become in the future. In fact, the Traveling Salesperson Problem is in a group - or “class” - of problems called the NP-hard problems. Nobody knows how to solve any of these problems efficiently (that is, in polynomial time like \(N\) or \(N^2\) or \(N^k\) for any constant \(k\)). Moreover, these problems all have the property that if any one of them can be solved efficiently (in polynomial time) then, amazingly, all of these problems can be solved efficiently.

Metaphorically, we can think of the NP-hard problems as a giant circle of very big dominoes - one domino for the Traveling Salesperson Problem and one for every other NP-hard problem. We don’t know how to knock any of these dominoes over (metaphorically, how to solve any of them efficiently) but we know that if any one of them falls over, all of the rest of them will fall (be solvable by an efficient algorithm) as well.

Unfortunately, many important and interesting problems are NP-hard. For example, the problem of determining how a protein folds in three dimensions is NP-hard. That’s truly unfortunate because if we could predict how proteins fold, we could use that information to design better drugs and combat a variety of diseases. As another example, imagine that we have a number of items of different size that we want to pack into shipping containers of some given size. How should we pack our items into shipping containers so as to minimize the number of containers used? This problem too is NP-hard. Many games and puzzles are also NP-hard. For example, the problem of determining whether large Sudoku puzzles are solvable is NP-hard as are problems related to solving Minesweeper games and many others.

The good news is that just because a problem is NP-hard does not mean that we can’t find reasonably good - albeit probably not perfect - solutions. Computer scientists work on a variety of strategies for dealing with NP-hard problems. One strategy is something called approximation algorithms. An approximation algorithm is an algorithm that runs fast (in polynomial time) and finds solutions that are guaranteed to be within a certain percentage of the best possible solution. For example, for certain types of Traveling Salesperson Problems we can quickly find solutions that are guaranteed to be no more than 50% more expensive than the best possible tour. You might reasonably ask, “How can you guarantee that a solution is within 50% of optimal when you have no way of efficiently finding the optimal solution?” This is indeed surprising and seemingly magical, but in fact such guarantees can be made. This topic is often studied in an “Algorithms” course.


That’s a shameless sales pitch for another course if there ever was one!

There are many other approaches as well to dealing with NP-hard problems. One approach is called heuristic design. A heuristic is essentially a “rule of thumb” for approaching a problem. For example, for the Traveling Salesperson Problem, you might use the “greedy” heuristic that we mentioned earlier: start by visiting the city that you can get to most cheaply. From there, fly on the cheapest flight to a city that you haven’t visited yet. Continue in this fashion until you’ve visited each city once and then fly back home. That’s a simple rule and it often finds reasonably good solutions, although it sometimes does quite badly, as we saw earlier. In contrast to approximation algorithms, heuristics don’t make any promises about how well they will do. Dealing with NP-hard problems remains an active area of research in computer science. For an example of an interesting biologically inspired technique for dealing with NP-hard problems, see below.

Note

Genetic Algorithms: A Biologically-Inspired Approach for Dealing with NP-hard Problems.

One technique for dealing with NP-hard problems is borrowed from biology. The idea is to “evolve” reasonably good solutions to our problem by simulating evolution on a population of possible solutions to our problem. Returning to our Traveling Salesperson example, let’s call the cities in Figure 7.1 by their first letters: \(A\), \(B\), \(C\), \(D\) and \(E\). We can represent a tour by a sequence of those letters in some order, beginning with \(A\) and with each letter appearing exactly once. For example, the tour Aville to Beesburg to Deesdale to Eetown to Ceefield and back to Aville would be represented as the sequence \(ABDEC\). Notice that we don’t include the \(A\) at the end because it is implied that we will return to \(A\) at the end.

Now, let’s imagine a collection of some number of orderings such as \(ABDEC\), \(ADBCE\), \(AECDB\), and \(AEBDC\). Let’s think of each such ordering as an “organism” and the collection of these orderings as a “population”. Pursuing this biological metaphor further, we can evaluate the “fitness” of each organism/ordering by simply computing the cost of flying between the cities in that given order.

Now let’s push this idea one step further. We start with a population of organisms/orderings. We evaluate the fitness of each organism/ordering. Now, some fraction of the most fit organisms “mate”, resulting in new “child” orderings where each child has some attributes from each of its “parents.” We now construct a new population of such children for the next generation. Hopefully, the next generation will be more fit - that is, it will, on average, have less expensive tours. We repeat this process for some number of generations, keeping track of the most fit organism (least cost tour) that we have found and report this tour at the end.

“That’s a cute idea,” we hear you say, “but what’s all this about mating Traveling Salesperson orderings?” That’s a good question - we’re glad you asked! There are many possible ways we could define the process by which two parent orderings give rise to a child ordering. For the sake of example, we’ll describe a very simple (and not very sophisticated) method; better methods have been proposed and used in practice.

Imagine that we select two parent orderings from our current population to reproduce (we assume that any two orderings can mate): \(ABDEC\) and \(ACDEB\). We choose some point at which to split the first parent’s sequence in two, for example as \(ABD | EC\). The offspring ordering receives \(ABD\) from this parent. The remaining two cities to visit are \(E\) and \(C\). In order to get some of the second parent’s “genome” in this offspring, we put \(E\) and \(C\) in the order in which they appear in the second parent. In our example, the second parent is \(ACDEB\) and \(C\) appears before \(E\), so the offspring is \(ABDCE\).

Let’s do one more example. We could have also chosen \(ACDEB\) as the parent to split, and split it at \(AC | DEB\), for example. Now we take the \(AC\) from this parent. In the other parent, \(ABDEC\), the remaining cities \(DEB\) appear in the order \(BDE\), so the offspring would be \(ACBDE\).
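
This particular mating rule is simple enough to sketch in Python, representing each tour as a string of city letters. The function name crossover and the splitPoint parameter are our own choices for illustration:

def crossover(parent1, parent2, splitPoint):
    """Combine two parent orderings into a child ordering.

    The child keeps parent1's cities up to splitPoint and then visits the
    remaining cities in the order in which they appear in parent2."""
    head = parent1[:splitPoint]          # e.g. 'ABD' from the split 'ABD|EC'
    tail = [city for city in parent2 if city not in head]
    return head + ''.join(tail)

On the two examples above:

>>> crossover('ABDEC', 'ACDEB', 3)
'ABDCE'
>>> crossover('ACDEB', 'ABDEC', 2)
'ACBDE'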

In summary, a genetic algorithm is a computational technique that is effectively a simulation of evolution with natural selection. The technique allows us to find good solutions to hard computational problems by imagining candidate solutions to be metaphorical organisms and collections of such organisms to be populations. The population will generally not include every possible “organism” because there are usually far too many! Instead, the population comprises a relatively small sample of organisms and this population evolves over time until we (hopefully!) obtain very fit organisms (that is, very good solutions) to our problem.

7.3 Impossible Problems!

So far, we’ve talked about “easy” problems – those that can be solved in polynomial time – and “hard” problems – those that can be solved but seemingly require impractically huge amounts of time to solve. Now we turn to problems that are downright impossible to solve, no matter how much time we are willing to spend.

First, we’d like to establish that there even exist impossible problems. We’ll do this, in essence, by showing that the number of computational problems is much larger than the number of different programs. Therefore, there must be some problems for which there exist no programs. This is a strange and beautiful idea. It’s strange because it ultimately will allow us to establish that there are problems that can’t be solved by programs, but it doesn’t actually tell us what any of those problems are! This is what a mathematician calls an existence proof: We show that something exists (in this case uncomputable problems) without actually identifying a concrete example! The idea is also beautiful because it uses the notion of different sizes of infinities. In a nutshell, it turns out that there are an infinite number of different programs that we could write but there is a larger infinite number of different computational problems. “Larger infinities!?” we hear you exclaim. “Have you professors totally lost your minds?” Perhaps, but that’s not relevant here. Let’s dive into our adventure in infinities, large and small.

7.3.1 “Small” Infinities

Imagine that we gave you three jelly beans. (Sorry, you’ll just have to imagine them; after all, if we actually gave them to you, you’d probably eat them.) Of course, you are good at counting and you instantly recognize that you have three jelly beans, but pardon us for a moment as we look at this another way. You have three jelly beans because you can match up your jelly beans with the set of counting numbers \(\{1, 2, 3\}\). A mathematician would say that there is a bijection – or “perfect matching” – between your set of three jelly beans and the set of counting numbers \(\{1, 2, 3\}\).

More precisely, a bijection is a matching of elements from one set (e.g., our set of jelly beans) to another set (e.g., our set of numbers \(\{1, 2, 3\}\)) such that every element in the first set is matched to a distinct element in the second set and every element in the second set is matched to something from the first set. In other words, a bijection is a “perfect matching” of elements between our two sets. We say that two sets have the same cardinality (or “size”) if there is a bijection between them.

That seems pretty obvious and more than a bit pedantic. But here’s where it gets interesting! Once we accept that two sets have the same cardinality if there exists a bijection between them, it’s natural to see what happens when the two sets are infinite. Consider, for example, the set of counting numbers \(\{1, 2, 3, 4, \dots \}\) and the set of even counting numbers \(\{2, 4, 6, 8, \dots \}\). Both sets are clearly infinite. Moreover, it seems that the set of counting numbers is about twice as large as the set of even counting numbers. Strangely though, the two sets have the same cardinality: We can establish a bijection - a perfect matching – between the two! Here it is:

\(1 \leftrightarrow 2\)
\(2 \leftrightarrow 4\)
\(3 \leftrightarrow 6\)
\(4 \leftrightarrow 8\)
\(\vdots\)

Notice that the mapping here associates each counting number \(x\) with the even counting number \(2x\). Let’s see if it’s truly a bijection. Is every counting number associated with a distinct even counting number? Yes, because any two different counting numbers are matched to two different even numbers. And, is every even counting number matched in this way to some counting number? Yes, because any even counting number \(y\) is matched to the counting number \(y/2\). So, strangely, these two sets have the same cardinality!

By similar arguments, the set of all integers (the counting numbers, the negatives of the counting numbers, and 0) also has a bijection with the counting numbers, so these two sets also have the same size. Amazingly, even the set of all rational numbers (numbers that are of the form \(\frac{p}{q}\) where \(p\) and \(q\) are integers) has a bijection with the counting numbers. That’s truly strange because on the face of it, it seems that there are way more rational numbers than counting numbers. But such is the reality of infinity!

“Larger” Infinities

Any set that has the same cardinality as the counting numbers is said to be countably infinite. So, as we noted above, the set of even counting numbers, the set of integers, and the set of all rational numbers are all countably infinite. If that’s the case, aren’t all infinite sets countably infinite?

Surprisingly, the answer is “no”!

The story begins in Germany in the last years of the nineteenth century. A brilliant mathematician named Georg Cantor gave a beautiful argument that shows that the set of real numbers – the set of all numbers, including the rational and irrational numbers – contains a “larger” infinity of elements than the set of counting numbers.

Cantor’s proof made many mathematicians in his day very uncomfortable and, in fact, many went to their graves feeling that there must be a problem somewhere in Cantor’s logic. But Cantor’s proof is not only rock-solid; it is one of the most important results in all of mathematics. Moreover, as we’ll see shortly, it’s the very basis for showing that there exist uncomputable problems.

Cantor’s proof is based on the method of proof by contradiction. (See the sidebar on proofs by contradiction.) The approach is to assume that something is true (e.g., that the set of real numbers is countably infinite) and derive from that a contradiction (e.g., \(1=2\) or something equally obviously false). We are therefore forced to conclude that our initial assumption (e.g., that the set of real numbers is countably infinite) must also be false (because if we assume it’s true we are led to conclude something that is definitely false).

Note

A Quick Primer on Proofs by Contradiction

In case you haven’t seen the technique of proof by contradiction before, let’s take a brief aside to demonstrate it with a famous mathematical result that goes back to the days of ancient Greece.

The Greeks wondered if all numbers are rational - that is, whether every number can be expressed as the ratio of two integers like \(\frac{1}{2}\), \(-\frac{3}{7}\), etc. According to legend, Hippasus of Metapontum discovered that \(\sqrt{2}\) is not rational - it cannot be written as a ratio of two integers. The ancient Greeks, also according to legend, treated this result as an official secret, and Hippasus was murdered when he divulged it.

Here’s the proof. Assume that \(\sqrt{2}\) is rational. Then it can be written as a ratio of two integers, \(\frac{p}{q}\). Every fraction can be written in lowest terms (that is, canceling out common factors), so let’s assume that \(\frac{p}{q}\) is in lowest terms. In particular, that means that \(p\) and \(q\) cannot both be even, since if they were we would have cancelled out the multiples of \(2\) from each of \(p\) and \(q\) when putting the fraction in lowest terms.

OK, so now \(\sqrt{2} = \frac{p}{q}\). Let’s square both sides to get \(2 = \frac{p^2}{q^2}\) and now let’s multiply both sides by \(q^2\) to get \(2q^{2} = p^2\). Since \(2q^2\) is even, this means that \(p^2\) is even. If \(p^2\) is even, clearly \(p\) must be even (since the square of an odd number is always odd!). Since \(p\) is even, let’s rewrite it as \(p = 2 \ell\) where \(\ell\) is also an integer. So now, \(2q^{2} = p^2 = (2\ell)^2 = 4 \ell^2\). Since \(2q^2 = 4 \ell^2\), we can divide both sides by \(2\) and we get \(q^2 = 2 \ell^2\). Aha! So \(q^2\) is even! But this means that \(q\) is even. That’s a contradiction because when we wrote \(\sqrt{2} = \frac{p}{q}\) we stipulated that \(\frac{p}{q}\) is in lowest terms, and thus \(p\) and \(q\) could not both be even.

We’ve just established that if \(\sqrt{2} = \frac{p}{q}\) in lowest terms then both \(p\) and \(q\) must be even and thus \(\frac{p}{q}\) is not in lowest terms. That’s like saying if it’s raining then it’s not raining. In that case, it can’t be raining! And in just the same way in our case, \(\sqrt{2}\) cannot be written as a fraction.

In a nutshell, a proof by contradiction shows that something is false by first assuming that it’s true and then deducing an absurd (namely false) outcome. This forces us to conclude that our assumption was false, thereby proving what we seek to prove!

Cantor not only showed that the real numbers are uncountably infinite, he showed that even the set of real numbers between 0 and 1 is uncountably infinite! Here’s how his proof goes: Assume that the set of real numbers between 0 and 1 is countably infinite. (Note that he’s setting up the proof by contradiction. He’s hoping to eventually conclude that something absurd follows from this assumption, therefore forcing us to conclude that the assumption is false!) Then, there must exist a bijection – a perfect matching – between the set of counting numbers and the set of real numbers between 0 and 1. We don’t know what that bijection looks like, but it would need to match every counting number with a distinct real number between 0 and 1.

A real number between 0 and 1 can be represented by a decimal point followed by an infinite number of digits. For example, the number \(.5\) can be represented as \(0.5000\dots\). The number \(\frac{1}{3}\) can be represented as \(.333\dots\). The fractional part of \(\pi\) would be \(.141592654\dots\). So a bijection between the counting numbers and the real numbers between 0 and 1 could be represented by a table that looks “something” like this. We say “something” because we don’t know what the actual association between counting numbers and real numbers would be, except for the fact that it is necessarily a bijection: we must associate a distinct real number with every counting number, and every real number must appear somewhere in the right-hand column of the listing.

\(1 \leftrightarrow .5000\dots\)
\(2 \leftrightarrow .3333\dots\)
\(3 \leftrightarrow .1415\dots\)
\(\vdots\)

Now, Cantor says: “Aha! Well now that you have given me the bijection, I’m going to look at that bijection and highlight the first digit of the real number matched with counting number 1, the second digit of the real number matched with counting number 2, the third digit of the real number matched with counting number 3, and so forth, all the way down the list.” This amounts to going down the diagonal of the listing of real numbers, indicated in boldface in the table below.

\(1 \leftrightarrow .\mathbf{5}000\dots\)
\(2 \leftrightarrow .3\mathbf{3}33\dots\)
\(3 \leftrightarrow .14\mathbf{1}5\dots\)
\(\vdots\)

“Now,” says Cantor, “Watch this! I’m going to write down a new number as follows: My new number differs from the real number matched with 1 by changing its first digit (e.g., changing 5 to 6). Similarly, my new number differs from the real number matched with 2 by changing its second digit (e.g., changing 3 to 4). It differs from the real number matched with 3 by changing its third digit (e.g., changing from 1 to 2), etc. In general, for any counting number \(n\), look at the real number matched to \(n\) in your bijection. My number will differ from that real number in the \(n^{th}\) digit. For example, for the hypothetical bijection that begins as in the table above, the real number that I construct would begin with \(.642\dots\).”

“What’s with that number?” you ask. “Well,” says Cantor, “You promised me that you had a bijection between the counting numbers and the real numbers between 0 and 1, and that real number that I just constructed is certainly between 0 and 1. So where is it in the table that represents your bijection?” He’s got a good point. Where is it? You might argue, “Be patient, it’s matched with some very large counting number way down that list.” But if so, you must be able to tell Cantor what counting number that is. Perhaps, Cantor’s new number is matched with the counting number \(10^9\). But in that case, by the way that Cantor constructed his new number, it would differ from that number in the billionth digit. That’s a problem because we asserted that we had a bijection between the counting numbers and the real numbers between 0 and 1, and therefore every real number in that range, including Cantor’s new number, must appear in that bijection.

So, the conclusion is that no matter what bijection we try to establish between the counting numbers and the real numbers between 0 and 1, Cantor can always use his diagonalization method (changing the digits along the diagonal) to show that our putative matching fails to be a bijection. In other words, no bijection exists between the counting numbers and the real numbers between 0 and 1 and we have shown that the real numbers are not countably infinite!
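
Although the real argument involves infinitely many digits, the diagonal construction itself is mechanical, and we can sketch it in Python on a finite prefix of a claimed bijection. Here each row of the hypothetical table is given as a string of its digits after the decimal point, and we build a number that differs from row \(n\) in its \(n^{th}\) digit, never writing the digit 9 (for the reason discussed in the fine point below):

def diagonalNumber(table):
    """Given the first rows of a claimed bijection (each a string of digits),
    build a number that differs from the nth row in its nth digit."""
    newDigits = []
    for n, row in enumerate(table):
        digit = row[n]
        # Pick any digit different from this one, and never use 9.
        newDigits.append('2' if digit == '1' else '1')
    return '0.' + ''.join(newDigits)

For example, on a four-row prefix of such a table:

>>> diagonalNumber(['5000', '3333', '1415', '7182'])
'0.1121'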

A fine point: Our attorneys have advised us to say one more thing here. Note that some real numbers have two different representations when we use an infinite decimal expansion. For example, \(0.3\) is the same as \(0.2999\dots\). The problem arises only with representations that end in an infinite sequence of 9’s. Every real number whose decimal expansion eventually has an infinite sequence of 9’s can instead be represented by increasing the digit just before the first of those 9’s by one and then replacing all of the 9’s by zeroes. So, it might be that Cantor’s new number in his diagonalization proof is actually not different from all numbers in our bijection table; it just looks different because it has a different representation. The solution to this problem is to always agree to use the “nicer” representation of numbers when we have the choice - that is, never use the representation that has an infinite number of consecutive 9’s. After all, this is just an issue of representing numbers and we can always represent a number that contains an infinite sequence of 9’s with its alternate representation! Now, when Cantor diagonalizes, he must be careful not to construct a number with an infinite sequence of consecutive 9’s, because we decided to avoid that representation. One easy solution to this is to simply have Cantor avoid ever writing the digit 9 in the new number he constructs. In particular, if he was changing the digit 8 in one of the real numbers in the table, rather than letting him change it to 9 (which might lead to an infinite sequence of 9’s), have him change it to any digit other than 8 or 9. This will still give him the contradiction that he seeks, and avoids the legal tangle that our attorneys have been worrying about.


I wonder how much they billed us for that?

7.3.2 Uncomputable Functions

We’ve just seen that the set of real numbers between 0 and 1 is such a large infinity that its elements can’t be matched up perfectly with the counting numbers. But what does this have to do with computer science?

Our goal is to show that there exist problems that are not solvable by any program. We’ll do this using Cantor’s diagonalization method. Here’s the plan: First we’ll show that the set of all programs is countably infinite; that is, there is a bijection between the counting numbers and the set of all programs in your favorite language. (Your “favorite” language is officially Python, but this would work just as well for any programming language.)

Then, we’ll show that the set of problems does not have a bijection with the counting numbers (it’s a larger infinite set), and thus there is no matching between programs and problems.

Let’s start with the set of programs. A program is, after all, nothing more than a string of symbols that we interpret as a sequence of instructions. Here’s our plan: We’ll list out all possible strings in alphabetical order, from short ones to longer ones. Each time we list one out, we’ll check if it’s a valid Python program. If it is, we’ll match it to the next available counting number.

There are 256 different symbols that are used in a standard computer symbol set. So, we’ll first list the 256 strings of length 1. None of them are valid Python programs. Then we’ll move on and generate \(256 \times 256\) strings of length 2. None of these are valid Python programs. Eventually, we’ll come across a valid Python program and we’ll match that with the counting number 1. Then, we’ll march onwards, looking for the next valid Python program. That program will be matched with 2, and so forth.

While this process is laborious, it does show that we can find a bijection between counting numbers and Python programs. Every counting number will be matched to a distinct Python program and every Python program will eventually be generated this way, so it will be matched to some counting number. So, there is a bijection between the set of counting numbers and the set of Python programs. The set of Python programs is, therefore, countably infinite.
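
The laborious matching process described above can itself be sketched in Python. Here we use Python’s built-in compile function as a stand-in test of whether a string parses as syntactically valid Python; that is a looser test than being a meaningful program, but it is enough to convey the idea of listing the strings in order and matching the valid ones with 1, 2, 3, and so on:

from itertools import product

def enumeratePrograms(symbols):
    """Generate all strings over `symbols`, shortest first, yielding each
    syntactically valid Python program paired with the next counting number."""
    count = 0
    length = 1
    while True:                                   # lengths 1, 2, 3, ...
        for chars in product(symbols, repeat=length):
            candidate = ''.join(chars)
            try:
                compile(candidate, '<string>', 'exec')    # does it parse?
            except (SyntaxError, ValueError):
                continue
            count += 1
            yield count, candidate
        length += 1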

Now let’s count the number of problems. The problem with problems is that there are so many different kinds of them! So, to make our life easier, let’s restrict our attention to a special kind of problem where we take a counting number as input and return a Boolean: True or False. For example, consider the problem of determining if a given counting number is odd. That is, we’d like a function called odd that takes any counting number as input and returns True if it’s odd and False otherwise. That’s an easy function to write in Python. Here it is:

def odd(X):
        return X % 2 == 1

A function that takes any counting number as input and returns a Boolean is called a counting number predicate. Counting number predicates are obviously a very, very special kind of problem.

It will be convenient to represent counting number predicates as infinite strings of “T” (for True) and “F” (for False) symbols. We list all the possible inputs to the predicate along the top and, for each such input, we put a “T” if the predicate should return True for this input and an “F” otherwise. For example, the odd predicate would be represented as shown in Table 7.2.


Table 7.2: Representing the odd counting number predicate as an infinite list of “T” and “F” symbols, one for each possible input to the predicate.

Table 7.3 shows a list of several counting number predicates. The first one is the predicate for determining if its input is odd. The next one is the predicate that determines if its input is even, the next one is for primes, etc. Incidentally, all of these predicates have relatively simple Python functions.


Table 7.3: A few sample counting number predicates and the values they return for small non-negative integers.
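
For instance, here is a sketch of a primality predicate like the one in the table. It is far from the most efficient test, but it is a perfectly good Python function that returns a Boolean for any counting number:

def prime(X):
    """Return True if the counting number X is prime and False otherwise."""
    if X < 2:
        return False
    for divisor in range(2, X):
        if X % divisor == 0:
            return False
    return True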

Now, suppose you claim to have a bijection between the counting numbers and counting number predicates. That is, you have a listing that purportedly perfectly matches the counting numbers to the counting number predicates. Such a bijection might look something like Table 7.4. In this table each row shows a counting number on the left and the predicate with which it is matched on the right. In this example, the counting number 1 is matched to the odd predicate, the counting number 2 is matched to the even predicate, etc. However, this is just an example! Our plan is to show that no matter how we try to attempt to match counting numbers to predicates, the attempt will necessarily fail.


Table 7.4: Applying Cantor diagonalization to the counting number predicates. The table shows an attempted bijection of counting numbers to predicates. Cantor diagonalization is used to construct a predicate that is not in this list.

Once again Cantor reappears to foil your plan. “Aha!” he says. “I can diagonalize your predicates and create one that’s not on your list.”

And sure enough, he does exactly that, by defining a predicate that returns values different from everything you listed. How? For the predicate matched to the counting number 1, he makes sure that if that predicate said “T” to 1 then his predicate would say “F” to 1 (and if it said “F” to 1 his predicate would say “T” to 1). So his predicate definitely differs from the predicate matched to the counting number 1. Next, his predicate is designed to differ from the predicate matched to the counting number 2 by ensuring that it says the opposite of that predicate on input 2, and so forth down the diagonal! In this way, his new predicate is different from every predicate in the list. This diagonalization process is illustrated in Table 7.4.

So, Cantor has established that there is at least one predicate that’s not on your list and thus the counting number predicates must be uncountably infinite, just like the real numbers.

What have we shown here? We’ve shown that there are more counting number predicates than counting numbers. Moreover, we showed earlier that the number of programs is equal to the number of counting numbers. So, there are more counting number predicates than programs and thus there must be some counting number predicates for which there exist no programs.

Said another way, we’ve shown that there exist some “uncomputable” problems. That’s surprising and amazing, but the key words there are “there exist.” We haven’t actually demonstrated a particular problem that is uncomputable. That’s our next mission.

7.4 An Uncomputable Problem

In the last section we proved that there exist functions that can’t be computed by any program. The crux of the argument was that there are more problems than programs, and thus there must be some problems with no corresponding programs.

That’s surprising and amazing, but it would be nice to see an example of an actual problem that cannot be solved by a program. That’s precisely what we’ll do in this section!

7.4.1 The Halting Problem

At the beginning of this chapter we noted that it would be very useful to have a halt checker program that would take another program (like one that we are working on for a homework assignment) as input and determine whether or not our input program will eventually halt. If the answer is “no”, this would tell us that our program probably has a bug!

To make things just a bit more realistic and interesting, we note that our homework programs usually take some input. For example, here’s a program that we wrote for a homework assignment. (Exactly what the objective of this homework problem was now escapes us!)

def homework1(X):
        if X == "spam":  return homework1(X)
        else: return "That was not spam!"

Notice that this program runs forever if the input is 'spam' and otherwise it halts. (It returns a string and it’s done!)

So, imagine that we now want a function called haltChecker that takes two inputs: A string containing a program, P, and a string, S, containing the input that we would like to give that program. The haltChecker then returns True if program P would eventually halt if run on input string S and it returns False otherwise.


Evidently, you can buy just about anything on the Internet.

We’d really like to have this haltChecker. We searched on the Internet and found one for sale! Here’s what the advertisement says: “Hey programmers! Have you written programs that seem to run forever when you know that they should halt? Don’t waste your precious time letting your programs run forever anymore! Our new haltChecker takes any program P and any string S as input and returns True or False indicating whether or not program P will halt when run on input string S. It’s only $99.95 and comes in a handsome faux-leather gift box!”

“Don’t waste your money!”–one of our friends advised us. “We could write the haltChecker ourselves by simply having it run the program P on string S and see what happens!” That’s tempting, but do you see the problem? The haltChecker must always return True or False. If we let it simply run program P on input string S, and it happens that P runs forever on S, then the haltChecker also runs forever and doesn’t work as promised. So, a haltChecker would probably need to somehow examine the contents of the program P and the string S, perform some kind of analysis, and determine whether P will halt on input S.

But it’s true that we shouldn’t waste our money on this product because we’re about to show that the haltChecker cannot exist. Before doing that, however, our lawyers have advised us to point out that it is possible to write a haltChecker that works correctly for some programs and their input strings, because certain kinds of infinite loops are quite easy to detect. But that is not good enough! We want one that is absolutely positively 100% reliable. It must render a decision, True or False, for any program P and input S that we give it.

A hypothetical haltChecker would need to take two inputs: a program and a string. For simplicity, let’s assume that the program is actually given as a string. That is, it’s Python code in quotation marks. The haltChecker will interpret this string as a program and analyze it (somehow!). So, we might do something like this with our hypothetical haltChecker:

>>> P = '''def homework1(X):
        if X == "spam":  return homework1(X)
        else: return "That was not spam!"'''
>>> S = 'spam'
>>> haltChecker(P, S)
False
>>> haltChecker(P, 'chocolate')
True

We will show that a haltChecker cannot exist using a proof by contradiction again. This is a general and powerful technique for showing that something can’t exist.

Here’s a metaphorical sketch of what we’re about to do. Let’s imagine that someone approaches you and tells you, “Hey, I’ve got this lovely magic necklace with a crystal oracle attached to it. If you ask it any question about the wearer of the necklace that can be answered with True or False, it will always answer the question correctly. I’ll sell it to you for a mere $99.95.”

It sounds tempting, but such a necklace cannot exist. To see this, assume by way of contradiction that the necklace does exist. You can force this necklace to “lie” as follows: First you put the necklace on. Then, your friend asks the oracle necklace: “Will my friend accept this handful of jelly beans?” If the oracle necklace says True then your friend can ask you “Would you like this handful of jelly beans?” You now answer False and the oracle has been caught giving the wrong answer! Similarly, if the oracle necklace says False then, when your friend asks you if you would like the jelly beans, you say True and you have again caught the oracle in a lie. So, we are forced to conclude that an always accurate oracle necklace cannot exist.

Necklaces? Oracles? Jelly Beans? “You professors need a vacation,” we hear you say. Thank you for that, it is indeed almost vacation time, but let’s see first how this necklace metaphor bears on the haltChecker.

OK then. Here we go! Assume by way of contradiction that there exists a haltChecker. Then, we’ll place that haltChecker function in a file that also contains the program paradox below:

def paradox(P):
        if haltChecker(P, P):
                while True:
                        print('Look! I am in an infinite loop!')
        else: return

Take a close look at what’s happening here. This paradox program takes a single string P as input. Next, it gives that string as both inputs to the haltChecker. Is that OK?! Sure! The haltChecker takes two strings as input, the first one is interpreted as a Python program and the second one is any old string. So, let’s just use P as both the program string for the first input and also as the string for the second input.

Now, let’s place the code for the paradox function in a string and name that string P like this:

>>> P = '''def paradox(P):
        if haltChecker(P, P):
                while True:
                        print('Look! I am in an infinite loop!')
        else: return'''

Finally, let’s give that string P as the input to our new paradox function:

>>> paradox(P)

What we’re doing here is analogous to you foiling the hypothetical oracle necklace; in this case the hypothetical oracle is the haltChecker and you are the invocation paradox(P).

Let’s analyze this carefully to see why. First, notice that this paradox program either enters an infinite loop (the while True statement loops forever, printing the string "Look! I am in an infinite loop!" each time through the loop) or it returns, thereby halting.

Next, notice that when we run the paradox function on the input string P, we are actually running the program paradox on the input string containing the program paradox. It then calls haltChecker(P, P) which must return True or False. If haltChecker(P, P) returns True, it is saying “It is true that the program P that you gave me will eventually halt when run on input string P .” In this case, it is actually saying, “It is true that the program paradox that you gave me will eventually halt when run on the input program paradox .” At that point, the paradox program enters an infinite loop. In other words, if the haltChecker says that the program paradox will eventually halt when run on the input paradox, then the program paradox actually runs forever when given input paradox. That’s a contradiction; or perhaps we should say, that’s a paradox!

Conversely, imagine that haltChecker(P, P) returns False. That is, it is saying, “The program P that you gave me will not halt on input P .” But in this case, P is paradox, so it’s actually saying “The program paradox will not halt on input paradox .” But in that case, we’ve designed the paradox program to halt. Here again, we have a contradiction.

We could easily write the paradox program if we only had a haltChecker. However, we just saw that this leads to a contradiction. Thus, we’re forced to conclude that our one assumption – namely that a haltChecker exists – must be false. In the same way, we concluded that an oracle necklace cannot exist because if it did then we could devise a logical trap just as we have done here!

7.5 Conclusion

In this chapter we’ve taken a whirlwind tour of two major areas of computer science: complexity theory and computability theory. Complexity theory is the study of the “easy” and “hard” problems - problems for which algorithmic solutions exist but the running time (or memory or other resources) may vary from modest to outrageous. In fact, complexity theory allows us to classify problems into more than just the two categories “easy” and “hard”; it sorts them into many finer categories that ultimately provide deep and surprising insights into what makes some problems solvable by efficient algorithms while others require vastly more time (or memory).


Thanks for reading!

Computability theory explores those problems that are “impossible” to solve, such as the halting problem and many others. In fact, computability theory provides us with some very powerful and general theorems that allow us to quickly identify what problems are uncomputable. Among these theorems is one that says that virtually any property that we would like to test about the behavior of a program is uncomputable. In the case of the halting problem, the behavior that we sought to test was halting. Another behavior we might be interested in testing is that of having a virus that writes values into a certain part of your computer’s memory. Results from computability theory tell us that testing an input program for that property is also uncomputable. So, if your boss ever tells you, “your next project is to write a completely reliable virus detector,” you might want to read a bit more about computability theory so that you can show your boss that it’s not just hard to do that, it’s downright impossible.