Lecture 22

Theory of Computation

A problem is solvable by a computer if a program can be written to solve instances of the problem.

As you know, this program must be based on an algorithm. When talking about the solvability of problems we want to use the idea of algorithms rather than programs, since we want to simplify the situation as much as possible. We do not want to make things more difficult by including issues such as which hardware or programming language to use.

The idea of an algorithm that we have used up to now has not been formalized. We have seen examples of algorithms such as quicksort and binary search but we have not even defined what an algorithm is.

The idea of an algorithm has existed for thousands of years; the ancient Greeks discovered algorithms for finding prime numbers and greatest common divisors. For many years people also thought that every problem could be solved (given enough effort) by an algorithm.

The German mathematician David Hilbert said:

"In mathematics there is nothing that cannot be known."

In 1928, he proposed a problem called the Entscheidungsproblem (the "decision problem"): find an algorithm which, given any statement in a formal system, would determine whether it is true or false.

In the 1930's, research showed that no such algorithm exists. Kurt Gödel, Alonzo Church, Stephen Kleene and Alan Turing all found problems that have no algorithmic solution.

The Church-Turing Thesis.

This is the thesis (i.e. it has not been proved true or false) that all computers are equally powerful as problem solvers. Obviously, some computers will be able to solve problems faster than others, but this is not the point. The Church-Turing thesis says that, given sufficient resources (time and memory), there is nothing that one computer can do that any other cannot (eventually). The RISC machine from assignment 2 can do anything your Pentium can. Since nobody has ever found a case that proves this thesis wrong, we will assume that it is true. It also seems reasonable, because we can use one computer to emulate another.

Assuming the truth of the Church-Turing thesis, it will be useful to look at the simplest machine that is still capable of computation.

Combinational logic cannot perform computation; it is just a mapping from an input to an output. Sequential logic, on the other hand, can perform a limited form of computation. If we combine a sequential logic device with an external memory, then we have a Turing machine.

Turing Machines

We assume that instances of problems and solutions to instances can be encoded into sequences of 0's and 1's.

A Turing machine is a very simple device. It has a control unit which can be in any of a finite number of states. These states are given and fixed for any specific machine, i.e. the control unit is a sequential logic device. The machine also has an input and output tape (all the books call this a tape but in reality it is a toilet roll). The 'tape' is capable of holding any finite length sequence of 0's, 1's and any number of blank symbols, and has a read/write head to read and write symbols.

The tape is divided into place-holders; each place-holder can store one symbol.

We assume the symbols to be 0, 1, and the blank symbol. This machine performs computations as follows: the machine always starts in a particular state that we call S0, i.e., the state of the machine's control is S0 at the beginning of any computation.

The instance of the problem, encoded into a finite sequence of 0's and 1's, appears on the tape and the read/write head is positioned on a symbol of the input.

The actions of the machine are determined by a table. Given the state of the control unit, and the symbol currently under the input/output head, the table says to which state the control of the machine must change.

The table also says what symbol overwrites the currently read symbol and the movement of the input/output tape head: left, right or no move.

This diagram shows the components of a Turing Machine.


There will be a set of states, called accepting states, that will halt the machine. The machine produces an output when it halts.
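The machine just described, a finite control table driving a read/write head over a tape until an accepting state is reached, can be sketched in a few lines of Python. The encoding below (rules as a dictionary, the tape as a dict from position to symbol, '*' for the blank symbol) is an illustrative choice of ours, not notation from any standard.

```python
def run_tm(rules, tape, state, head, halt_states):
    """Run a Turing machine until its control enters a halting state.

    rules maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is 'L', 'R' or 'N' (no move).  The tape is a dict from
    integer position to symbol; cells never written read as the blank '*'.
    """
    while state not in halt_states:
        write, move, state = rules[(state, tape.get(head, '*'))]
        tape[head] = write
        head += {'L': -1, 'R': 1, 'N': 0}[move]
    return tape, head

# A one-rule machine: in state 0 on a blank, write a 1 and halt in state 1.
tape, head = run_tm({(0, '*'): ('1', 'N', 1)}, {}, 0, 0, {1})
print(tape)  # {0: '1'}
```

Note that if the control never enters a halting state, this loop runs forever; nothing in the simulation guarantees termination.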

Definition 1 An algorithm is a Turing Machine that always halts no matter what input is presented to it.

Since we are willing to accept machines whose computations may not terminate, such machines are not algorithms (they compute partial functions), but we will see that our arguments apply even to these more general machines.

Let us define problems that are solvable by these more general machines.

Definition 2 A problem P is computable if there is a TM that produces, for any instance of the problem, its solution, if one exists; otherwise the TM runs forever.

Example: add one to a binary number

The following TM adds one to a binary number. Assume the TM has the following state table, in which * denotes the blank symbol. Note that the head must start on a blank cell to the right of the number, since state 0 has a rule only for the blank symbol.

Current state   Cell contents   Value to write   Move      New state
0               *               *                L         1
1               0               1                L         3
1               1               0                L         2
1               *               *                R         6
2               0               1                L         3
2               1               0                L         2
2               *               1                L         4
3               0               0                L         3
3               1               1                L         3
3               *               *                R         5
4               0               *                R         5
4               1               *                R         5
4               *               *                R         5
5               0               0                R         5
5               1               1                R         5
5               *               *                no move   6
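Transcribed into a dictionary, one entry per table row, the machine can be executed by a small Python simulator (the dict-of-rules encoding and '*' for blank are our own illustrative conventions). The head starts on the blank cell just to the right of the number.

```python
def run_tm(rules, tape, state, head, halt_states):
    # rules: (state, symbol) -> (write, move, next_state); blank is '*'.
    while state not in halt_states:
        write, move, state = rules[(state, tape.get(head, '*'))]
        tape[head] = write
        head += {'L': -1, 'R': 1, 'N': 0}[move]

# The state table from the notes, one dictionary entry per row.
rules = {
    (0, '*'): ('*', 'L', 1),
    (1, '0'): ('1', 'L', 3), (1, '1'): ('0', 'L', 2), (1, '*'): ('*', 'R', 6),
    (2, '0'): ('1', 'L', 3), (2, '1'): ('0', 'L', 2), (2, '*'): ('1', 'L', 4),
    (3, '0'): ('0', 'L', 3), (3, '1'): ('1', 'L', 3), (3, '*'): ('*', 'R', 5),
    (4, '0'): ('*', 'R', 5), (4, '1'): ('*', 'R', 5), (4, '*'): ('*', 'R', 5),
    (5, '0'): ('0', 'R', 5), (5, '1'): ('1', 'R', 5), (5, '*'): ('*', 'N', 6),
}

def add_one(bits):
    # Lay the number out on the tape and start on the blank to its right.
    tape = dict(enumerate(bits))
    run_tm(rules, tape, state=0, head=len(bits), halt_states={6})
    return ''.join(sym for _, sym in sorted(tape.items())).strip('*')

print(add_one('101'))  # 5 + 1 -> '110'
print(add_one('11'))   # 3 + 1 -> '100' (the carry grows the number by one bit)
```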

Example of a Turing Machine: add two numbers

The numbers are represented in unary; that is to say, a number n is represented by n consecutive 1's. For instance, the number 5 is represented as 11111. The numbers to be added appear on the tape from left to right as follows: a 0, followed by n 1's, followed by a 0, then m 1's, followed by a 0. So the instance of the problem n=5 and m=3 would appear on the tape as 01111101110. The solution of the instance appears on the tape surrounded by 0's. For this instance the solution is 0111111110.

The machine is fully specified by giving each of its components; these are listed below. In general, a machine M is a seven-tuple of the form (Q, q0, Σ, Γ, A, b, δ), representing the set of states of the machine, the initial state, the set of input symbols, the set of output symbols, the set of accepting states, the blank symbol and the transition function or program, respectively.

For the addition problem, the set of states is {1, 2, 3, 4, 5}, the initial state is 1, the input symbols are {0, 1}, the output symbols are {0, 1, b} (where b is the blank), the only accepting state is 5, and the transition function is the one traced in the walkthrough below.

Let us see how this Turing Machine solves the problem 01111101110. The TM starts in state 1 and scans the first symbol, a 0.

The program tells it to write a 0 (thus leaving the symbol in that place-holder unchanged), move to the nearest symbol to the right, and change to state 2. Note that there are two choices each for states 2 and 3; the one to be executed depends on the symbol that is scanned.

In this case, we have a 1, so we write the 1 again (leaving that symbol unchanged) and move right without changing state. We continue in this fashion until reaching the 0 that separates the unary representations of 5 and 3.

Now we are past the first number, and we know the number to be added to it will follow. We move right, change to a new state, 3, and read a 1. Now the "addition" occurs: we write a 0 over the 1, move left, go to state 4, read a 0, write a 1 over it, and return to state 2.

The program continues to "loop" in this manner until we have 8 ones in a row on our tape with 2 zeroes at the end. Reading the first 0 (we are in state 2), we move right one place, change state to 3, and read another 0. The second unary number is gone, and we have completed the addition, so we write a blank over the 0, move left, and go to state 5, in which we stop.

The final answer, now on the tape, is a unary representation of 8 with a 0 at each end.
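The notes do not print this machine's transition table, but it can be recovered from the walkthrough above. The Python sketch below reconstructs it (again in our illustrative dictionary encoding, with '*' for blank); the states, writes and moves are read off the trace, so treat it as a reconstruction rather than the original program.

```python
def run_tm(rules, tape, state, head, halt_states):
    # rules: (state, symbol) -> (write, move, next_state); blank is '*'.
    while state not in halt_states:
        write, move, state = rules[(state, tape.get(head, '*'))]
        tape[head] = write
        head += {'L': -1, 'R': 1}[move]

# Transition function reconstructed from the walkthrough; 5 is the halt state.
rules = {
    (1, '0'): ('0', 'R', 2),  # step over the leading 0
    (2, '1'): ('1', 'R', 2),  # scan right across 1's
    (2, '0'): ('0', 'R', 3),  # step past a separating 0
    (3, '1'): ('0', 'L', 4),  # erase a 1 from the second number...
    (4, '0'): ('1', 'R', 2),  # ...and grow the first number by one
    (3, '0'): ('*', 'L', 5),  # second number exhausted: blank the 0, halt
}

def add_unary(n, m):
    tape = dict(enumerate('0' + '1' * n + '0' + '1' * m + '0'))
    run_tm(rules, tape, state=1, head=0, halt_states={5})
    return ''.join(sym for _, sym in sorted(tape.items())).strip('*')

print(add_unary(5, 3))  # -> '0111111110', i.e. 8 in unary with a 0 at each end
```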