We have seen that Turing machines are more powerful than push-down automata, which are more powerful than finite automata.

Is there some type of automaton that is, in turn, more powerful than a Turing machine?

The answer is no. Anything that is computable according to any algorithm whatsoever can be computed with a Turing machine.

Likewise, anything that is *not* computable with a Turing machine is
not computable with any algorithm on any reasonable model of computation we
might create.

Alonzo Church was an American mathematician working in computability and logic. In 1936 Church published a paper showing that the Entscheidungsproblem (German for "decision problem") is unsolvable - effectively showing that there is no algorithm which can decide whether an arbitrary statement of logic is provable.

Alan Turing published a paper in 1936 (independently of Church) which showed exactly the same thing. In constructing his proof, Church created a computational system called the lambda calculus; Turing used his Turing machines.

Church served as Turing's Ph.D. advisor for the next few years. During that time, the lambda calculus and Turing machines were proved to be equivalent models - anything that can be computed in one can be computed in the other. The claim that these equivalent models capture everything that can be computed by any algorithm at all is called the "Church-Turing Thesis".

Over the years, the interpretation of the thesis has expanded to include
all algorithmic systems of any kind. Any system of solving problems with
algorithms can either solve *fewer* problems than a Turing machine (like
a DFA), or the system is exactly equivalent to a Turing machine.

The Church-Turing thesis tells us that our intuitive notion of algorithms is equivalent to algorithms that can be expressed by a Turing machine. Anything that can be expressed in any programming language, or in pseudo-code, can be expressed with a simple Turing machine.

The lambda calculus is quite different from Turing's machines. It is based on functions, written in the form:

$\lambda\; parameter\; .\; body$

For instance, a function which adds one to its input could be written as:

$\lambda\; x\; .\; x + 1$

Functions always take a single parameter. If we need more, we can simulate multiple parameters with a function that returns another function (a technique called *currying*). For example, addition can be expressed as:

$\lambda\; x\; .\; (\lambda\; y\; .\; x\; +\; y)$

For convenience we allow the use of multiple parameters. The above could also be written as:

$\lambda\; x\; y\; .\; x + y$

We also allow ourselves the ability to name functions:

ADD := $\lambda\; x\; y\; .\; x + y$

Function application is written by placing the arguments after the function: $\text{ADD}\; 2\; 3$ applies ADD to 2 and 3, producing 5.
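Python's own lambdas can illustrate this curried style. The sketch below is illustrative Python, not lambda calculus itself; application here needs explicit parentheses, one argument at a time:

```python
# A curried ADD, mirroring the lambda-calculus definition above:
# a function of x that returns a function of y.
ADD = lambda x: lambda y: x + y

# Applying ADD to 2 yields a function; applying that to 3 yields 5.
print(ADD(2)(3))  # prints 5
```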

Numbers in lambda calculus are created using *Church numerals*.
Each number is a function with two parameters: $s$ (successor) and $z$ (zero):

ZERO := $\lambda\; s\; z\; .\; z$

ONE := $\lambda\; s\; z\; .\; s\; z$

TWO := $\lambda\; s\; z\; .\; s\; (s\; z)$

THREE := $\lambda\; s\; z\; .\; s\; (s\; (s\; (z))$

Arithmetic and logic can be built up from these simple terms.
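Church numerals can be modeled directly with Python lambdas. The encodings of SUCC and ADD below are the standard ones, sketched here for illustration; `to_int` is a helper added just to decode the result:

```python
# Church numerals: the number n is a function applying s to z, n times.
ZERO = lambda s: lambda z: z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))
ADD  = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

def to_int(n):
    # Decode a Church numeral by applying "add one" to 0.
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))  # prints 5
```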

Looping is handled with recursion. Strictly speaking, lambda calculus expressions cannot refer to themselves by name, so recursion is achieved with a *fixed-point combinator*; allowing names for readability, the following expression might be used to calculate factorials:

$\lambda\; x\; .\; (\text{IF-THEN-ELSE}\; (\text{LESS}\; x\; \text{ONE})\; \text{ONE}\; (\text{FACT}\; (\text{SUB}\; x\; \text{ONE})))$
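Since a pure lambda expression cannot name itself, a definition like FACT above is really shorthand for a fixed-point combinator. In a strict language like Python, the Z combinator (a call-by-value variant of the famous Y combinator) plays this role; the sketch below uses Python's built-in numbers and `if` in place of the Church encodings:

```python
# Z combinator: a fixed-point combinator that works under strict evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial without naming itself: "rec" stands in for the recursive call.
FACT = Z(lambda rec: lambda x: 1 if x < 1 else x * rec(x - 1))
print(FACT(5))  # prints 120
```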

Because the lambda calculus is so different from Turing machines, it is
telling that they turn out to be equivalent. Lambda calculus serves as the
theoretical underpinning for *functional* programming languages, such as
Lisp, ML and Haskell. Turing machines serve as the basis for *imperative*
programming languages, such as C, Python and Java (though many languages
include the ability to create lambda functions).

A Turing machine is a simple computer model which can be designed to solve one specific problem. Early electronic computers were designed in a similar way: each machine performed one fixed computation on different inputs.

Another result from Turing's 1936 paper on the Entscheidungsproblem is the
*Universal* Turing machine.

A universal Turing machine is one which can compute *any*
computable function. How is this possible?

The answer is to have the universal Turing machine read not only its input string off the tape, but also a description of the Turing machine to simulate. This relies on encoding Turing machines as strings.

The universal Turing machine then simulates the machine it takes as input, running it on the given input string, and so can compute anything any specific Turing machine can compute.
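The same idea can be sketched in a few lines of Python: the simulator treats the machine's transition table as data, alongside the input string. The `FLIP` machine below, which swaps 0s and 1s, is a made-up example for demonstration:

```python
# A sketch of a universal simulator: the "program" is just data.
# delta maps (state, symbol) -> (new state, symbol to write, move L or R).
def simulate(delta, start, accept, tape_str, blank='_'):
    tape = dict(enumerate(tape_str))
    state, head = start, 0
    while state != accept:
        symbol = tape.get(head, blank)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A made-up example machine: flip every 0 to 1 and vice versa, then halt.
FLIP = {('q0', '0'): ('q0', '1', 'R'),
        ('q0', '1'): ('q0', '0', 'R'),
        ('q0', '_'): ('halt', '_', 'R')}
print(simulate(FLIP, 'q0', 'halt', '0110'))  # prints 1001
```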

This idea led to the development of the "stored program computer" which was not designed to solve a specific problem but to run any arbitrary program which was encoded as part of the computer's input.

Any algorithmic system in which you can build a universal Turing machine is equivalent in power to a Turing machine - because you could simply simulate that Turing machine on the universal one. This is called "Turing completeness".

Unsurprisingly, all general-purpose programming languages are Turing-complete.

The bar for Turing-completeness is not terribly high. Almost every feature of
most programming languages is for convenience. The list of things you really
*need* to solve problems is remarkably short:

- Unlimited memory.
- Unrestricted looping.
- Some way to make decisions.

One programming language designed to sit very close to this bar is Brainfuck, abbreviated as BF hereafter. BF is quite similar to a Turing machine: it keeps a large tape of byte-sized cells, with a tape head that can move left or right one cell at a time, and the byte under the head can be incremented or decremented. BF allows only eight symbols:

| Symbol | Action |
|--------|--------|
| `>` | Move the tape head right. |
| `<` | Move the tape head left. |
| `+` | Increment the byte under the tape head. |
| `-` | Decrement the byte under the tape head. |
| `.` | Output the byte under the tape head. |
| `,` | Read one byte of input, storing it under the tape head. |
| `[` | If the byte under the tape head is zero, branch forward to the command after the matching `]`. |
| `]` | If the byte under the tape head is nonzero, branch back to the command after the matching `[`. |

BF programs are of course incredibly tedious to create. The language does serve to show how low the bar is for Turing-completeness.
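A complete BF interpreter fits comfortably in a page of Python. This is a minimal sketch covering all eight commands; the sample program is a common BF idiom that computes 8 × 8 + 1 = 65 and prints it, producing the letter A:

```python
def run_bf(code, input_bytes=b''):
    # Precompute matching bracket positions for [ and ].
    stack, match = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            match[i], match[j] = j, i
    tape, head, pc, out = [0] * 30000, 0, 0, []
    inp = iter(input_bytes)
    while pc < len(code):
        c = code[pc]
        if c == '>':   head += 1
        elif c == '<': head -= 1
        elif c == '+': tape[head] = (tape[head] + 1) % 256
        elif c == '-': tape[head] = (tape[head] - 1) % 256
        elif c == '.': out.append(tape[head])
        elif c == ',': tape[head] = next(inp, 0)
        elif c == '[' and tape[head] == 0:
            pc = match[pc]       # jump past the loop
        elif c == ']' and tape[head] != 0:
            pc = match[pc]       # jump back to the loop start
        pc += 1
    return bytes(out)

print(run_bf('++++++++[>++++++++<-]>+.'))  # prints b'A'
```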

Because the requirements of Turing-completeness are fairly basic, some systems which were not designed as programming languages are actually Turing-complete as well:

- C++ Template Expansion:

  ```cpp
  /* the recursive case */
  template <int N> struct Factorial {
      enum { value = Factorial<N-1>::value * N };
  };

  /* the base case */
  template <> struct Factorial<0> {
      enum { value = 1 };
  };

  int main() {
      int x = Factorial<5>::value;
  }
  ```

- LaTeX macros
- Powerpoint
- Minecraft
- Magic: The Gathering
- Conway's Game of Life
- ...and more.

Copyright © 2019 Ian Finlayson | Licensed under a Creative Commons Attribution 4.0 International License.