Uncomputable Numbers

The Turing Essays

Real numbers we can never know the value of

We all remember learning that the decimals of pi, 3.14159265359…, go on forever. Some of us even recall learning that you can approximate upper and lower bounds on the value of pi to as high a degree of precision as you want by measuring the sides of polygons. As the number of sides of the polygons approaches infinity and the length of their sides approaches zero, the approximation gets closer.

[Figure: Approximating upper and lower bounds for the value of pi using pentagons (left), hexagons (middle) and octagons (right).]

Archimedes (c. 287–212 BC) invented this early "polygonal algorithm", which dominated for over 1,000 years as the most efficient way of computing pi to any desired precision. This primitive, geometric algorithm serves to estimate a real number, pi, whose value must be computed, as it is not expressible as a rational number (a fraction/ratio of integers).
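
To make the procedure concrete, here is a minimal Python sketch of the polygon-doubling idea, using the classical perimeter recurrences for circumscribed and inscribed polygons (the starting hexagon values and the function name are my own choices, not Archimedes' notation):

```python
from math import sqrt

def archimedes_pi(doublings=20):
    """Archimedes-style polygon bounds on pi: a = perimeter of the
    circumscribed polygon, b = perimeter of the inscribed polygon."""
    a, b = 4 * sqrt(3), 6.0      # circumscribed / inscribed hexagons, unit circle
    for _ in range(doublings):
        a = 2 * a * b / (a + b)  # circumscribed 2n-gon (harmonic mean)
        b = sqrt(a * b)          # inscribed 2n-gon (geometric mean of new a and b)
    return b / 2, a / 2          # lower and upper bounds on pi

lo, hi = archimedes_pi()
print(f"{lo:.12f} < pi < {hi:.12f}")
```

Starting from hexagons (where Archimedes started) and doubling twenty times, the two bounds agree to roughly twelve decimal places.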

The modern study of computability began around 1900 with the announcement of Hilbert's "finitist" program to axiomatize the foundations of mathematics. This came after Hilbert himself had shown in 1899 that the consistency of geometry reduced to that of the real numbers, which in turn reduced to the arithmetic results of Dedekind (Soare, 2013). Hilbert's program (1904), hence, was established to try to similarly exploit the finiteness of mathematical proofs to show that contradictions could not be derived. If such a foundation could be established, mathematical proofs would be expressible completely in terms of symbols of logic, such as those contained in the Principia Mathematica (Whitehead & Russell, 1910; 1912; 1913). If achieved, all proofs, including those of Fermat's Last Theorem, the Riemann Hypothesis and the Poincaré Conjecture, would be inevitable consequences of the mathematical system they are expressed in. If true, one could build a sufficiently capable machine, let it run for long enough, and eventually all mathematical truths would be revealed.

Famously, not 30 years later, Austrian logician Kurt Gödel (1906–1978) published a paper, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I ("On Formally Undecidable Propositions of Principia Mathematica and Related Systems I"), in which his "Theorem VI" shattered Hilbert's dream of a fully axiomatized foundation for mathematics. Gödel showed, not one year after finishing his Ph.D., that such a system, no matter how robust, would always be inherently incomplete. There will always be statements expressible in the system which are unprovable given its axioms:

First incompleteness theorem (Gödel, 1931a)
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

The paper effectively ended Hilbert's finitist program of 1904. As Soare (2013) recounts, John von Neumann (1903–1957) happened to be in the audience as a representative of Hilbert's program when Gödel, then 25 years old, took the podium to present his result. von Neumann, considered by some to be the smartest person ever to live, immediately recognized that Hilbert's program was over.

von Neumann spent the next weeks preparing the proof of a related theorem, based on the arithmetization of Gödel's incompleteness proof, to show that not only are formal systems incapable of proving every statement expressible in them, they are also unable to guarantee proofs of their own consistency. When he later presented his proof to Gödel, the great man reportedly thanked him politely and informed him that he (Gödel) had written the same proof weeks earlier, and that it had already been submitted for publication. The theorem states:

Second incompleteness theorem (Gödel, 1931b)
No consistent axiomatic system which includes Peano arithmetic can prove its own consistency.

Stated more colloquially, any formal system "interesting enough" to formulate its own consistency can prove that consistency only if it is inconsistent. The theorem also demonstrated an inherent flaw in a second Hilbert program, initiated in 1918 and known as the Entscheidungsproblem ("decision problem"): the question of whether it is possible to provide a decision procedure that "allows one to decide the validity of a sentence" (Soare, 2013).

And so mathematicians in the 1930s were once again faced with the practical implications of the imperfections of mathematics, first demonstrated by Georg Cantor in the 1880s.

This is where our story starts.

Definitions

Archimedes' constant (π), along with other well-known numbers such as Pythagoras' constant (√2) and the golden ratio (φ), are all examples of a type of real number we call computable, despite also being irrational (real numbers which cannot be constructed from fractions of integers). Such computable numbers may be defined as:

Real numbers that can be computed to within any desired precision by a finite, terminating algorithm.

In other words, we define a real number as computable if there is an algorithm which, given n, returns the first n digits of the number. Informally, we can think of a computable number as Marvin Minsky did, namely as "a number for which there is a Turing machine which, given n on its initial tape, terminates with the nth digit of that number [encoded on its tape]".

Formally, a real number a is computable if it can be approximated by some computable function f from the natural numbers ℕ to the integers ℤ which, given any positive integer n, produces an integer f(n) such that:

$$\frac{f(n)-1}{n} \leq a \leq \frac{f(n)+1}{n}$$

Modern definition of a computable real number a, given positive integers n and f(n)

An equivalent definition of computable numbers is that there exists a function which, given any positive rational error bound ε, produces a rational number r such that the absolute value |r − a| is less than or equal to the error bound ε.
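
As a concrete illustration of this second definition, here is a minimal Python sketch (a toy example of my own, not from the essay) that witnesses the computability of √2: given any rational ε > 0 it returns a rational r with |r − √2| ≤ ε, using interval bisection with exact rational arithmetic:

```python
from fractions import Fraction

def sqrt2_approx(eps: Fraction) -> Fraction:
    """Return a rational r with |r - sqrt(2)| <= eps, by bisection.
    Maintains the invariant lo**2 < 2 < hi**2 throughout."""
    lo, hi = Fraction(1), Fraction(2)     # sqrt(2) lies in [1, 2]
    while hi - lo > 2 * eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2                  # midpoint is within eps of sqrt(2)

print(sqrt2_approx(Fraction(1, 10**6)))   # a rational within 10^-6 of sqrt(2)
```

Any real number admitting such an approximation procedure is computable; the uncomputable numbers discussed below are precisely those for which no such procedure can exist.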

History

Just as Georg Cantor had spent the last quarter of the 19th century studying the uncountable sets that arose from his invention of the diagonal argument, researchers at Princeton University would spend the 1930s studying the implications of uncomputable numbers (Soare, 2013). In particular, history highlights the contributions of two men: Alonzo Church and Alan Turing.

[Photo: Left: Alonzo Church. Right: Alan Turing (Photos: Princeton University)]

Alonzo Church

Alonzo Church (1903–1995) was an American mathematician and logician who earned his Ph.D. under Oswald Veblen at Princeton University in the 1920s. His first published paper was on Lorentz transformations. His most well-known results include the proof that Hilbert's decision problem (the Entscheidungsproblem) is undecidable (known as Church's theorem), the proof that Peano arithmetic is undecidable, his invention of λ-calculus, and his articulation of what would come to be known as the Church–Turing thesis about the nature of computable functions.

Alan Turing

The now famed Alan Turing (1912–1954) was an English mathematician best known as the driving force behind the successful Allied effort to crack the Axis communication encryption device Enigma. For mathematicians, however, Turing's name is synonymous with genius not only for this applied work but also for his exceptionally visionary work in pure mathematics and logic.

[Photo: Left: Alan Turing (1912–1954). Right: On Computable Numbers, with an Application to the Entscheidungsproblem in the Proceedings of the London Mathematical Society (1936)]

Turing was born in London in 1912, but by the time he was 17 years old was off to King's College, Cambridge to study mathematics. He graduated with first-class honors in 1934 and in the same year was elected a fellow of King's based solely on the strength of his thesis, which proved the Central Limit Theorem. He published his renowned paper "On Computable Numbers, with an Application to the Entscheidungsproblem" in 1936. It appeared in two parts in the Proceedings of the London Mathematical Society on the 30th of November and the 23rd of December 1936. In September of the same year, Turing travelled to Princeton University to study for a Ph.D. in mathematics under Church, which he obtained in 1938. His dissertation, entitled "Systems of Logic Based on Ordinals", introduced ordinal logic and the notion of relative computing, augmenting his previously devised Turing machines with so-called oracle machines and thereby allowing the study of problems that lie beyond the capability of Turing machines. Although von Neumann inquired about hiring him as a postdoctoral research assistant following his Ph.D., Turing declined and instead travelled back to England.

Uncomputability

As Robert I. Soare recounts (Copeland et al., 2013, p. 207), by 1931 logic as applied to computability was a young man's game. Gödel, born in 1906, was 25 years old when he proved the incompleteness of formal systems. Church was 33 when he proved the undecidability of the Entscheidungsproblem with his invention of the concept of "effective calculability" based on λ-calculus. Turing independently proved the same result in the same year with the invention of Turing machines. Born in 1912, he was 24 at the time.

Entscheidungsproblem

The concept of an Entscheidungsproblem, the problem of finding an algorithm that is able to determine the truth or falsity of an input based on a finite set of axioms, dates back to the seventeenth century, when it was first imagined by Leibniz (Davis, 2000). Later, for various formal systems, the problem was also posed by Schröder in 1895, Löwenheim in 1915, Hilbert in 1918 and Behmann in 1921, before Hilbert and Ackermann together published a formal statement in 1928 in Grundzüge der theoretischen Logik ("Principles of Mathematical Logic"):

The Entscheidungsproblem is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability.

Church's solution

Turing's thesis advisor Alonzo Church first began working on the problem in 1933. He presented his proposed solution, featuring what he called effectively calculable functions, to Gödel around March 1934. Supposedly, Gödel rejected Church's approach as "thoroughly unsatisfactory" (Soare, 2013). Based on λ-calculus, Church's first proposition said:

First definition of computability (Church, 1934)
A function is effectively calculable if and only if it is λ-definable.

Church and his graduate student Stephen C. Kleene had been working on λ-definable functions since 1931. By 1934, they had shown that all the usual number-theoretic functions were λ-definable. Yet, as Gödel knew, the primitive recursive functions of his own 1931 paper did not include all computable functions, and so would be an insufficient foundation for claiming the effective computability of all numbers. To account for this, Gödel in the spring of 1934 expanded on a concept of the brilliant Jacques Herbrand (1908–1931) to define the Herbrand–Gödel recursive functions (now known simply as recursive functions), improvements on those defined in Gödel's 1931 paper, which are now called "primitive" recursive functions. After attending lectures on the subject held by Gödel, Church and Kleene eventually revised their definition of computability to be based instead on the Herbrand–Gödel recursive functions (Soare, 2013):
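
The standard example of a total, intuitively computable function that is not primitive recursive is the Ackermann function; the essay does not name one, so this illustration is my addition. It grows too quickly to be captured by primitive recursion, which is exactly the gap Gödel had in mind. A minimal Python sketch:

```python
def ackermann(m: int, n: int) -> int:
    """Ackermann's function: total and computable, but not primitive
    recursive, because it eventually outgrows every primitive recursive
    function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

The growth is ferocious: ackermann(4, 2) already has 19,729 decimal digits, far beyond what naive recursion like the above can evaluate.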

Second definition of computability (Church, 1936)
A function on the positive integers is effectively calculable if and only if it is recursive.

Church and Kleene would later be able to show that the λ-definable functions they had used in their first definition and the Herbrand–Gödel recursive functions they used in their second were in fact formally equivalent. However, Gödel never came to accept Church's definition of computing (Gödel, 1995), disagreeing fundamentally with its approach. The key weakness of Church's argument, according to Sieg (1994, p. 80), was that "steps taken in a calculus must be of a restricted character and they are [here] assumed without argument to be recursive", adding that "this is precisely where we encounter the major stumbling block for Church's analysis", a stumbling block that was "quite clearly seen [both] by Church" himself, and by Gödel.

Turing's solution

Simultaneously, Alan Turing came to Princeton to work on the same problems: attempting to settle Hilbert's Entscheidungsproblem and, in the process, to provide a complete and final definition of computability. When published in the Proceedings of the London Mathematical Society 42 on November 30th, 1936, Turing's paper defined computability as follows (Soare, 2013):

Turing's definition of computability (1936)
A function is intuitively computable (effectively computable) if and only if it is computable by a Turing machine; that is, an automatic machine (a-machine) as defined in Turing (1936).

Turing in his paper notes the similarity of his definition to that of Church's paper published seven months before, stating: "In a recent paper Alonzo Church has introduced an idea of 'effective calculability', which is equivalent to my 'computability', but is very differently defined", adding "Church reaches similar conclusions about the Entscheidungsproblem".

Unlike Church and Kleene, Turing's definition did not rely on the recursive functions of Gödel (1931) or Herbrand–Gödel (1934). Rather, Turing narrated an analysis of how humans would go about carrying out a calculation, showing step by step that the same procedure could also be conducted by a machine. Regarding the rigor of this analysis, Gandy (1988) later wrote that "Turing's analysis does much more than provide an argument, it proves a theorem", while "making no references whatsoever to calculating machines". Instead, "Turing machines appear as a result, a codification, of his analysis of calculations by humans" (Soare, 2013).

Turing had invented a new formal model, his concept of an a-machine or a Turing machine. A Turing machine may be defined informally as:

A Turing machine (a-machine) is a mathematical model of computation that defines an abstract machine which manipulates symbols on a strip of tape according to a table of rules. Despite the model’s simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm’s logic can be constructed.
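
To make the model concrete, here is a minimal Turing machine simulator in Python (an illustrative sketch of the general idea; the rule encoding and all names are mine, not Turing's original notation):

```python
def run_turing_machine(rules, initial_state, accept_states, tape, max_steps=10_000):
    """Minimal Turing machine simulator.
    rules maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {-1, +1} and '_' as the blank symbol."""
    tape = dict(enumerate(tape))          # sparse tape indexed by integer position
    head, state = 0, initial_state
    for _ in range(max_steps):
        if state in accept_states:
            return state, tape
        symbol = tape.get(head, '_')
        state, tape[head], move = rules[(state, symbol)]
        head += move
    raise RuntimeError("no halt within step budget")

# Example: a machine that flips every bit of its input, then halts at the blank.
rules = {
    ('scan', '0'): ('scan', '1', +1),
    ('scan', '1'): ('scan', '0', +1),
    ('scan', '_'): ('done', '_', +1),
}
state, tape = run_turing_machine(rules, 'scan', {'done'}, "1011")
print(''.join(tape[i] for i in sorted(tape)))  # -> 0100_
```

The table of rules is the entire "program"; everything else, including the unbounded tape, is fixed machinery, which is what makes the model so amenable to mathematical analysis.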

Turing's definition aligned much more closely with Gödel's view of computation (Soare, 2013):

"But Turing has shown more. He has shown that the computable functions defined in this way are exactly those for which you can construct a machine with a finite number of parts which will do the following thing. If you write down any numbers n₁, n₂, …, nᵣ on a slip of paper and put the slip into the machine and turn the crank, then after a finite number of turns the machine will stop and the value of the function for the arguments n₁, n₂, …, nᵣ will be printed on the paper."

Uncomputable numbers

While we know that the set of real numbers is uncountable, the set of computable numbers is countable, and thus we know that most real numbers are not computable. The proof that the computable numbers are countable arises intuitively from the fact that they may all be produced by Turing machines, of which there are only countably many variations (i.e. they can be put into one-to-one correspondence with the natural numbers). Formally:

Proof sketch of the countability of the computable numbers
Assigning a Gödel number to each Turing machine definition produces a subset S of the natural numbers corresponding to the computable numbers. A surjection from S onto the computable numbers shows that the computable numbers are at most countable.

Example A: Chaitin's constant Ω

Argentine-American mathematician Gregory Chaitin in 1975 proposed a real number that is not computable, now called Chaitin's constant, omega (Ω). It represents the probability that a randomly constructed program will halt/terminate:

Chaitin's Thought Experiment (Barmpalias, 2018)
Suppose we run a universal Turing machine on a random binary program. Specifically, whenever the next bit of the program is required, we flip a coin and feed the binary output to the machine. What is the probability that the Turing machine will halt?

Chaitin's thought experiment is a prototypical example of the halting problem: the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running (halt) or continue to run forever. In the context of self-delimiting machines, the halting probability Ω_U of a Turing machine U takes the following expression:

$$\Omega_U = \sum_{U(\sigma)\downarrow} 2^{-|\sigma|}$$

Halting probability of Turing machine U, summed over the halting programs σ

where σ ranges over random finite binary programs and U(σ)↓ denotes the fact that U will halt on input σ.

Turing's 1936 paper proved that a general algorithm to solve the halting problem for all possible program–input pairs cannot exist. No halting probability is computable. The proof of this fact relies on an algorithm which, given the first n digits of Ω, solves Turing's halting problem for programs of length up to n. If Ω were computable, this algorithm would make the halting problem decidable; since the halting problem is undecidable, Ω cannot be computed.
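
The undecidability of the halting problem itself can be sketched in a few lines of Python (a hedged illustration of Turing's diagonal argument; the halts function is hypothetical by construction and cannot actually be implemented):

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical oracle: True iff the program halts on the given input.
    Turing's theorem says no such function can exist."""
    raise NotImplementedError

def diagonal(program_source: str) -> None:
    # Do the opposite of whatever the oracle predicts for self-application:
    if halts(program_source, program_source):
        while True:   # oracle says "halts" -> loop forever
            pass
    # oracle says "loops forever" -> halt immediately

# Feeding diagonal its own source code yields a contradiction: it halts
# if and only if it does not halt. Hence halts() cannot exist.
```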

Example B: The Busy Beaver Problem BB(n)

Tibor Radó in a 1962 paper entitled "On Non-Computable Functions" posed the following problem:

What is the largest finite number of 1s that can be produced on blank tape using a Turing machine with n states?

The problem has since come to be known as the Busy Beaver problem. As a game, it is often stated as: find a terminating program of a given size that produces the most output possible. The problem has a natural expression as a mathematical function BB(n): if n represents the number of states, let BB(n) be the largest finite number of 1s that can be written on an initially blank tape by a machine of that size. So BB(1) = 1, BB(2) = 4 and BB(3) = 6. For BB(4) the problem gets much harder; proving that BB(4) = 13 was a Ph.D. thesis. The value of BB(5) is not known, but is at least 4098 (a bound found by Marxen & Buntrock in 1989), and BB(6) is at least 3.5 × 10^18267 (a bound found by Kropitz in 2010).
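
The small values can actually be checked by direct simulation. Below is a short Python sketch that runs the known 2-state, 2-symbol champion machine (the transition table is the standard one from the busy beaver literature, not reproduced in the essay) and confirms BB(2) = 4:

```python
# Champion 2-state machine: (state, symbol) -> (new_state, write, move),
# where 'H' is the halting state and move is -1 (left) or +1 (right).
rules = {
    ('A', 0): ('B', 1, +1), ('A', 1): ('B', 1, -1),
    ('B', 0): ('A', 1, -1), ('B', 1): ('H', 1, +1),
}
tape, head, state, steps = {}, 0, 'A', 0
while state != 'H':
    symbol = tape.get(head, 0)
    state, tape[head], move = rules[(state, symbol)]
    head += move
    steps += 1
print(sum(tape.values()), steps)  # -> 4 ones written, in 6 steps
```

Verifying BB(n) in general means running a harness like this over every n-state transition table and, much harder, proving that the non-halting ones really never halt, which is essentially what the manual checking efforts mentioned below had to do.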

The BB(n) function is an example of an uncomputable function. That is, there is no Turing machine M_BB that computes the BB function. This is provable by contradiction:

Proof that BB(n) is not computable (Roberts, 2016)
We start by assuming that the Busy Beaver function BB(n) is computable, and that there exists a Turing machine EvalBB that takes n as an input, writes BB(n) 1s as output and then halts. Let Clean denote a Turing machine that cleans a sequence of 1s initially written on the tape. Let Double denote a Turing machine evaluating the function n + n: given a tape with n 1s, it will produce 2n 1s on the tape, then halt. Now create the routine 'Double | EvalBB | Clean' and let n₀ be the number of states of this machine. Let Create_n₀ denote a Turing machine creating n₀ 1s on an initially blank tape. Let N denote the sum n₀ + n₀. Let BadBB denote the routine 'Create_n₀ | Double | EvalBB | Clean'. This machine has N states (n₀ + n₀). Running BadBB will produce BB(N) 1s on the tape before clearing all 1s and halting. But the step of cleaning will take at least BB(N) steps, so the computation time of BadBB must be strictly greater than BB(N), which contradicts the definition of the function BB(n).

Apart from being an interesting and challenging mathematical game, the Busy Beaver problem also has intriguing implications in practice. In the spirit of Hilbert's Entscheidungsproblem, suppose a mathematical conjecture can be disproven via a counterexample found among a countable number of cases (e.g. Goldbach's conjecture). If such a conjecture is true, the Turing machine searching for counterexamples will never halt. If it is false, the machine will halt and notify us that it has found a counterexample. If we simulate an n-state Turing machine and know the value of BB(n), we can decide whether or not it will ever halt by running the machine that many steps. If, after BB(n) steps, the machine has not halted, we know that it never will, and thus that there are no counterexamples, which would prove the conjecture true. A sketch of such a counterexample-searching program follows.
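
For instance, a naive Goldbach counterexample search takes only a few lines of Python (an illustrative sketch of my own; the helper names are not from the essay). The program halts exactly when the conjecture is false, which is what makes a BB-style step bound so tantalizing:

```python
def is_prime(k: int) -> bool:
    """Naive trial-division primality test."""
    if k < 2:
        return False
    return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

def goldbach_counterexample() -> int:
    """Returns the first even number >= 4 that is not a sum of two primes.
    If Goldbach's conjecture is true, this loop runs forever."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n
        n += 2

# Calling goldbach_counterexample() would, as far as anyone knows, never return.
```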

However true in theory, this is not currently achievable in practice. Wikipedia lists two reasons:

  • It is extremely hard to prove values of the busy beaver function (case in point: BB(6) being at least 3.5 × 10^18267), and it is assumed that one would need at least 20–50 states to make a useful problem-solving machine. Furthermore, the few values that are known were established by manually checking each n-state Turing machine, which would not be practically possible for machines with more states.
  • Even if one does find a more efficient way of calculating BB(n), the value of the function gets very large very fast (we know that BB(17) is larger than G, Graham's number). Even BB(6) would require more computational capacity than exists in the known part of the universe to compute directly (Lloyd, 2001).

Example C: Penrose Tiling

A Penrose tiling is an example of non-periodic tiling generated by an aperiodic set of prototiles. Because such patterns are non-periodic, they lack translational symmetry. They are also self-similar ("fractal"), as they appear the same at larger and larger scales.

[Figure: Penrose tiling]

The question of whether a given set of tiles can cover the plane was raised by Hao Wang in 1961, who conjectured that it was decidable. The problem was later shown to be undecidable. The proof relies on using tiles to simulate a Turing machine, configuring the tiles in such a way that they cover the complete plane only if the Turing machine runs forever when started on a blank tape. Since the halting problem is undecidable, the tiling problem must also be undecidable, and so uncomputable.

Epilogue

Today, in mathematics and computer science, Turing machines remain the primary formalism for defining computable functions (Soare, 2013, p. 213). However, in recognition of Church having grasped the essence of such functions first, the modern-day formulation of the hypothesis about the nature of computable functions is known as the Church–Turing thesis:

Church–Turing thesis
A function is intuitively computable if and only if it is computable by a Turing machine, or equivalently, if it is specified by a recursive function.

Its implications are that:

No method of computing carried out by a mechanical process can be more powerful than a Turing machine.

Although widely adopted, the proposition still remains a conjecture, as there is no clear way to prove or disprove its validity. A more detailed account of the history of the development of the thesis is available on Wikipedia.

This essay is part of a series of stories on math-related topics published in Cantor's Paradise, a weekly Medium publication. Thank you for reading!
