Gödel & Turing

Chaitin

Gödel showed that the connection between proof and truth is shaky. In mathematics, as in other formal systems, statements can be true but unprovable: some mathematical propositions are undecidable. This demolishes the idea of a closed, consistent body of rules and replaces it with incompleteness.

Chaitin discusses randomness. Something is random if it has no pattern and no abbreviated description, so that no algorithm describing it is shorter than the thing itself. On this basis, mathematics can be shown to be shot through with randomness.

This has implications beyond mathematics, because the laws of physics are mathematical. The laws of physics can be seen as algorithms that map input data, or initial conditions, onto output data, the final state.

The possibility of the universe as a computer is examined. Quantum mechanics imposes a lower limit on the time taken for each step of processing, and the universe has a finite age, so only a finite amount of information can have been processed in the life of the universe. This suggests a cosmological bound on the fidelity of mathematical laws.
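As a rough back-of-envelope sketch of such a bound (the specific constants and the use of the Planck time as the minimal step are my assumptions, not claims from the text), a finite step time and a finite age cap the total number of steps:

```python
# Rough estimate, for illustration only: if each computational step takes
# at least one Planck time, the universe's finite age caps the total
# number of steps it could have performed so far.

PLANCK_TIME = 5.39e-44        # seconds, approximate Planck time
AGE_OF_UNIVERSE = 4.35e17     # seconds, roughly 13.8 billion years

max_steps = AGE_OF_UNIVERSE / PLANCK_TIME
print(f"Upper bound on elementary steps: about {max_steps:.1e}")
# => roughly 8e60 steps: enormous, but finite
```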

Random or patternless sequences of a given length require the longest programmes: a random sequence of length k requires a programme of length k. These random or patternless sequences comprise the majority of sequences. Such programmes, where the number of bits equals the number of observations, are random and useless. The minority of non-random sequences can be generated by programmes shorter than k. The shorter the programme, the less random the sequence and the greater the pattern present in it.
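A minimal sketch of this contrast (the repeated-pattern example and names are mine, not Chaitin's): a highly patterned sequence is produced by a program far shorter than its output, while a random sequence effectively has to be stored verbatim inside its own program.

```python
# A patterned sequence of a million bits has a very short description...
def patterned_sequence():
    return "01" * 500_000           # program is tiny; output length is 10^6

# ...whereas a random sequence of length k admits no shorter description:
# the "program" must carry all k bits of the data itself.
RANDOM_BITS = "1101001010110..."    # imagine k literal bits embedded here
def random_sequence():
    return RANDOM_BITS              # program length ~ k, nothing to compress
```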

This links to information theory. Messages are coded, or compressed, to eliminate redundant information, and any information source that is not random can be compressed. The definition of randomness is thus the mirror image of compressibility in information theory.
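A hedged illustration, using Python's standard zlib compressor as a crude stand-in for the ideal coder of information theory: patterned data shrinks dramatically, while data from a good random source barely shrinks at all.

```python
import os
import zlib

patterned = b"01" * 50_000          # highly redundant: repeats a 2-byte pattern
random_bytes = os.urandom(100_000)  # patternless data from the OS random source

for label, data in [("patterned", patterned), ("random", random_bytes)]:
    compressed = zlib.compress(data, 9)
    print(f"{label}: {len(data)} bytes -> {len(compressed)} bytes")
# The patterned input compresses to a few hundred bytes; the random input
# may even grow slightly, since no real pattern is there to exploit.
```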

Chaitin also relates this to Occam's razor, in which the simplest theory, in other words the shortest description, is seen as the best theory. Simplicity here means not ease of calculation but the number of arbitrary assumptions that have to be made. A scientific theory is valuable if it allows one to compress many observations into a few hypotheses.

The minimum quantity of information needed to define a string is equated with the complexity of the string. Most strings of length n have complexity n, and these are random; only the non-random minority have lower complexity. Where a string is random, the programme describing it has to grow in direct proportion to the length of the string.
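The counting argument behind the claim that most strings have complexity n can be made concrete (the framing as a script is mine): there are 2^n strings of length n, but fewer than 2^n programs shorter than n bits, so most strings cannot have a short description.

```python
# Counting argument: why most strings are incompressible.
# There are 2**n binary strings of length n, but the number of programs
# shorter than n bits is 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1.
# Each program describes at most one string, so the vast majority of
# strings of length n have complexity close to n.

n = 20
strings_of_length_n = 2 ** n
programs_shorter_than_n = 2 ** n - 1          # sum of 2**k for k < n
compressible_by_10_bits = 2 ** (n - 10) - 1   # programs shorter than n - 10 bits

print(f"strings of length {n}: {strings_of_length_n}")
print(f"programs shorter than {n} bits: {programs_shorter_than_n}")
fraction = compressible_by_10_bits / strings_of_length_n
print(f"fraction compressible by 10+ bits: {fraction:.6f}")  # under 1/1024
```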

Although randomness can be measured and defined, a given number cannot be proved to be random. This is seen as being related to Gödel's incompleteness theorem.

The fundamental unit of information is the 'bit', defined as the smallest item of information, indicating a choice between two things. In binary notation, one bit is represented by either a '0' or a '1'.

In the 1960s, the American mathematician Ray Solomonoff represented a scientist's theory as an algorithm predicting future observations; where theories compete to explain the same observations, the model selects the algorithm comprising the fewest bits. This is the same idea as Occam's razor.
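A toy sketch of that selection rule (the candidate "theories" and the bit-length measure are my assumptions for illustration): among programs that all reproduce the observations, prefer the shortest.

```python
# Toy Occam/Solomonoff selection: among candidate "theories" that all
# reproduce the observed data, prefer the one with the fewest bits.
# Here each dict key doubles as the theory's source text, and its
# length serves as a crude proxy for program size.

observations = [1, 4, 9, 16, 25]

candidates = {
    "n*n": lambda n: n * n,                              # short theory
    "table [1,4,9,16,25]": lambda n: [1, 4, 9, 16, 25][n - 1],  # stores the data
}

def theory_size_bits(source: str) -> int:
    return 8 * len(source)  # crude proxy: size of the source text in bits

valid = {
    name: fn for name, fn in candidates.items()
    if [fn(n) for n in range(1, 6)] == observations
}
best = min(valid, key=theory_size_bits)
print(f"selected theory: {best}")  # the shorter description wins
```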

Any specified series of digits, such as 123, can be generated by an infinite number of algorithms. However, the best programme is the smallest one. The smallest programmes are called minimal programmes, and there may be one or many minimal programmes for a given series of digits.

The concept of the minimal programme is closely related to the concept of complexity. The complexity of a series of digits is the number of bits that the smallest programme requires to produce the series as output; thus the complexity equals the size of the minimal programme for the series.
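In a tiny toy language the minimal programme can actually be found by brute force (this language, a pattern plus a repeat count, is my invention for illustration; in general, minimal programmes are uncomputable):

```python
# Toy language: a "program" is (pattern, repeat_count), meaning pattern * count.
# Its size is len(pattern) plus one unit for the counter. Brute-force search
# for the minimal program generating a target string -- feasible only because
# the language is this small.

def minimal_program(target: str):
    best = None
    for plen in range(1, len(target) + 1):
        if len(target) % plen:
            continue
        pattern = target[:plen]
        if pattern * (len(target) // plen) == target:
            size = plen + 1  # pattern symbols plus one unit for the repeat count
            if best is None or size < best[0]:
                best = (size, pattern, len(target) // plen)
    return best

print(minimal_program("010101010101"))  # '01' repeated 6x -> small program
print(minimal_program("011010001101"))  # no repetition -> program ~ full length
```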

It is emphasised that non-random sequences are exceptional. It is easy to show that a series of digits is non-random: simply find a programme that is shorter than the series but will generate it. To demonstrate that a particular series is random, however, it is necessary to show that no smaller programme for generating it exists, and Chaitin's version of Gödel's theorem predicts that such a proof of randomness cannot be found. Given that quantum mechanics depends on random outcomes, this indicates a limitation of axiomatic systems.

Gödel had shown that it was not possible in mathematics to have a mechanical system of proofs with no need for human judgement or insight. Before Gödel, Hilbert had wanted a formal system, meaning one with a finite list of axioms, or initial assumptions, and rules of inference. A formal system has an algorithm for testing proofs that can check, one by one, all the theorems of the system. The randomness of numbers whose complexity exceeds that of the formal system itself cannot be proved within it, and this is taken as being related to Gödel's incompleteness theorem.
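A schematic of what "an algorithm for testing proofs" means (the checker rule and enumeration order here are toy assumptions, not any real formal system): generate candidate proofs one by one and keep those the mechanical checker accepts.

```python
from itertools import count, product

# Schematic theorem enumeration for a formal system: every candidate proof
# string is generated in order of length and run through a mechanical proof
# checker. This toy checker just matches a fixed rule; a real one would
# verify each inference step against the system's axioms and rules.

ALPHABET = "01"

def is_valid_proof(candidate: str) -> bool:
    # Stand-in rule: accept strings of the form '1', '11', '111', ...
    return candidate != "" and set(candidate) == {"1"}

def enumerate_theorems(limit: int):
    found = []
    for length in count(1):
        for symbols in product(ALPHABET, repeat=length):
            if is_valid_proof("".join(symbols)):
                found.append("".join(symbols))
                if len(found) == limit:
                    return found

print(enumerate_theorems(3))  # ['1', '11', '111'] -- checked one by one
```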

Gödel's theorem was based on the Cretan liar paradox, referring to the Cretan who says that 'all Cretans are liars', or alternatively 'this statement is false', although Gödel preferred 'this statement is unprovable'. If the unprovable statement is true, the formalisation of number theory is incomplete; if the statement is provable, it is also false, and number theory is inconsistent.

Much of Chaitin's discussion focuses on algorithmic information theory. He asks how many bits of information the smallest programme needs in order to calculate an individual object. Between two objects there may be mutual information, which measures what the objects have in common, and this is regarded as a fundamental concept: knowing the amount of commonality reduces the amount of information needed to describe the objects. Randomness in a particular object implies a lack of features shared with any other object.
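A hedged approximation of this idea, using compressed sizes in place of true minimal programme lengths (zlib here is a stand-in of my choosing; algorithmic mutual information itself is uncomputable): I(x; y) ≈ C(x) + C(y) − C(xy), which is large when x and y share structure.

```python
import zlib

# Crude stand-in for algorithmic mutual information, using compressed size
# as a proxy for minimal programme length: I(x;y) ~ C(x) + C(y) - C(x+y).
# Shared structure makes the concatenation compress better than the parts.

def c(data: bytes) -> int:
    return len(zlib.compress(data, 9))

x = b"the quick brown fox jumps over the lazy dog " * 200
y = b"the quick brown fox naps under the shady log " * 200
z = bytes(range(256)) * 35  # unrelated filler with little in common with x

print("I(x;y) ~", c(x) + c(y) - c(x + y))  # large: x and y share much structure
print("I(x;z) ~", c(x) + c(z) - c(x + z))  # near zero: little common pattern
```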

Chaitin seems to criticise mainstream mathematics for shrugging off the implications of Gödel's theorem and of the slightly more accessible work of Turing on the halting problem, which essentially dealt with what a computer cannot do. The prevailing view has been that the problem does not apply to normal mathematics, but Chaitin thinks this may be wrong: his algorithmic information theory suggests that randomness and incompleteness are more pervasive than generally thought.
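The standard diagonal argument behind Turing's result can be sketched in code (a classic illustration, not Chaitin's own presentation): if a halts(program) oracle existed, a program could be built that contradicts it about itself.

```python
# Sketch of Turing's halting-problem argument: suppose halts(program) could
# decide whether program() eventually stops. The program below then defeats
# any such decider by doing the opposite of whatever is predicted for itself.

def halts(program) -> bool:
    """Hypothetical oracle -- no correct implementation can exist."""
    raise NotImplementedError

def contrary():
    if halts(contrary):   # ask the oracle about this very program...
        while True:       # ...loop forever if it says "halts",
            pass
    return                # ...or halt immediately if it says "loops".

# Whichever answer halts(contrary) gave would be wrong, so no general
# algorithm can decide halting.
```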

Chaitin considers that there may be a deeper reason for incompleteness, one linked to physics. Physics contains the notion of entropy, the amount of disorder in a system, which also relates to the arrow of time. The size of a computer programme is a concept similar to the amount of entropy in a system, and the idea of programme size relates to Occam's razor and its preference for the simplest theory. The problem with the concept of the smallest or simplest programme is that one can never be sure of having found it. Such mathematical truth would require an infinite amount of information, while any system of axioms contains only a finite amount.
