Dec. 23, 2004, 4:17 p.m.

A word or two on mathematical proof and rigor...

In The Beginning God Said...

Mathematics is, for most people, something you use to total a grocery bill or do some year-end financial sums; for engineers it might be computing FFTs (Fast Fourier Transforms). The point I want to make is that most people simply assume mathematics is correct, sensible and free of inconsistencies - they use it unconditionally in their everyday lives.

Well, fortunately I will not completely burst your bubble about mathematics. It is indeed correct, consistent and sensible within the right frame of reference. But how can we be so sure that what we call mathematics is indeed correct? The answer lies in the concept of axioms, and of mathematical theorems each accompanied by a rigorous proof.

The concept of axioms is not new - not at all. The first mathematical axioms date back to about 300 BC, to the time of Euclid, and the same goes for mathematical theorems and their proofs. The differences lie in the extent of those axioms and the assumptions that underlie them, together with the assumptions behind mathematical proofs.

In the time of Euclid and the ancient Greeks, mathematical theorems were not proven with anything like modern rigor. Instead, the Greeks relied on deductive reasoning. Although this is a very powerful method, it is not infallible: deductive reasoning, and any proof built upon it, is only as good as the axioms from which it starts. In the time of the school of Pythagoras it was commonly accepted simply to check that a result was true for a handful of special cases and then to assume it held in general. This is inductive reasoning - not to be confused with the modern proof technique of mathematical induction.
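To see how checking special cases can mislead, here is a classic illustration (a well-known example I am adding for concreteness; it is not from the original text). Euler's polynomial

\[ p(n) = n^2 + n + 41 \]

yields a prime for every n = 0, 1, ..., 39 (p(0) = 41, p(1) = 43, ..., p(39) = 1601, all prime), yet

\[ p(40) = 40^2 + 40 + 41 = 1681 = 41^2, \]

which is not prime. Forty confirmed cases, and the pattern still fails in the general case.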

Up to the 18th century mathematicians carried on their work in fields like algebra and geometry using irrational, complex and real numbers, and leaned on the concept of continuity, all without a solid foundation. This did not cause serious problems until the advent of the calculus in the late 17th century. It was with concepts such as continuity, infinitesimal quantities and gradients with supposedly clear geometric interpretations that the calculus started running into trouble, and various paradoxes came to light.
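One example of the kind of paradox that troubled early analysts (my illustration, not one the author cites) is Grandi's series:

\[ S = 1 - 1 + 1 - 1 + \cdots \]

Grouping the terms one way gives \( (1-1) + (1-1) + \cdots = 0 \), while grouping them another way gives \( 1 - (1-1) - (1-1) - \cdots = 1 \). Limit theory later resolved the puzzle by declaring the series divergent: its partial sums alternate 1, 0, 1, 0, ... and approach no limit at all.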

In the 19th century important developments were made with quaternions (numbers whose multiplication is not commutative, so they break one of the "normal" laws of arithmetic) and non-Euclidean geometry. These fields exposed the insufficiency of the axioms and methods of mathematical proof then in use, and forced mathematicians to question whether their axioms, rules and methods for arithmetic and geometry were correct and useful after all.
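To make the remark about quaternions concrete (a standard textbook fact, added here for illustration): Hamilton's quaternion units satisfy

\[ i^2 = j^2 = k^2 = ijk = -1, \]

from which it follows that

\[ ij = k \quad\text{but}\quad ji = -k, \]

so quaternion multiplication is not commutative - the familiar rule xy = yx of ordinary arithmetic simply fails.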

This caused mathematicians to begin developing precise formulations of basic notions like irrational numbers, continuity, integrals and derivatives. For the first time, algebra took precedence over geometry. The axioms were revisited and considerably extended to cover numbers as well. Instead of reasoning loosely about infinitesimal quantities, mathematicians developed the theory of limits to handle such calculations in a mathematically sound manner. The importance of rigor was underlined by various paradoxes that started to appear, of which Russell's Paradox is the most famous. An illustration of the problem follows:

A barber lived in a village and one day advertised that he shaves all the people in the village who do not shave themselves - and, of course, he says, he does not shave anyone who does shave himself. The next morning the barber was left wondering whether or not he should shave himself...
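In set-theoretic terms (the standard formulation of Russell's Paradox, added here for reference) the barber corresponds to the set of all sets that are not members of themselves:

\[ R = \{\, x \mid x \notin x \,\} \]

Asking whether R is a member of itself yields

\[ R \in R \iff R \notin R, \]

a contradiction either way - exactly the barber's predicament.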

Several people worked on the axiomatisation of set theory, and it was through the great work of Zermelo, Fraenkel, von Neumann and others that a formal set theory was developed in which nobody has yet found a paradox.
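Roughly speaking (my one-line summary of how ZF-style systems escape the paradox), the axiom schema of separation only allows a property \( \varphi \) to carve a subset out of an already existing set A:

\[ \forall A \,\exists B \,\forall x \,\bigl( x \in B \iff ( x \in A \wedge \varphi(x) ) \bigr) \]

With \( \varphi(x) \) taken as \( x \notin x \) this produces, for each A, only the harmless set \( B = \{\, x \in A \mid x \notin x \,\} \); one can then show that B is not a member of A, and since there is no "set of all sets" to take for A, Russell's contradiction never gets off the ground.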

Perhaps I can convey the extent of the formalism a bit better by presenting a result of number theory so basic that you will most probably think it "obvious" and therefore safe to assume. But believe me - it is precisely this kind of assumption that caused all these paradoxes and inconsistencies!

Lemma 1:
Between any two different rational numbers a and b there is at least one irrational number.

Proof:
Let a and b (a < b) be rational numbers whose decimal expansions first differ in the nth digit. Consider the number bn obtained from b by cutting off all digits after the nth; then a < bn <= b. Assume b has at least one non-zero digit after the nth (think about how to amend the proof if it does not), so that in fact a < bn < b, and let the mth digit be the first such non-zero digit. Append to bn first m-n zeros and then any sequence of digits you like: every finite truncation is a rational number between a and b, and the sequence of truncations converges to a limit between a and b. If you take care to make the appended sequence non-periodic, the decimal expansion of the limit neither terminates nor repeats, so the limit is irrational.
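A concrete instance of the construction (my numbers, not the author's): take a = 0.12 and b = 0.1302. The expansions first differ in the 2nd digit, so n = 2 and bn = 0.13, with a < 0.13 < b. The first non-zero digit of b after the 2nd is the 4th (m = 4), so append m - n = 2 zeros and then a deliberately non-periodic tail such as 1 0 1 0 0 1 0 0 0 1 ... (single 1s separated by ever longer runs of 0s):

\[ x = 0.13\,00\,1010010001\ldots \]

Every truncation of x is rational and lies between a and b (its 4th digit is 0 while b's is 2, so x stays below b), and the limit x itself is irrational because its expansion never terminates and never repeats.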

At the turn of the century, David Hilbert tackled the great challenge of proving mathematics itself consistent, using a technique called "metamathematics". Metamathematics avoids contentious methods such as proof by contradiction and relies solely on very concrete, finitary reasoning. Hilbert and his school had some success, and seemed on the verge of a major breakthrough, when Kurt Gödel showed that the consistency of number theory (Hilbert's main focus) could not be proven by the methods of metamathematics. This was a consequence of his famous result, Gödel's Incompleteness Theorem.

Gödel's theorem states that any consistent axiomatic system that embraces arithmetic is incomplete. To be incomplete means that within the system there are statements that can neither be proved nor disproved using the axioms. Gödel went on to show that, as a consequence, the consistency of such a deductive system cannot be established using the system itself. Hilbert's goal of showing the consistency of mathematics was ultimately unattainable.
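Stated a little more precisely (the standard modern formulation, added here for reference): for any consistent, effectively axiomatized theory T strong enough to express basic arithmetic, there is a sentence G_T with

\[ T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T, \]

and by the second incompleteness theorem

\[ T \nvdash \mathrm{Con}(T), \]

where Con(T) is the arithmetical sentence asserting the consistency of T itself.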

The proof of Gödel's theorem is based on the principle of self-reference. An example is a statement like "This sentence is false", or imagine a card on one side of which is printed:

"The statement on the other side of this card is false"

and on the other side:

"The statement on the other side of this card is true"

All this illustrates the never-ending search for the ultimate in mathematical soundness and rigor - a method of proving something absolutely correct and sound. But since we practise mathematics from within a relative paradigm (one from which we can never escape) - will we ever be able to formulate a system that provides complete rigor and consistency?