Again, a talk for the best mathematics club in the multiverse. Again, the intended audience is pupils, so this is not a scientific talk, and the focus is on breadth rather than depth.
We will use the concept of "classical logic" before we define it – to make it easier to understand afterwards. For now, classical logic shall mean "the stuff you use in mathematics competitions without knowing it".
The first question we ask ourselves is what a mathematical proposition actually is. There are some patterns that regularly occur: "for all x, A(x) holds" (∀xA(x)), "there exists an x such that A(x)" (∃xA(x)), "A and B" (A∧B), "A or B" (A∨B), "if A, then B" (A→B), "A if and only if B" (A↔B), and "not A" (¬A).
Not all of {∀,∃,∧,∨,→,↔,¬} is really needed. For example, A↔B is usually defined as (A→B)∧(B→A). Classically, we can define A→B as ¬A∨B. Furthermore, there are other connectives, like the exclusive or ("XOR"), which holds if exactly one of the two propositions holds, and could be defined as (A∨B)∧¬(A∧B). Which subset we take as primitive and which connectives we define depends on the needs of the theory.
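These definitions are easy to check in a proof assistant; here is a minimal sketch in Lean 4 (the names, and the definition Xor' itself, are our choices, not part of the talk):

-- A↔B as the conjunction of the two implications:
example (A B : Prop) : (A ↔ B) ↔ ((A → B) ∧ (B → A)) :=
  ⟨fun h => ⟨h.mp, h.mpr⟩, fun ⟨f, g⟩ => ⟨f, g⟩⟩

-- exclusive or: exactly one of the two propositions holds
def Xor' (A B : Prop) : Prop := (A ∨ B) ∧ ¬(A ∧ B)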
Of the connectives mentioned so far, the theory of this talk requires the subset {∀,∃,∧,∨,→}. However, we need two more symbols: ⊤, which is defined to be the proposition that is always true (the "verum"), and ⊥, which is always false (the "falsum"). We can define negation through → and ⊥:
¬A := A→⊥
The intuition is that ⊥ can only follow from something wrong. Of course, this set of connectives is redundant. In classical logic, we can define everything by means of ∀, → and ⊥ (read ↔ᶜ as "is classically equivalent to"):
∃xA ↔ᶜ ¬∀x¬A
A∧B ↔ᶜ ¬(A→B→⊥)
A∨B ↔ᶜ ¬(¬A∧¬B)
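In Lean 4, ¬A is in fact defined as A→False, and such classical equivalences can be checked directly; a minimal sketch of the disjunction case:

-- ¬A is definitionally A→⊥:
example (A : Prop) : ¬A ↔ (A → False) := Iff.rfl

-- A∨B ↔ᶜ ¬(¬A∧¬B); only the right-to-left direction needs a classical step:
example (A B : Prop) : (A ∨ B) ↔ ¬(¬A ∧ ¬B) :=
  ⟨fun h ⟨hna, hnb⟩ => h.elim hna hnb,
   fun h => Classical.byContradiction fun hn =>
     h ⟨fun ha => hn (Or.inl ha), fun hb => hn (Or.inr hb)⟩⟩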
We still lack a possibility of actually expressing something meaningful in any common mathematical theory. Let us, for the moment, stick with a simple but useful theory, the theory of Real Closed Fields¹ – an axiomatization of the real numbers. Quantifiers range over ℝ. We need to add a few additional symbols:
0 and 1 are constant symbols (or sometimes "nullary function symbols"). + and · are binary function symbols. P and = are so-called relation symbols: their "return value" is not a number but a proposition; P(x) is intended to mean "x is positive". A proposition of the form P(x) or of the form x=y is called an atom (or prime formula, or atomic formula): it cannot logically be "split". We leave out the nasty details of operator precedence and syntactical correctness, as well as the formal semantics.
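To make this concrete, the language can be represented as a data type; a sketch in Lean 4 (the names and the representation of variables as strings are our choices):

-- Terms of the language: variables, 0, 1, + and ·.
inductive Term where
  | var  : String → Term
  | zero : Term
  | one  : Term
  | add  : Term → Term → Term
  | mul  : Term → Term → Term

-- Formulas: the atoms P(t) and t=s, plus the connectives introduced above.
inductive Formula where
  | pos    : Term → Formula              -- the atom P(t)
  | eq     : Term → Term → Formula       -- the atom t=s
  | verum  : Formula                     -- ⊤
  | falsum : Formula                     -- ⊥
  | and    : Formula → Formula → Formula
  | or     : Formula → Formula → Formula
  | imp    : Formula → Formula → Formula
  | all    : String → Formula → Formula  -- ∀x.A
  | ex     : String → Formula → Formula  -- ∃x.A

For example, ∀x.x=x is represented as Formula.all "x" (.eq (.var "x") (.var "x")).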
With these symbols, we can express many propositions. For example, we can express the axioms of a commutative ring (which all hold in ℝ):

∀x∀y∀z.(x+y)+z=x+(y+z)   ∀x∀y.x+y=y+x   ∀x.x+0=x   ∀x∃y.x+y=0
∀x∀y∀z.(x·y)·z=x·(y·z)   ∀x∀y.x·y=y·x   ∀x.x·1=x
∀x∀y∀z.x·(y+z)=x·y+x·z
We regard these propositions as axioms. Notice that none of them contains implications (this will be relevant later). Furthermore, we have the following axioms:
For the further axioms, we introduce a notation: We define
x<y :⇔ P(y-x) (which is short for ∃z.(z+x=y ∧ P(z)))
We add infinitely many axioms, using a so-called axiom scheme: the axioms of the scheme can be enumerated mechanically, and it is decidable whether a given proposition is an instance of the scheme.
A polynomial is what we intuitively would regard as one: a₀+a₁x+a₂x²+…+aₙxⁿ. Now, we add one axiom for every possible polynomial:
This axiom system is called GRCF. Now that we have axioms, we want to derive theorems from them. The question that bothered the early logicians was whether a computer can automatically decide whether a given proposition, encoded as a string, is true or not. This is not possible in general, though there are special cases where it is.
The next question is whether we can at least make it possible for a computer to check proofs. More precisely, the question is whether there is a finite set of rules which allows us to derive exactly the true propositions. And this is possible.
Sets of rules are called calculi, and a calculus with this property is called sound and complete: sound means that it derives only correct propositions, complete means that it derives all of them. A very simple such calculus, easy to learn and to work with, is the calculus of natural deduction. Its rules are written in the form
input1 input2 …
───────────────
output
The first rule is essentially modus ponens:
A A→B
───────
B
Introduction of "and", "or" and "exists" is also easy:
 A      B     A  B    A[x:=t]
───    ───    ────    ───────
A∨B    A∨B    A∧B       ∃xA
where A[x:=t] means A with t substituted for x.
Elimination of "and" and "forall" is also easy:
A∧B    A∧B      ∀xA
───    ───    ───────
 A      B     A[x:=t]
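Read through the Curry–Howard lens, each of these rules is a term construction; a minimal sketch in Lean 4 (the domain ℕ and the instance t:=3 are our choices):

variable (A B : Prop) (D : Nat → Prop)

example (ha : A) (hab : A → B) : B := hab ha   -- modus ponens
example (ha : A) : A ∨ B := Or.inl ha          -- ∨-introduction (left)
example (hb : B) : A ∨ B := Or.inr hb          -- ∨-introduction (right)
example (ha : A) (hb : B) : A ∧ B := ⟨ha, hb⟩  -- ∧-introduction
example (h : D 3) : ∃ x, D x := ⟨3, h⟩         -- ∃-introduction with t := 3
example (h : A ∧ B) : A := h.1                 -- ∧-elimination (left)
example (h : A ∧ B) : B := h.2                 -- ∧-elimination (right)
example (h : ∀ x, D x) : D 3 := h 3            -- ∀-elimination with t := 3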
The other rules are slightly more complicated, because they require the concept of a free assumption: an assumption is something we just propose without proving it – it is a proposition at the top of a proof tree. New assumptions are free, but they can be bound by other rules. We will denote bound assumptions in brackets. A proof is correct only when all remaining free assumptions are axioms.
The simplest rule that binds a free assumption is the introduction rule for implications:
[A]
|
B
───
A→B
This rule means: if we have a proof of B that uses the free assumption A, then we may bind this assumption and derive A→B – in short, if B follows from A, then A→B. Notice that one can also bind assumptions that do not occur at all:
B
─── (bind A)
A→B
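In Lean terms, binding the assumption A is λ-abstraction, and binding an assumption that does not occur is a lambda that ignores its argument (a minimal sketch):

-- →-introduction: assume A, derive something, bind the assumption.
example (A B : Prop) (hab : A → B) : A → A ∧ B := fun ha => ⟨ha, hab ha⟩
-- binding an assumption that does not occur at all:
example (A B : Prop) (hb : B) : A → B := fun _ => hb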
For disjunction elimination we have
A∨B A→C B→C
───────────
C
The remaining rules have variable conditions. An occurrence of a variable in a proposition is called free if it is not in the scope of a quantifier ∃ or ∀ binding it.
∃xA ∀x(A→C)
────────────
C
if x does not occur freely in C.
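In Lean, ∨-elimination and ∃-elimination look as follows (a minimal sketch; the domain ℕ is our choice, and the condition that x does not occur freely in C is enforced automatically by scoping):

variable (A B C : Prop) (D : Nat → Prop)

-- ∨-elimination
example (h : A ∨ B) (f : A → C) (g : B → C) : C := h.elim f g

-- ∃-elimination
example (h : ∃ x, D x) (f : ∀ x, D x → C) : C :=
  let ⟨x, hx⟩ := h
  f x hx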
A
───
∀xA
if x does not occur freely in any free assumption above A. These rules are the rules of minimal logic: everything derivable by them is said to hold minimally. Notice that this system has no built-in concept of negation. However, we can derive some simple theorems from it.
[A] A→B
───────
B B→C
────────
C
─── (bind A)
A→C
This proves that A→B and B→C together imply A→C. We can even prove some basic facts about negation – for minimal logic, ⊥ has no special meaning:
A ¬A
────
⊥
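Both little theorems, as Lean terms (a minimal sketch):

-- A→B and B→C imply A→C: the tree above as a term.
example (A B C : Prop) (hab : A → B) (hbc : B → C) : A → C :=
  fun ha => hbc (hab ha)

-- From A and ¬A we get ⊥: since ¬A is A→⊥, this is just modus ponens.
example (A : Prop) (ha : A) (hna : ¬A) : False := hna ha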
However, there are things we cannot prove. For example, ((A∨B)∧¬A)→B holds intuitively, but it is not derivable in the calculus so far. Instead of adding new rules, we add axioms: for every proposition A, we add the axiom "ex falso quodlibet": ∀x∀y….(⊥→A). We can then derive
                [(A∨B)∧¬A]
                ──────────
             [A]    ¬A
             ─────────
                 ⊥    ⊥→B
                ─────────
[(A∨B)∧¬A]          B                [B]
──────────     ───── (bind A)   ───── (bind B)
   A∨B          A→B                  B→B
─────────────────────────────────────────
                  B
          ───────────── (bind (A∨B)∧¬A)
          ((A∨B)∧¬A)→B
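With ex falso quodlibet available – False.elim in Lean – the derivation above becomes (a minimal sketch):

-- ((A∨B)∧¬A)→B: ∨-elimination, with ex falso in the A branch.
example (A B : Prop) : (A ∨ B) ∧ ¬A → B :=
  fun ⟨hab, hna⟩ => hab.elim (fun ha => (hna ha).elim) (fun hb => hb)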
This system is called intuitionistic logic; propositions derivable in it are said to hold intuitionistically. Usually, when people talk about constructive mathematics, they mean mathematics formalized in intuitionistic logic. Intuitionistic logic has the property that every object that can be shown to exist can be calculated.
One thing that cannot be derived is the Drinker's Problem: in every pub there is a person such that, if this person is drunk, everyone in the pub is drunk:
∃d. D(d)→∀q.D(q)
This is not derivable in intuitionistic logic. The usual proof is:
Assume all people are drunk. Then we are done. Otherwise, at least one person is not drunk. Then this person satisfies the condition: (s)he is not drunk, so the implication "if (s)he is drunk, then everyone is drunk" holds vacuously.
This proof makes a case distinction between ∀q.D(q) and ¬∀q.D(q). And this is exactly what cannot be done in intuitionistic logic, for a good reason: consider a pub with infinitely many guests (of course, this breaks the example, but logically we can express it). Say we enumerate the persons by natural numbers. Then the state of the pub can be expressed by a function d : ℕ→{0,1}, where 0 means "not drunk" and 1 means "drunk". For such a function, it is in general not possible to decide whether it is constantly 1. The above proof has no computational content.
If we really want the full logical strength of ordinary mathematics, we need classical logic: we have to add stability (or duplex negatio affirmat), that is, for every proposition A, the axiom ∀x∀y….(¬¬A→A).
Lemma: Tertium non datur holds classically; that is, for every proposition A with free variables x, y, z, …, we can derive ∀x∀y∀z….(A∨¬A).
[A]
────
[¬(A∨¬A)] A∨¬A
───────────────
⊥
──── (bind A)
¬A
────
[¬(A∨¬A)] A∨¬A
───────────────────────
⊥
──────── (bind ¬(A∨¬A))
¬¬(A∨¬A) ¬¬(A∨¬A)→(A∨¬A)
────────────────────────────────────────
A∨¬A
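In Lean, the stability instance at the root is Classical.byContradiction, and the tree above becomes (a minimal sketch):

-- Tertium non datur: assume ¬(A∨¬A) and derive ⊥ twice, as in the tree.
open Classical in
example (A : Prop) : A ∨ ¬A :=
  byContradiction fun h => h (Or.inr fun ha => h (Or.inl ha))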
Lemma: ¬∀q.D(q)→∃q.¬D(q)
We use two instances of stability:

                           [¬D(q)]
                           ────────
             [¬∃q.¬D(q)]   ∃q.¬D(q)
             ──────────────────────
                       ⊥
                   ────── (bind ¬D(q))
                   ¬¬D(q)   ¬¬D(q)→D(q)
                   ────────────────────
                          D(q)
                         ───────
              ¬∀q.D(q)   ∀q.D(q)
              ──────────────────
                       ⊥
          ─────────── (bind ¬∃q.¬D(q))
          ¬¬∃q.¬D(q)   ¬¬∃q.¬D(q)→∃q.¬D(q)
          ────────────────────────────────
                     ∃q.¬D(q)
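In Lean, the two stability instances become two uses of Classical.byContradiction (a minimal sketch; the domain ℕ is our choice):

open Classical in
example (D : Nat → Prop) (hn : ¬∀ q, D q) : ∃ q, ¬D q :=
  byContradiction fun hne =>                -- stability for ∃q.¬D(q)
    hn fun q => byContradiction fun hnd =>  -- stability for D(q)
      hne ⟨q, hnd⟩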
We can formalize the former proof. To keep the trees readable, abbreviate E :≡ ∃d.D(d)→∀q.D(q). First, a derivation 𝒟₁ handles a sober guest d, using ex falso quodlibet:

      [¬D(d)]  [D(d)]
      ───────────────
             ⊥    ⊥→∀q.D(q)
            ────────────────
                ∀q.D(q)
            ──────────── (bind D(d))
            D(d)→∀q.D(q)
            ─────────────
                  E
      ──────────────── (bind ¬D(d))
           ¬D(d)→E
         ─────────────
         ∀d.(¬D(d)→E)

With the previous lemma (bound variable renamed to d) and ∃-elimination, we obtain the branch 𝒟₂ for the case ¬∀q.D(q):

[¬∀q.D(q)]   ¬∀q.D(q)→∃d.¬D(d)
──────────────────────────────
          ∃d.¬D(d)                𝒟₁: ∀d.(¬D(d)→E)
──────────────────────────────────────────────────
                        E
            ─────────── (bind ¬∀q.D(q))
             ¬∀q.D(q)→E

The branch 𝒟₃ for the case ∀q.D(q) binds a vacuous assumption D(d):

    [∀q.D(q)]
──────────────── (bind D(d))
  D(d)→∀q.D(q)
 ──────────────
       E
────────────── (bind ∀q.D(q))
  ∀q.D(q)→E

Finally, ∨-elimination with tertium non datur yields the claim:

(∀q.D(q))∨(¬∀q.D(q))   ∀q.D(q)→E   ¬∀q.D(q)→E
──────────────────────────────────────────────
                      E
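The whole argument can also be checked in Lean (a minimal sketch; we model guests by natural numbers, so guest 0 witnesses that the pub is nonempty, and Classical.byCases plays the role of tertium non datur):

open Classical in
example (D : Nat → Prop) : ∃ d, D d → ∀ q, D q :=
  byCases
    -- branch 𝒟₃: everyone is drunk, so any witness works
    (fun hall : ∀ q, D q => ⟨0, fun _ => hall⟩)
    -- branch 𝒟₂: by the previous lemma, there is a sober guest d
    (fun hnall : ¬∀ q, D q =>
      have hex : ∃ d, ¬D d :=
        byContradiction fun hne =>
          hnall fun q => byContradiction fun hnd => hne ⟨q, hnd⟩
      let ⟨d, hd⟩ := hex
      ⟨d, fun hdd => absurd hdd hd⟩)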
Another proof that shows the problem with non-constructive proofs:
Lemma: There exist two numbers a,b∈ℝ-ℚ such that aᵇ∈ℚ.
Proof: If (√2)^(√2)∈ℚ, we are done, with a=b=√2. If not, set a=(√2)^(√2) and b=√2; then aᵇ=((√2)^(√2))^(√2)=(√2)²=2∈ℚ.
Constructive mathematics has nice properties, but classical mathematics is easier. There are theories for which classical derivability implies intuitionistic derivability. We will look at one special kind of theory with this property, the so-called geometric theories.
The continuation translation² P/A of a formula P along a formula A is defined by recursion on P:

P/A := (P→A)→A for atomic P
⊤/A := ⊤
⊥/A := A
(B∧C)/A := B/A∧C/A
(B∨C)/A := ((B/A→A)∧(C/A→A))→A
(B→C)/A := B/A→C/A
(∀xB)/A := ∀x.B/A
(∃xB)/A := (∀x.B/A→A)→A

P/⊥ is called the Gödel-Gentzen negative translation of P.
Let Γ be a set of axioms. Then Γ/A := {P/A | P∈Γ}.
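Using the Formula type from the earlier sketch, the translation becomes a recursive function; this mirrors the clauses above and ignores variable capture, which is harmless under the side condition of the following lemma:

-- cont A P computes P/A.
def cont (A : Formula) : Formula → Formula
  | .pos t   => .imp (.imp (.pos t) A) A   -- atoms: (P→A)→A
  | .eq t s  => .imp (.imp (.eq t s) A) A
  | .verum   => .verum
  | .falsum  => A
  | .and B C => .and (cont A B) (cont A C)
  | .or B C  => .imp (.and (.imp (cont A B) A) (.imp (cont A C) A)) A
  | .imp B C => .imp (cont A B) (cont A C)
  | .all x B => .all x (cont A B)
  | .ex x B  => .imp (.all x (.imp (cont A B) A)) A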
Lemma: Let A not contain free variables that occur bound in C, and let C contain no implications or universal quantifiers. Then C/A↔((C→A)→A) holds intuitionistically.
The proof goes by structural induction on C. It is long and technical, and we only sketch it³. For atomic C, the claim holds by definition. For C=∃xB, we may assume – by induction – that B/A↔((B→A)→A), and we have to prove ((∀x.B/A→A)→A)↔((∃xB→A)→A). In the two derivations below, we write C for the body B.
The → direction:
                    [C]
                    ───
        [∃xC→A]     ∃xC
        ───────────────
               A                 C/A→(C→A)→A   [C/A]
           ───── (bind C)        ───────────────────
            C→A                        (C→A)→A
           ───────────────────────────────────
                            A
                        ────── (bind C/A)
                         C/A→A
                       ──────────
[(∀x.C/A→A)→A]         ∀x.C/A→A
────────────────────────────────
               A
        ─────────── (bind ∃xC→A)
         (∃xC→A)→A
The ← direction:

                         [C→A]   [C]
                         ────────────
                              A
                        ──────── (bind C→A)
                        (C→A)→A    ((C→A)→A)→C/A
                        ─────────────────────────
          [∀x.C/A→A]             C/A
          ──────────
            C/A→A
          ───────────────────────────
                       A
                    ───── (bind C)
                     C→A
                  ──────────
          [∃xC]    ∀x.C→A
          ───────────────
                 A
             ────── (bind ∃xC)
             ∃xC→A       [(∃xC→A)→A]
             ───────────────────────
                        A
               ────────────── (bind ∀x.C/A→A)
                (∀x.C/A→A)→A
Lemma: (¬¬B→B)/A is derivable in intuitionistic logic.
The proof is also technical, so we omit it. We can now, however, show how to convert a classical proof of B from a theory Γ into an intuitionistic proof of B/A from Γ/A:
Definition: A geometric proposition is a proposition that contains neither → nor ∀. A geometric implication is a formula of the form ∀x…(A→B), where A and B are geometric propositions. A geometric theory is a theory that contains only geometric implications.
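With the Formula type from before, geometricity is a simple syntactic check (a minimal sketch):

-- A geometric proposition contains neither → nor ∀.
def isGeometric : Formula → Bool
  | .imp _ _           => false
  | .all _ _           => false
  | .and B C | .or B C => isGeometric B && isGeometric C
  | .ex _ B            => isGeometric B
  | _                  => true

-- A geometric implication is ∀x…(A→B) with A and B geometric.
def isGeometricImplication : Formula → Bool
  | .all _ B => isGeometricImplication B
  | .imp A B => isGeometric A && isGeometric B
  | _        => false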
Theorem: For a geometric theory Γ, every proposition in Γ/A follows intuitionistically from Γ.
Theorem (Barr): Let B be a geometric implication, and Γ be a geometric theory, such that B is classically derivable from Γ. Then B is intuitionistically derivable from Γ.
Notice that GRCF contains only geometric implications and geometric propositions. A geometric proposition B can be replaced by the equivalent geometric implication ⊤→B. Hence, GRCF is a geometric theory.
───────────
Footnotes:
¹We use the axiomatization from E. Palmgren
²See my diploma thesis
³For the long version, see my diploma thesis