So I was looking for an alternative to my current habit of consuming websites: I let my feed reader create a list of links, then, using Tab Mix Plus, I open all of these links at once and wait until they are loaded. This way, I can view them when I am offline.

I would be satisfied if Pale Moon had an option to actually save these tabs for later reading, but it doesn't: I need to keep them in RAM and keep Pale Moon running. Apparently, that would not be cloudy enough. So there are services claiming to do exactly this, for example Pocket.

So I signed up for a Pocket account and configured this CLI application. Furthermore, I had to install Google Chrome, because apparently their programmers do not know about HTML5 local storage: viewing articles offline is only supported in their apps, which under Linux means using a Google Chrome app.

So I experimented a little with pocket-cli, just to see whether it kept its promise. And … it didn't. Somehow, it showed some sort of summary of many sites. It was especially bad for this website: all links and titles were messed up, and there was no way of reading it properly.

Think about it: I try to make sure that my website is usable under lynx and with screen readers. But Pocket scrambles it, probably because my site doesn't have that many meta and link tags and shit. Other sites were shortened as well.

So after all, I don't see why anyone would use this service. It is a bookmarking service with previews that syncs between devices, just like the native Firefox Sync. The only additional thing it appears to be able to do is post your reading habits to Facebook and Twitter. But I can post links there manually.

In the end, I deleted my account again. And I will probably have to keep my habits. At least until I find something better.

And again, I gave a talk for the best mathematics club of the multiverse, for "younger" students, which means that this text is not scientific. This is the script for the talk.

We first want to have a look at ants. One behavior of ants is to build "chains" in which the ants follow each other. An individual ant is relatively simple and can only follow simple instructions, and a canonical instruction is to just follow the ant in front. In nature (as far as I remember), ants use trails of pheromones, while the "follow the frontman" behavior is more common among other insects like the oak processionary; but the biological details are not that important for this talk, and ants are a better-known insect and therefore didactically more suitable (and there is a Unicode symbol for ants).

Now, our leading intuition for this talk will be a swarm of infinitely many ants which is dropped at random and organizes itself according to increasingly complex rules. Our goal is a chain of ants that starts somewhere and does not end:

One line of ants following a leader

We can get this result if we tell the ants to follow some other ant. However, we might as well get a configuration which is infinite on both sides, and has no leader:

One infinite line of ants on both sides, no leader

The solution to this problem is rather simple: We can introduce one ant, which knows that it has no predecessor. Let us call this ant the zero-ant (which is totally not a suggestive name). Now, if we are lucky, we get the desired result. However, the following configuration is still possible:

The ants assemble in tree form, the zero ant being the leader

As you can easily see, every ant follows some other ant, but some ants are followed by more than one other ant. We have to teach the ants that every ant is followed by at most one other ant, that is, that every ant has at most one successor; every ant except the zero-ant still has a predecessor. Again, we could get the desired result. But we could also get the following one:

One line that follows the zero ant, but some circles of ants besides that line

Besides the main line of ants, which is as we desire, we have cycles. The problem is that the ants have no way of knowing whether at some point the zero-ant comes. We now leave the realm of theoretical biological realizability and approach the realm of mathematics. To break cycles, we have to introduce a more complicated principle, which talks about sets of ants. Given some set B of ants, we call an ant m\in B a non-successor if it is not the successor of any other element of B. Intuitively, this means that m is what we would call a "minimum" of B. We now require that every finite non-empty set of ants contains a non-successor, that is, an ant which is not the successor of any other ant in this set. A cycle of ants would contradict this principle; therefore, we cannot have cycles anymore. But we can still have the following situation:

One line that follows the zero ant, but some other lines which are infinitely long on both sides

The line that is infinite on both sides has ants that "think" they are farther away from the zero-ant than every ant that is connected to the zero-ant. We call the ants which are connected to the zero-ant the standard ants, the others we call non-standard ants.

To really get what we want, we need the same principle for every non-empty set of ants, not just the finite ones. Then we can apply it to the set of non-standard ants and get a contradiction. With this principle, all hives look "the same", as we shall now prove.

At this point, we need to get more mathematical. A hive of ants is given by a triple (A,0,\sigma) consisting of the set A of its members, the zero-ant 0\in A, and a successor function \sigma:A\rightarrow A. Since \sigma is a function, this already implies that there can be at most one successor for every ant, and that there actually is such a successor. We call a hive (A,0,\sigma) a Peano hive if it satisfies the above axioms, that is, more formally:

  • \forall_{x\in A}. x = 0 \leftrightarrow \forall_{y\in A}. \sigma y \neq x – an ant is 0 if and only if it has no predecessor.
  • \forall_{x,y\in A}. \sigma x=\sigma y\rightarrow x=y – every ant has at most one predecessor.
  • \forall_{B\subseteq A}. B\neq\emptyset \rightarrow \exists_{x\in B}\forall_{y\in B}. x \neq \sigma y – every non-empty subset of A contains a non-successor.

Theorem: Let two Peano hives (A,0_A,\sigma) and (B,0_B,\tau) be given. Then there is a bijective function b : A \rightarrow B with b(0_A)=0_B and b(\sigma a) = \tau(b(a)) for all a\in A.

Proof: We prove that b exists, and is surjective and injective.

To prove its existence, we would have to prove the recursion theorem; to not overcomplicate things, we will be sloppy at this point: Assume b was not well-defined. Then the set of ants in A for which b is not well-defined contains a non-successor m. Since b(0_A)=0_B is well-defined, m\neq 0_A. But if m = \sigma m', then b(m) = b(\sigma m') = \tau(b(m')) would be well-defined. Contradiction.

Assume b wasn't surjective. Then the set B' := B \backslash \operatorname{Im} b of ants in B which are not in the image of b is non-empty, and therefore contains a non-successor m. This non-successor cannot be 0_B, since b(0_A)=0_B. Therefore, there must be some m' \in B such that m = \tau m'. But since m is a non-successor of B', we have m'\not\in B', so there is some n\in A such that b(n) = m'. But then b(\sigma n) = \tau(b(n)) = \tau m' = m \in \operatorname{Im} b. Contradiction.

Assume b wasn't injective. Then the set A' := \{ x \in A \mid \exists_{y \in A, y\neq x}\ b(x) = b(y) \} of ants that are not mapped uniquely is non-empty, and therefore contains a non-successor n. Let \underline{n}\neq n be another ant with b(\underline{n})=b(n). Neither n nor \underline{n} can be 0_A: if one of them were, then b(n)=0_B, while the other one would be of the form \sigma m', so b(n)=\tau(b(m')) would have a predecessor, contradicting the first axiom for B. Hence n=\sigma p and \underline{n}=\sigma\underline{p} for some p\neq\underline{p}. Then \tau(b(p))=b(n)=b(\underline{n})=\tau(b(\underline{p})), so b(p)=b(\underline{p}) by the injectivity of \tau, and therefore p\in A'. But then n=\sigma p contradicts the non-successor property of n. ∎

From our non-successor principle, a (more commonly used) principle follows, namely the principle of induction: If B\subseteq A, 0\in B and \forall_{x\in B}\sigma x\in B, then B=A.

Proof: Let B be such a set, and assume A\backslash B \neq \emptyset. Then let m \in A\backslash B be a non-successor. Since 0\in B, m \neq 0, so m has a predecessor q. Since m is a non-successor of A\backslash B, we have q\in B. But then \sigma q = m \in B. Contradiction. ∎

It also works the other way around: induction implies the non-successor principle.

Proof: Assume there was a set B \subseteq A that contains no non-successor; we show that B is empty. First, 0_A \in A \backslash B, since otherwise 0_A would be a non-successor in B. Now assume x \in A \backslash B. If \sigma x was in B, then, since predecessors are unique by axiom, \sigma x would be a non-successor in B. Therefore, \sigma x \in A \backslash B. But then by induction, A \backslash B = A, and therefore, B = \emptyset. ∎

As we just proved, these axioms entirely specify the structure of the hives. These hives behave like natural numbers, and in fact, the axioms we just formalized are the Peano axioms.

The problem when doing this is that you need a universal quantifier over all subsets of the ants; that is, you are in second-order logic.

It is not even clear what "all subsets" means: there are uncountably many such subsets.

Proof: Assume there was a surjection f:\mathbb{N} \rightarrow \mathfrak{P}(\mathbb{N}). Consider the set R:=\{n \mid n\not\in f(n)\}, and assume f(r)=R for some r. If r\in R, then by definition, r\not\in f(r)=R. Contradiction. If r\not\in R, then r\not\in f(r), so by definition r\in R, which is also a contradiction. Such an r cannot exist, and therefore, f cannot be surjective. ∎

However, there are only countably many words, so for every system of describing subsets, there is at least one plausible subset that we cannot describe. To introduce natural numbers, one therefore usually wants to refrain from using sets at all, and to quantify only over "ants" directly. Notice that we quantify over all possible "ants", and want them to behave like we want the natural numbers to behave; at this point, natural numbers do not exist yet. We have the constant symbol 0 and the function symbol S, and the axioms

  •  \forall_n. n = 0 \leftrightarrow \forall_m S m \neq n
  •  \forall_n \forall_m. S n = S m \rightarrow n = m

To be able to express a bit more, we add a symbol + for addition and a symbol \cdot for multiplication, and define some of their properties:

  • \forall_n 0 + n = n
  • \forall_n \forall_m S n + m = S (n + m)
  • \forall_n 0 \cdot n = 0
  • \forall_n \forall_m S n \cdot m = m + (n \cdot m)
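These defining equations translate directly into a program. A minimal Python sketch (the tuple representation of numerals is my own choice, not part of the talk):

```python
# Peano numerals: 'Z' is 0, ('S', n) is the successor S n.
Z = 'Z'

def S(n):
    return ('S', n)

def add(n, m):
    # 0 + m = m;  S n + m = S (n + m)
    if n == Z:
        return m
    return S(add(n[1], m))

def mul(n, m):
    # 0 * m = 0;  S n * m = m + (n * m)
    if n == Z:
        return Z
    return add(m, mul(n[1], m))

def from_int(k):
    return Z if k == 0 else S(from_int(k - 1))

def to_int(n):
    return 0 if n == Z else 1 + to_int(n[1])

# 2 + 3 = 5 and 2 * 3 = 6
assert to_int(add(from_int(2), from_int(3))) == 5
assert to_int(mul(from_int(2), from_int(3))) == 6
```

Note that the program only ever produces standard numerals; the non-standard configurations discussed below are invisible to it.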

A proposition is a formula with a free variable. Such propositions can be regarded as subsets of our ants. For example, we could define the proposition \operatorname{Primes}(x) :\Leftrightarrow \forall_n\forall_m. n\cdot m = x \rightarrow (n = x \vee m = x), which denotes the set of prime numbers (actually of "prime ants" so far, since we cannot be sure to really describe natural numbers here).

Now, while first-order logic does not allow quantification over sets of ants, it does allow infinitely many axioms. For every proposition \varphi(x), we add the non-successor axiom

  •  (\exists_x \varphi(x))\rightarrow \exists_n. \varphi(n)\wedge \forall_m. \varphi(m) \rightarrow S m \neq n

However, similarly to our system with finite sets above, there are configurations of a hive with non-standard ants; these are called non-standard numbers.


(C) 2016 Christoph-Simon Senjak, UXUL.DE

Mr. Schmidt is walking down the street. Suddenly, groaning, he clutches his chest and falls down.

After a while, an ambulance arrives. Two paramedics, Annegret and Brunhilde, get out.

Annegret: "These gloves are almost a bit too tight for me."

Brunhilde: "But the size is the same."

Annegret walks over to Mr. Schmidt while Brunhilde searches for something in her bag.

Annegret: "Good day. My name is Annegret Holzinger, and this is my colleague Brunhilde von Unterkalking Freifrau von und zu Wohlfartsoberkacheln. I am an emergency paramedic. Who are you?"

Mr. Schmidt only groans and apparently cannot speak. Annegret reaches into Mr. Schmidt's trouser pocket. Brunhilde shines a lamp into his eyes.

Annegret: "I'll check whether you have an ID on you, just so we know who you are. My colleague will examine you. And the emergency doctor will be here shortly."

Brunhilde: "Can you open your mouth for me? Exactly, and now lift your tongue. OK, this will sting a little and is somewhat cold, don't be startled."

Brunhilde takes a spray bottle and sprays into Mr. Schmidt's mouth.

The emergency doctor arrives. He puts on gloves and then walks over to Mr. Schmidt.

Emergency doctor: "Hello, I am the emergency doctor. Can you understand me?"

Mr. Schmidt still stammers a bit, but can speak by now.

Mr. Schmidt: "Yes."

Emergency doctor: "OK. Do you know who you are?"

Mr. Schmidt: "Jü… Jürgen Schmidt."

Annegret nods.

Emergency doctor: "OK, that is correct. I'll listen to your chest."

The emergency doctor uses several devices to examine Mr. Schmidt. Annegret points at the date on Mr. Schmidt's ID.

Emergency doctor: "Ah, I see we have a birthday boy."

Mr. Schmidt: "Yes, I was just on my way to a party."

Emergency doctor: "And? What would you like as a present?"

Mr. Schmidt: "I would like to keep on living."

The emergency doctor looks at some of his measuring devices. Then he walks to his car, fetches a broom, and hands it to Mr. Schmidt.

Emergency doctor: "Here. For you. For your birthday."

Mr. Schmidt remains silent and only hesitantly takes the broom.

Emergency doctor: "Don't you like it?"

Mr. Schmidt: "I do. Thank you. I just don't know …"

Emergency doctor (interrupting): "Look, of all the things here, this is what comes closest to your wish."

Mr. Schmidt: "I see. OK. Then thank you very much."

The emergency doctor claps his hands.

Emergency doctor: "Good. I have to get going."

The paramedics start packing up.

Mr. Schmidt: "OK."

Emergency doctor: "Look, I helped you as well as I could, but at some point enough is enough; I have other patients, too."

Mr. Schmidt: "Yes, I understand, your time is limited. Thank you anyway."

Emergency doctor: "You're welcome."

The emergency doctor gets into his car and drives away. The paramedics have finished packing the ambulance and leave as well. Mr. Schmidt lies alone on the street with his broom.

As I don't really know where else to publish this (MathOverflow etc. do not seem to have a section for it), here comes a wrong construction of a field of characteristic 1. Still, it is wrong in a somewhat interesting way. Maybe it can serve as an exercise for an introductory lecture in mathematical logic or something.

So to make things easy, we work with ZFC. We could probably use weaker systems, but this would just make things harder to express.

Now, let \operatorname{ZFC} be the first-order theory of Zermelo-Fraenkel set theory, and let us assume that \operatorname{ZFC}\not\vdash\bot. We define first-order constants K, 0, 1, +, \cdot with the axioms

  • 0,1\in K
  • +,\cdot\in K^{K^2}
  • (K,+,\cdot,0,1) is a field

Now define 0.a := 0 and (n+1).a := n.a + a, where the first addition is ordinal addition and the second one is addition in the field K. Essentially, this defines n.a to mean adding a\in K n times. This is a simple recursive definition, so it is justified by the recursion theorem. We add the axiom

  • \exists_{0<\alpha<\omega}\forall_{a\in K} \alpha.a = 0 (K has finite characteristic \neq 0)

Let us call the resulting theory \operatorname{ZFC}_K. Obviously, \operatorname{ZFC}_K\not\vdash\bot. Now define the axioms A_n := \underbrace{1+\ldots+1}_{n\times}\neq 0 and A := \{A_{n+2}\mid n\in \mathbb{N}_0\}, and consider \operatorname{ZFC}_K \cup A.

Every finite subset of \operatorname{ZFC}_K \cup A is satisfiable, because there are fields of arbitrarily large characteristic. Hence, because of the compactness theorem, \operatorname{ZFC}_K \cup A \not\vdash \bot, and hence, there is a model M \models \operatorname{ZFC} with a constant K such that

  • K is a field
  • K has finite characteristic
  • \operatorname{char} K\neq n for all n\in\mathbb{N}_0\backslash\{1\}.

We can "lift" K out of its model M, by setting \overline{K} := \{k\in M\mid M\models k\in K\}, similar for the other constants.

Now comes the exercise: Why is \overline{K} not a field of characteristic 1?

I could actually imagine that, even though the actual characteristic of \overline{K} should be 0 (if I am correct), this thing has some properties that one wants from a field of characteristic 1, so it might be interesting to look at (but if it is, there are certainly people who already do this).

And again, a talk for the best mathematics club of the multiverse. This time, I thought I'd pick a slightly more abstract topic, namely Kripke models. As always, this is not a scientific talk; it focuses on breadth rather than depth, and neglects formality in favour of comprehensibility. Its audience will mainly consist of young students who, for the most part, will probably not have any experience in mathematical logic.

Anyway, feel free to drop a comment if you notice any mistakes.

A simply-typed lambda calculus

The lambda calculus is the prototype of all functional programming languages. We will give a short informal introduction to it.

A lambda term is inductively defined as follows:

  • There are variables, as the ones known from programming languages. Every variable is a lambda-term. Thus, x is a lambda-term.
  • If a,b are lambda-terms, then the application ab is a lambda-term. It regards a as a function, and b as its first argument. A more usual notation would be a(b), but there are reasons why ab is preferred.
  • If a is a lambda-term and x is a variable, then the abstraction \lambda_xa is a lambda-term. The variable x will usually occur in a. The intuition is that \lambda_xa is the function that maps a value b to the term a[x:=b], where every occurrence of x in a is replaced by b. For example, if a=cxd, then (\lambda_x (cxd))b = cbd. Another notation, more common in some other parts of mathematics, would be x\mapsto a.

The idea of the untyped lambda calculus is that everything is a function. Functions map functions to functions. Lambda terms are the only objects one talks about.

A problem that arises from the above definition is that you might have a lambda term \lambda_x (a\lambda_x (bx)). If we just replaced x, we would have (\lambda_x (a\lambda_x (bx)))c \stackrel{?}{=} a\lambda_x bc, which is obviously not what we want. Actually, the inner x is bound by the inner \lambda, and will not be replaced. It is possible to define this formally, but for this talk we just forbid that this occurs.

Another problem arises when we substitute with a term that itself contains a variable. Say, we have ((\lambda_x\lambda_y (xay))(by))c \stackrel{?}{=} ((bc)ac). This is obviously not intended, and we forbid this to happen, instead of handling it formally.

A variable is bound in a term, when it is captured by some lambda. It is free otherwise. A term is closed when it does not contain free variables. For example, \lambda_x y contains y freely, while \lambda_x(xx) is a closed term.
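Substitution and one-step beta reduction as just described can be sketched in a few lines of Python (the tuple encoding of terms is mine; following the talk, we simply assume that no variable capture occurs, so substitution may be naive):

```python
# Lambda terms as tuples: ('var', x), ('app', f, a), ('lam', x, body).
# Following the talk, we assume substitution never captures variables,
# so subst can be naive.

def subst(t, x, b):
    """Replace free occurrences of variable x in t by the term b."""
    kind = t[0]
    if kind == 'var':
        return b if t[1] == x else t
    if kind == 'app':
        return ('app', subst(t[1], x, b), subst(t[2], x, b))
    # abstraction: the bound variable shadows x
    if t[1] == x:
        return t
    return ('lam', t[1], subst(t[2], x, b))

def beta(t):
    """One beta-reduction step at the root: (lambda_x a) b -> a[x := b]."""
    assert t[0] == 'app' and t[1][0] == 'lam'
    _, (_, x, body), arg = t
    return subst(body, x, arg)

# (lambda_x (c x d)) b  ->  c b d, i.e. the application tree ((c b) d):
c, d, b = ('var', 'c'), ('var', 'd'), ('var', 'b')
term = ('app', ('lam', 'x', ('app', ('app', c, ('var', 'x')), d)), b)
assert beta(term) == ('app', ('app', c, b), d)
```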

As in most functional programming languages, we can assign types to terms. In the simply-typed lambda calculus, there are atomic types A, B, C, and arrow types, which are of the form \tau\rightarrow\rho, where \tau,\rho are types. Variables are assigned a type. We write t^{A\rightarrow B} to denote that the term t has the type A\rightarrow B. The intuition is that this term is a function from objects of type A to objects of type B. We therefore define the typing rule (\lambda_{x^\tau} t^{\rho})^{\tau\rightarrow\rho}. Furthermore, we only allow applications when the types fit, that is, (a^{\tau\rightarrow\rho}b^\tau)^\rho. Of course, we can leave out type annotations when they are clear or not important. This is a very simple type system, but most other type systems are based on it.
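These two rules fit into a tiny type checker. A Python sketch (the encoding is my own: atomic types are strings, ('->', \tau, \rho) is the arrow type \tau\rightarrow\rho, and bound variables carry their annotation):

```python
# Types: an atomic type is a string like 'A'; ('->', t, r) is t -> r.
# Terms: ('var', x), ('app', f, a), ('lam', x, tau, body),
# where tau annotates the bound variable.

def typecheck(t, env):
    """Return the type of t under the variable typing env, or raise."""
    kind = t[0]
    if kind == 'var':
        return env[t[1]]
    if kind == 'lam':
        _, x, tau, body = t
        rho = typecheck(body, {**env, x: tau})
        return ('->', tau, rho)              # (lambda_{x^tau} t^rho)^{tau->rho}
    # application: the types must fit, (a^{tau->rho} b^tau)^rho
    _, f, a = t
    tf, ta = typecheck(f, env), typecheck(a, env)
    if tf[0] != '->' or tf[1] != ta:
        raise TypeError('argument type does not match')
    return tf[2]

# The identity lambda_{x^P} x has type P -> P:
assert typecheck(('lam', 'x', 'P', ('var', 'x')), {}) == ('->', 'P', 'P')
```

Deciding whether a type has a closed term at all (the question below) is a different and harder problem than checking a given term, which is why we need Kripke models.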

We now ask ourselves whether there is a closed term of type ((P\rightarrow Q)\rightarrow P)\rightarrow P. Well, of course there is none, otherwise we wouldn't ask. Kripke models are a way of answering this kind of question.

Kripke Models for the simply-typed lambda calculus

We are given a rooted tree (T,<), where T is the set of nodes, and <\subseteq T\times T is the "is parent" relation for this tree. The tree need not be finite.

Now, for every atomic type A, let a monotone function b_A:T\rightarrow\{0,1\} be given, that is, x<y\rightarrow b_A(x)\le b_A(y).

A triple (T, <, b) is called a Kripke model (well, at least a very simple form of a Kripke model). We now define a so-called forcing relation \Vdash\subseteq T\times\Theta (where \Theta is the set of simple types).

We define

  • u\Vdash A if and only if b_A(u)=1.
  • u\Vdash\rho\rightarrow\tau if and only if for all w\ge u, w\Vdash\rho implies w\Vdash\tau.

We say that (T, <, b)\models\tau ("(T, <, b) models \tau") if and only if \operatorname{root}(T)\Vdash\tau.

Lemma (Monotonicity): Let (T, <, b) be a Kripke model, and q,r\in T with q < r, and \rho be some type. If q\Vdash \rho, then r\Vdash \rho.

Proof: We prove this by structural induction on \rho. If \rho=A for some atomic type A, this holds by the monotonicity of b_A.

Now let \rho = \tau \rightarrow \gamma, and assume q\Vdash \tau \rightarrow \gamma, that is, for all t\ge q, t\Vdash \tau implies t\Vdash \gamma. But then the same holds in particular for all t\ge r, hence r\Vdash \tau \rightarrow \gamma. ∎


Lemma (Correctness): Let a closed simply-typed lambda term t^\rho exist. Then every model \frak M satisfies {\frak M}\models\rho.

Proof: We slightly generalize the Lemma. We actually prove: If there exists a term t^\tau with free variables u_1^{\rho_1}, \ldots, u_n^{\rho_n}, then for every {\frak M}=(T, <, b), if we have {\frak M}\models\rho_k for all k\le n, we also have {\frak M} \models \tau. We prove this by structural induction.

Our induction hypothesis is that this entailment holds for all Kripke models. Notice that if p\in T, then the subtree (\{q\in T\mid q\ge p\}, <, b) rooted at p is (trivially) also a Kripke model.

Obviously, it holds for variables t=u_1^{\rho_1}.

Assume t=q^{\vartheta \rightarrow \tau}r^\vartheta, where q has the free variables u_1^{\rho_1},\ldots,u_m^{\rho_m} and r has the free variables u_{m+1}^{\rho_{m+1}},\ldots,u_n^{\rho_n} (notice that both sets of free variables need not be disjoint). Assume that for all k\le n we have {\frak M} \models \rho_k. Then by induction, we have {\frak M} \models \vartheta \rightarrow \tau and {\frak M} \models \vartheta. But {\frak M} \models \vartheta \rightarrow \tau means that {\frak M} \models \vartheta implies {\frak M} \models \tau. Hence, the claim follows.

Assume \tau = \gamma \rightarrow \vartheta, and t=\lambda_{x^\gamma}r^\vartheta. Besides x, let r have the free variables u_1^{\rho_1},\ldots,u_n^{\rho_n}, and let again {\frak M} \models \rho_k for all k. We have to show that \operatorname{root}(T)\Vdash \gamma\rightarrow\vartheta, that is, that q\Vdash \gamma implies q\Vdash \vartheta for all q\in T. So let q\Vdash \gamma. By our monotonicity lemma, {\frak M}\models \rho_k implies q\Vdash \rho_k. Our induction hypothesis for r holds for all Kripke models, in particular for the subtree model (\{t\in T\mid t\ge q\}, <, b) rooted at q. Hence, q\Vdash \vartheta. Therefore, \operatorname{root}(T)\Vdash \gamma\rightarrow\vartheta, that is, {\frak M}\models \tau. ∎


Now that we have shown correctness, we can answer our above question.

Lemma: There is no closed simply-typed lambda term of type ((P\rightarrow Q)\rightarrow P)\rightarrow P.

Proof: Consider the tree of words matching the regular expression 00*1?, ordered by the prefix relation. Define b_P(0\ldots 0) = b_Q(0\ldots 0) = 0, b_Q(0\ldots 01) = 0 and b_P(0\ldots01)=1. That is:

     P→1 Q→0 001   000 P→0 Q→0
                \ /
    P→1 Q→0 01   00 P→0 Q→0
              \ /
               0 P→0 Q→0

Suppose that for some node u we have u\Vdash (P\rightarrow Q)\rightarrow P. That means that for all w\ge u, w\Vdash P\rightarrow Q implies w\Vdash P. Now, for every u=0\ldots 0, there is some u'=0\ldots 01>0\ldots 0 such that u'\Vdash P but u'\nVdash Q, and therefore u'\nVdash P\rightarrow Q, and therefore u\nVdash P\rightarrow Q. So for all u, u\nVdash P\rightarrow Q, and therefore, for all u, u\Vdash (P\rightarrow Q)\rightarrow P vacuously, regardless of whether or not u\Vdash P. But if u=0\ldots 0, we have u\nVdash P, so u\Vdash (P\rightarrow Q)\rightarrow P does not imply u\Vdash P. Therefore, our Kripke model does not force ((P\rightarrow Q)\rightarrow P)\rightarrow P. According to our correctness lemma, there cannot be a closed lambda term of this type.
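This argument can be checked mechanically. The following Python sketch (the encoding is my own, not part of the talk) implements the forcing relation on a two-node fragment of this model, the root "0" and its child "01"; these two nodes form a Kripke model of their own, and it already refutes the Peirce formula:

```python
# Kripke model with two nodes: root 'u' (the word "0") and child 'w' ("01").
# b_P(u)=0, b_P(w)=1, b_Q = 0 everywhere, as in the text.
above = {'u': ['u', 'w'], 'w': ['w']}   # all nodes >= the given node
b = {('P', 'u'): 0, ('P', 'w'): 1, ('Q', 'u'): 0, ('Q', 'w'): 0}

def forces(u, tau):
    if isinstance(tau, str):                # atomic type
        return b[(tau, u)] == 1
    _, rho, sigma = tau                     # tau = rho -> sigma
    return all(not forces(w, rho) or forces(w, sigma)
               for w in above[u])

arrow = lambda a, c: ('->', a, c)
peirce = arrow(arrow(arrow('P', 'Q'), 'P'), 'P')

# Every node vacuously forces (P->Q)->P, but the root does not force P,
# so the root does not force the Peirce formula.
assert forces('u', arrow(arrow('P', 'Q'), 'P'))
assert not forces('u', peirce)
```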


This example is called the Peirce formula. You might wonder why this is interesting. In fact, it is just an introductory example. However, if you know functional programming languages, especially Scheme, you might be familiar with the concept of continuations. The type of call/cc is the Peirce formula. In Haskell, there is also a callCC function with a similar type. However, this function has a slightly different type: it uses the continuation translation of P (see the type constructor Cont). The reason is that Haskell is pure, and in a pure setting, it is not possible to derive the Peirce formula, which can be shown in a similar way as we just did.

Intuitionistic Propositional Logic

So far, our lambda terms only have atomic types and arrow types. We add a few new type constructors:

  • If \tau and \rho are types, then so is \tau\wedge\rho, the product type (or pair type).
  • If \tau and \rho are types, then so is \tau\vee\rho, the sum type (it corresponds to tagged unions in programming languages like C++).
  • There is a distinguished type \bot, the empty type or falsum. It must be impossible to create a closed term with this type.

There are additional axiomatic lambda terms for all types \rho,\tau,\vartheta:

  • Product type introduction \wedge_+^{\rho \rightarrow \tau \rightarrow \rho\wedge\tau}: This term takes two arguments and returns their pair.
  • Product type destruction \wedge_-^{\rho \wedge \tau \rightarrow (\rho \rightarrow \tau \rightarrow \vartheta) \rightarrow \vartheta}: This term takes a term of the product type and a term that maps the two components of this pair to some other type, and returns a term of that type. For example, the projection to the second coordinate of a pair is \pi_2=(\lambda_{x^{\rho \wedge \tau}}(\wedge_- x(\lambda_{a^\rho}\lambda_{b^\tau}b)))^{\rho \wedge \tau \rightarrow\tau}.
  • Sum type introduction \vee_{+,l}^{\rho\rightarrow\tau\vee\rho} and \vee_{+,r}^{\tau\rightarrow\tau\vee\rho}.
  • Sum type destruction \vee_-^{\rho\vee\tau \rightarrow (\rho\rightarrow\vartheta) \rightarrow (\tau\rightarrow\vartheta) \rightarrow \vartheta}: This corresponds to if … then … else ….
  • Ex falso quodlibet: \bot_-^{\bot\rightarrow\rho}.

Except for the empty type, all of these are concepts that occur similarly in most modern programming languages, maybe written in a somewhat less intuitive way. We can give terms that do simple things like

(\lambda_{t^{(\rho\wedge\tau)\vee\vartheta}} (\vee_- t (\lambda_{p^{\rho\wedge\tau}} (\wedge_- p (\lambda_{y^\rho}\lambda_{z^\tau} (\wedge_+ (\vee_{+,r}y) (\vee_{+,r}z))))) (\lambda_{u^\vartheta} (\wedge_+ (\vee_{+,l}u) (\vee_{+,l}u)))))^{(\rho\wedge\tau)\vee\vartheta \rightarrow (\rho\vee\vartheta) \wedge (\tau\vee\vartheta)}

From that perspective, the empty type makes no sense: it has no computational content. However, we can just as well interpret these types as predicates: the atomic types become atomic predicates, \rho\vee\tau will mean \rho "or" \tau, \rho\rightarrow\tau will mean \rho "implies" \tau, and \rho\wedge\tau will mean \rho "and" \tau. In this interpretation, \bot is the predicate that is always false, and \rho\rightarrow\bot means "not \rho". We therefore define the abbreviation \lnot\rho := \rho\rightarrow\bot. In this interpretation, closed lambda terms become formal proofs. For example, we can prove the introduction of contraposition:

(\lambda_{x^{\rho\rightarrow\tau}} \lambda_{y^{\lnot\tau}} \lambda_{z^\rho} (y(xz))^\bot)^{(\rho\rightarrow\tau) \rightarrow \lnot\tau \rightarrow \lnot\rho}

However, the other direction cannot be derived. We can show this with a slightly extended definition of Kripke Models: We have to add interpretations for \bot, \wedge and \vee:

  • u\nVdash\bot for all u\in T.
  • u\Vdash \rho\wedge\tau if and only if u\Vdash\rho and u\Vdash\tau.
  • u\Vdash \rho\vee\tau if and only if u\Vdash\rho or u\Vdash\tau.

The above proof of correctness can be extended, but it is more technical, so we will not do this here. In this system, the Peirce formula is still not derivable, as can be shown with the same Kripke model as above. Likewise, in the same Kripke model, the reverse direction of contraposition and the law of excluded middle do not hold (exercise).
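The extended clauses can be implemented directly. A small Python sketch (encoding mine) evaluates forcing for the additional connectives on a two-node model, a root where P is false with one child where P is true, and confirms that the law of excluded middle P\vee\lnot P fails at the root:

```python
# Two-node model: root 'u', child 'w', P true only at 'w'.
# Formulas: atoms are strings; ('->',a,b), ('and',a,b), ('or',a,b), 'bot'.
above = {'u': ['u', 'w'], 'w': ['w']}
b = {('P', 'u'): 0, ('P', 'w'): 1}

def forces(u, tau):
    if tau == 'bot':
        return False                        # no node forces falsum
    if isinstance(tau, str):
        return b[(tau, u)] == 1
    op, rho, sigma = tau
    if op == 'and':
        return forces(u, rho) and forces(u, sigma)
    if op == 'or':
        return forces(u, rho) or forces(u, sigma)
    return all(not forces(w, rho) or forces(w, sigma)   # op == '->'
               for w in above[u])

lem = ('or', 'P', ('->', 'P', 'bot'))
# The root forces neither P (false there) nor not-P (P holds above it):
assert not forces('u', lem)
assert forces('w', lem)
```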

First-order Predicate Logic

We now extend our type system with objects and relations. We define a special type \omega of objects. A relation is a "type with object arguments". For example, the set theoretic \in-relation is a relation with two object arguments. Remember, we are in metatheory, the object type is not specified and must be defined by axioms.

To get the full mathematical strength, we need to introduce quantifiers:

  • \forall_x \tau means "for all objects x^\omega, \tau holds".
  • \exists_x \tau means "there exists an object x^\omega, such that \tau holds".

The type \forall_x \tau is similar to \omega\rightarrow\tau, and \exists_x \tau is similar to \omega\wedge \tau, except that \tau may depend on x.

Introducing a universal quantifier is therefore done by abstraction (\lambda_{x^\omega}\tau)^{\forall_x \tau}, and it is eliminated by application (t^{\forall_x \tau}u^\omega)^{\tau[x:=u]}.

The introduction of an existential quantifier reflects sum introduction: \exists_+^{\forall_x(\tau \rightarrow \exists_x\tau)}. Its elimination is \exists_-^{\exists_x\tau \rightarrow \forall_x(\tau \rightarrow \rho) \rightarrow \rho}, where x must not occur freely in \rho.

We now have the full expressivity of first-order predicate logic. For example, we can prove \exists_x\tau \rightarrow \lnot \forall_x \lnot \tau:

(\lambda_{u^{\exists_x\tau}} \lambda_{v^{\forall_x \lnot\tau}} (\exists_- u\,v)^\bot)^{\exists_x\tau \rightarrow \lnot \forall_x \lnot \tau}
However, as before, the type system we just defined only allows for intuitionistic logic. The advantage of intuitionistic logic is that if you prove the existence of something, you get an algorithm that calculates it. The disadvantage is that things that cannot be calculated cannot be shown to exist. The following is the so-called Drinker formula (or Drinker problem, or sometimes Drinker paradox):

\exists_x (Dx \rightarrow \forall_y Dy)

It is not derivable in intuitionistic logic. We can show this using a Kripke model, but we first have to extend our semantics:

Firstly, every node u\in T gets a domain D_u of objects, such that t\le u\rightarrow D_t\subseteq D_u. The \Vdash relation now depends on an assignment \eta, which is a function from object variables x^\omega to elements of the domain of the corresponding node. For every relation R with n arguments, b_R(u) now becomes a function b_R(u) : D_u^n \rightarrow \{0,1\}. This corresponds to our former function in the sense that predicates become relations with zero arguments. We then say u\Vdash R\,x_1\,\ldots\,x_n[\eta] if and only if b_R(u)(\eta(x_1),\ldots,\eta(x_n)) = 1. For \circ\in\{\wedge, \vee, \rightarrow\}, the definition is similar to our former definition, with the assignment carried along; for example, u\Vdash (\rho\wedge\tau)[\eta] if and only if u\Vdash \rho[\eta] and u\Vdash \tau[\eta].

For the interpretation of the quantifiers, we must define \eta[x:=o], which is \eta, except for the variable x, where it is o:

\eta[x:=o](y) = \begin{cases} o & \mbox{for } y=x \\ \eta(y) & \mbox{otherwise} \end{cases}

  • u\Vdash (\exists_x A)[\eta] if and only if there exists an a\in D_u such that u\Vdash A[\eta[x:=a]].
  • u\Vdash (\forall_x A)[\eta] if and only if for all w\ge u and all a\in D_w, w\Vdash A[\eta[x:=a]].

We again omit the correctness proof, but we show that the Drinker formula is not derivable. We set T=\mathbb{N} with the canonical ordering, D_u = \{ k \mid k \le u \}, b_D(u)(u) = 0, and b_D(u)(k) = 1 for k < u. (This model is taken from here.)

3 D={0,1,2,3} b={0→1,1→1,2→1,3→0}
2 D={0,1,2} b={0→1,1→1,2→0}
1 D={0,1} b={0→1,1→0}
0 D={0} b={0→0}

We set \eta(x)=x. Obviously, if 0\Vdash \exists_x (Dx \rightarrow \forall_y Dy)[\eta], then 0\Vdash (D0\rightarrow \forall_y Dy)[\eta], since D_0=\{0\}. However, 1\Vdash D0[\eta], but 1\nVdash \forall_y Dy[\eta] (since b_D(1)(1)=0). Therefore, 0\nVdash \exists_x (Dx \rightarrow \forall_y Dy)[\eta].
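The forcing clauses above can be checked mechanically. The following is a small sketch of my own (not part of the original script) that evaluates them on a finite truncation \{0,\ldots,N\} of the counter-model; the refuting witnesses only involve the first few nodes, so the finite prefix already shows that node 0 does not force the Drinker formula:

```python
# Sketch: forcing in the finite truncation {0,...,N} of the counter-model
# T = N, D_u = {0,...,u}, b_D(u)(u) = 0 and b_D(u)(k) = 1 for k < u.

N = 5
nodes = range(N + 1)

def forces_D(u, k):
    """Atomic forcing: u forces Dk iff b_D(u)(k) = 1, i.e. k < u."""
    return k < u

def forces_forall_D(u):
    """u forces (forall_y Dy): every w >= u forces Dk for all k in D_w."""
    return all(forces_D(w, k)
               for w in nodes if w >= u
               for k in range(w + 1))

def forces_impl(u, a):
    """u forces (Da -> forall_y Dy): every w >= u that forces Da
    must force the conclusion as well."""
    return all(forces_forall_D(w)
               for w in nodes if w >= u and forces_D(w, a))

def forces_drinker(u):
    """u forces exists_x (Dx -> forall_y Dy): some witness a in D_u works."""
    return any(forces_impl(u, a) for a in range(u + 1))

print(forces_drinker(0))  # False: node 0 does not force the Drinker formula
```

No node ever forces \forall_y Dy, because every node w fails its own element w; this is exactly why the witness a = 0 is refuted at node 1.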

Similar Concepts

Beth Models

A similar concept for first-order logic that you might encounter is that of Beth models. Their definition is slightly different, but their properties are similar. See here for an example.

Presheaf Models

There is a generalization of Kripke models, the so-called presheaf models. Here, the domains are given by a presheaf D:C^{\operatorname{op}} \rightarrow \operatorname{Set} on a category C.

For every relation R with n arguments and every object o from C we have an interpretation b_R(o)\subseteq D(o)^n such that if there is a morphism f:o\rightarrow u in C, we have b_R(u)\subseteq b_R(o). An assignment is, as before, a map from variable symbols into objects. The definition is now similar to the one for Kripke models: For relations, we define o\Vdash Rx_1\ldots x_n[\eta] :\Leftrightarrow (\eta(x_1),\ldots,\eta(x_n)) \in b_R(o). For implications, we define o\Vdash (\tau\rightarrow\rho)[\eta] if and only if for every morphism f:o\rightarrow u, u\Vdash\tau[\eta] implies u\Vdash\rho[\eta].

It is also possible to define this (even more generally) in terms of morphisms between classical models, instead of explicitly giving interpretations for all relations and domains. You might also find a more intricate version of implication for arrow types, one that includes computational content, as an interpretation of type theory.

A Kripke model is a special case where C is a poset.

Semantic Forcing in Set Theory

The following part is kept very short and is only for older audiences. Paul Joseph Cohen proved that the continuum hypothesis is independent of Zermelo-Fraenkel set theory with choice (ZFC). He used a concept inspired by Kripke models. We will only look at the parts related to Kripke models here. For an explanation of the proof see here, here and here.

This is a relative consistency result, so we can assume that there is a classical transitive countable model M of ZFC. (It can be proved that the existence of such a set is unprovable, by Gödel's second incompleteness theorem.)

That is, M is a set for which the ZFC axioms hold when relativizing all quantifiers to it (that is, replacing \forall_x with \forall_{x\in M}, etc.). Transitivity means that x\in M\rightarrow y\in x\rightarrow y\in M. The existence of such a model can be proved by the Löwenheim-Skolem theorem and the Mostowski collapse lemma.

We write M\models\varphi if the formula \varphi holds in M when relativizing all quantifiers to it. Notice that we already used the character \models for Kripke models. It is generally used for semantic entailment. M is a so-called classical model, while Kripke models are intuitionistic models.

A forcing poset is a triple (\mathbb{P},\le,1), completely defined in M, where \le is a preorder on \mathbb{P} with largest element 1, satisfying the splitting condition: for all p\in\mathbb{P} there exist q,r\le p such that there is no s\in\mathbb{P} with s\le q and s\le r. That is, q and r form no diamond below p; the ordering is similar to a tree.

A \mathbb{P}-name is, roughly speaking, a set of pairs, each consisting of a \mathbb{P}-name and an element of \mathbb{P} attached to it. That is, for example, \emptyset, \{(\emptyset,p)\}, or \{(\{(\emptyset,p)\},q)\} for p,q\in \mathbb{P}.

We define the valuation map \operatorname{val}(u,G) = \{\operatorname{val}(v, G)\mid \exists_{p\in G}(v,p)\in u\}.
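As a quick sanity check, the recursion behind \operatorname{val} can be sketched in Python. This is a toy encoding of my own (not from the original text): a \mathbb{P}-name is a frozenset of (name, condition) pairs, and conditions are plain strings.

```python
def val(u, G):
    """val(u, G) = { val(v, G) | (v, p) in u for some p in G }."""
    return frozenset(val(v, G) for (v, p) in u if p in G)

# The empty set is a P-name; the name below denotes a set that
# contains the empty set exactly when the condition "p" is in G.
empty_name = frozenset()
one_name = frozenset({(empty_name, "p")})

print(val(one_name, {"p"}))  # frozenset({frozenset()}), i.e. the set {0}
print(val(one_name, {"q"}))  # frozenset(), i.e. the empty set
```

The same name thus evaluates to different sets depending on the generic filter G, which is the whole point of the construction.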

We now define

M[G] = \{\operatorname{val}(u,G)\mid u\in M^{(\mathbb{P})}\}

where M^{(\mathbb{P})} is the set of \mathbb{P}-names in M. We now define a forcing relation on elements of \mathbb{P}.

  • p\Vdash a\in b :\Leftrightarrow \forall_{q\le p\in \mathbb{P}} \exists_{r\le q\in\mathbb{P}, c\in M^{(\mathbb{P})}, s\ge r\in\mathbb{P}}((c,s)\in b\wedge r\Vdash a=c).
  • p\Vdash a=b :\Leftrightarrow \forall_{q\le p\in \mathbb{P},c\in M^{(\mathbb{P})}}((q\Vdash c\in a)\Leftrightarrow (q\Vdash c\in b)).
  • p\Vdash \lnot\varphi :\Leftrightarrow \lnot\exists_{q\le p\in\mathbb{P}} q\Vdash\varphi
  • p\Vdash \varphi\wedge\psi :\Leftrightarrow (p\Vdash\varphi) \wedge (p\Vdash\psi).
  • p\Vdash \forall_x\varphi(x) :\Leftrightarrow \forall_{x \in M^{(\mathbb{P})}} p\Vdash\varphi(x).

One can show that if G satisfies some conditions, M[G]\models\varphi holds if and only if there is a p\in G such that p\Vdash\varphi.


I just watched the new film "Star Trek Beyond". I expected the worst, and I was still disappointed. It is amazing: it is as if they actively tried to make this film bad in as many ways as possible. The film feels like a B-movie with too much budget for CGI.

I will black out spoilers in this review.

Ok, honestly, I am a Trekkie, and it is hard to really satisfy me. But this film is really beyond my worst expectations (pun intended). This film is not only bad by Star Trek standards. If you really want to watch it … I'd recommend waiting until it is shown on TV. It's not worth the money.

I really have to force myself to say anything good about it. But well, let's start with the few good things, just to be fair: They did introduce the universal translator, which was, consistent with the story, only a prototype at that time. Furthermore, I think Zachary Quinto does a very good job playing Spock, and Karl Urban does a very good job playing McCoy; both show that they are very good actors. The Xindi are mentioned, there are some subtle references to the original series, and a photo of the old crew is shown. And apparently they spent a lot of money on special effects, which are mindless for the largest part, but beautiful.

The story of the film is trash. I mean, seriously … a story based on a life drinker?

There is no real motivation behind the "bad guy's" intentions. I have read that this film was all about questioning the goals of Starfleet, and that he was supposed to have valid points. I don't see any valid point, unless you count his Hitler-style position on war. He just wants to kill. But then again, why does he use this inefficient kind of weapon? His "bee armada" seems pretty strong, and there are other gases that he could put into the vent of this (unrealistic) city.

He is not interesting by any means. The acting is miserable. And except for the major "plot twist" that he is a human, there is nothing surprising about him. By the way, he said that most of his crew died. So how did he get his "bee army" and all the pilots in it? And why is he a human who still talks in that alien language by default?

Nothing about the main storyline is logical, consistent, or interesting. When the alien that lured them into the nebula said that Krall wanted to "save you from yourselves", I hoped that this might be a story about the temporal cold war, because there was some artifact that could be used as a weapon. There would have been so many possibilities for consistent stories that would somehow satisfy me as a Trekkie without requiring major changes to the rest of the film.

On the other hand, even if you are willing to accept the mindless story with all its inconsistencies, they did not make anything out of it. The bad guy has no personality. The fight scenes are boring. The dialogues are predictable.

The side stories of the movie are not really developed either. The struggles Kirk has in the beginning are completely irrelevant to the rest of the movie. Spock's struggle is shown a few times, but I attribute that to Zachary Quinto being such an excellent actor.

And what the fuck were they thinking when they decided that some metal music sent to the bee army would make them instantly explode? Was this intended as a reference to Mars Attacks?

So I guess that is what they meant when calling the film "Star Trek Beyond": we are beyond Star Trek. We have to face the fact that we must live without it now.

My recommendation to Paramount: maybe ask Terry Gilliam, or at least Uwe Boll, to direct the next Star Trek film.

It is a well-known fact that ZFC\vdash_i TND, that is, ZFC proves tertium non datur even with intuitionistic derivation rules:

Assume we wanted to prove \neg Q \vee Q. We can then, by the schema of specification, define the sets A := \{x\in 2\mid x = 0\vee Q\} and B := \{x\in 2\mid x = 1\vee Q\}. We constructively know that both sets are non-empty: at least 0\in A and 1\in B. Therefore, \{A, B\} is a set of non-empty sets, and therefore, by the axiom of choice, there is a function g such that g(A)\in A and g(B)\in B; in particular, g(A)=0\vee g(A)=1 and g(B)=0\vee g(B)=1. Trivially, g(A)=1\rightarrow Q, hence g(A)=0\vee Q. Similarly, g(B)=1\vee Q. Hence, g(A)\neq g(B)\vee Q. On the other hand, Q\rightarrow A=B=\{0,1\}, and therefore Q\rightarrow g(A)=g(B). Contraposition (which is constructive in this direction) yields g(A)\neq g(B)\rightarrow \neg Q. Hence, \neg Q\vee Q. This is essentially Diaconescu's theorem.

Now, in my opinion, the problem here is not the axiom of choice. Let us recall what the axiom of choice says: If we proved that every set in a collection has an element, then we can find an element for every such set. I find this principle very constructive: If we can prove that something exists, then we can find it. As long as we cannot prove the existence of things that we cannot find, the axiom of choice should not do any harm to constructivity, as far as I see – correct me if I am wrong.

Of course, if you can prove the existence of things that you cannot find, the axiom of choice might make the situation worse. And in my opinion, in the case of ZFC, in this proof, the schema of specification is the actual culprit. Which is why I wonder why even some "constructive" set theories have a similar axiom scheme. For some reason, the impression that "smaller" sets are always simpler than "larger" sets is prevalent, and from that perspective, separation is not a problem. I wonder where this comes from.

Even without any deeper knowledge of recursion theory or logic, it should be clear that while the set of natural numbers \mathbb{N} is rather simple, there are subsets which are more complicated. Of course, you can encode the halting problem into a subset of \mathbb{N}, but you don't have to go that far: Consider the set of prime numbers. It is intuitively more complicated than \mathbb{N}. (This somehow reminds me of the concept of entropy, considering the "amount of information" encoded in a set. Not sure whether there is a formal correspondence, though.)

An obvious limitation one could impose on comprehension-like schemes would be that the propositions used need to be provably decidable. This would break the above proof, in the sense that Q would have to be decidable from the start. And it makes more sense in a constructive setting: if the defining proposition is decidable, the resulting subset is still decidable. I am not sure whether there exists a constructive set theory with such an axiom, though.

Just my two cents. Feel free to enlighten me if I am wrong.

A stone engraved with the word "Hope", styled in a way that it reads "adolf" when turned upside down.
(Source: HugeLOL)

My God, you stop caring about politics for a few months, and then this. What the fuck? First the Bild (!!!) declares its solidarity with refugees and all new arrivals get showered with teddy bears, then some "assaults" happen in Cologne (nobody seems to know what exactly happened, and somehow nobody really seems to care either), and already the mood flips from exuberant welcome culture into its opposite? What kind of opinion-forming is that? How is anyone supposed to make sustainable policy that way? How is anyone supposed to do anything that way?

Why, in this country, am I actually forced to choose between throwing teddy bears at refugees and chanting "Wir sind das Volk"?! Do we have no normal people left? While teddy-bear Germany is certainly the more likable of the two to me, both sides somehow fail to see that we are not dealing with a gray mass, but with quite a lot of individuals who, as a rule, have a reason to be here. Why can't I simply regard it as something neutral, as a thing that happens and that now has to be dealt with somehow? And then be in favor of handling it with the composure it calls for?

It was clear that sooner or later there would be a scandal somewhere, somehow. And that reports of crimes committed by foreigners would skyrocket, if only because such headlines sell well in times like these, is nothing really new or unprecedented either. For that matter, this is not even our first refugee crisis!

You do not handle things with composure by throwing teddy bears at people and then housing them in tent camps in freezing cold. But it is just as uncomposed to downplay or relativize crimes that individuals have committed. Nor is it composed to crawl up Turkey's ass so that they solve our problem for us.

Composed would have been to hand everyone who can read a translation of our Grundgesetz, and to tell them that our peaceful coexistence here, including with Islam, is based on it, that they have to abide by it, and that if they do not, we will see ourselves forced to exclude them again. Composed would have been to give the other European states a hefty kick in the ass so that they do something too, if necessary by threatening to leave the EU. Composed would have been to hold the arms exporters, who bear a direct share of the blame, accountable. But composed would also have been to emphasize much more that the investigations in Cologne are ongoing and may well result in deportations.

Instead, we are now debating armed border protection with the AfD. A G7 state is seriously discussing using armed force against defenseless people, because we are afraid of their (lack of) culture, and even more afraid for our money.

The fear for our money already showed itself in the Greek crisis. Where was this fear when we fished banks out of the crisis with tax money? The Occupy movements in Germany were all puny and not worth mentioning. Why are so many refugee shelters burning today, while no bank ever burned? Why did you never see a horde of Hartz-IV Ossis chanting in front of banks that they are the people? Probably because this kind of person is deeply cowardly and would never pick a fight with anyone who might actually have the power to fight back. Besides, that would almost be left-wing again, and being left-wing is socially frowned upon.

Apparently the AfD also demands an exit from the euro and a new currency union with more fiscally sound nations; I heard that in some interview, something about Sweden or so was being babbled. Honestly, I am too lazy to look up whether that is actually true; the AfD constantly clucks out some garbage, and I do not have to care about all of it. But if so, fine, then please also without the new federal states, and ideally without any of the states that depend heavily on the Länderfinanzausgleich. Seriously, if their antisocial de-solidarization politics really bears fruit in this direction, I will de-solidarize as well and vote for the Bayernpartei; then at least I want Bavarian money to stay in Bavaria, without a single guilder (?) of it ending up in Saxony!

To feed the fear for our culture, the old and tiresome question of whether Islam belongs to Germany is warmed up again, without ever precisely defining what "belonging to Germany" is even supposed to mean. We, or rather our (grand)parents, invited masses of guest workers to work here. These people helped build up our country and are now here in the third and fourth generation. Even if Islam does not belong to Germany by some obscure historical definition, we ourselves have made it a part of it. And now it is here. You can consider that good or bad, but you have to accept it.

What are we actually afraid of? Some Reichsbürger, citizens according to the RuStAG of 1913, are afraid that our noble blonde women might be defiled by non-German rapists. (Tip: never click through their forums, it really puts you in a bad mood.) And even among the non-insane, a certain xenophobia is probably always present. But I do not believe it is that unreflected in most people. I personally know too many people, including from less educated backgrounds, who would immediately confirm that they do not believe a large part of the war refugees consists of sexual predators, for me to believe that this is really a majority opinion.

Jim Gaffigan Quote: "See that guy over there with an almost identical religion? I wanna kill him because my God is all about love."

So why are we so afraid of Islam? Are we not actually a secular state with mechanisms to protect us from extreme religious influences? Presumably that is exactly the point: we all know how much influence the state churches have on our affairs, and how often the state bows to them. Not least, we have an openly religious governing party. Imagine the Muslims in this country one day organizing themselves into a party that pulls a CSU and campaigns with high clergy. In the end, it might even succeed. And we could do nothing about it, because religious freedom in our country goes so far that even religious parties are tolerated. And what would such a party demand? Nobody here is stupid enough to want to introduce Sharia law (the kebab shop owners would certainly not go along with that either). Presumably the strengthening of the family as the bedrock of our society, the non-recognition of homosexual partnerships, a strengthening of the rights of religion, a moderately social policy: in other words, all the sludge we already know from the Christian Democrats. Yes, I do find that quite worrying, since in my opinion we need yet another right-wing conservative party like we need shit on our shoe.

State-controlled Islamic religious education at schools has already been proposed. I consider this idea fundamentally sensible, because this way one can, in theory, forestall the extremism of the half-learned. Now, I always experienced my Protestant religious education as quite open-minded (probably not least because half of my class was atheist anyway), but stories about the (primarily Catholic) religious education in the less progressive parts of Bavaria let you imagine just how horribly wrong something like this can go.

The right reaction, in any case, would be to practice more laicism. To make religion what it is: a private matter that must be respected, but that, in the interest of peaceful coexistence, may have little political influence. Then we would not need to fear the Islamic influence on our culture either.

And while I am on the subject of school education: one thing that, in my opinion, urgently belongs in schools in the future, especially for discussions like these, is philosophy of science. And not just in passing, but as a proper school subject. Everyone should at least have learned where statistics come from, how to interpret them, how to work with sources, and, especially topical right now, what a confirmation bias is. That would be important in so many areas. Why is this not a school subject?

I can remember, it was probably in 2001, when two guys, probably bankers or business administration students ("BWL-Studenten"), came to my school and showed us the glorious new currency. If I remember correctly, they didn't know what ECU stood for (something everybody who ever played Dinoropa should know); they generally didn't know much, it appeared. It was the last year Germany used the DM; it switched to the Euro in 2002, but they wanted to do a bit of propaganda for it beforehand, I guess.

These guys had a "special" for us: we could look at and feel one of the new 5 Euro banknotes. They actually had enough banknotes to give one to everyone (we unfortunately had to give them back in the end, but whatever). Now, these banknotes were the old 5 Euro banknotes, which you still sometimes see (as you may have noticed, there are new banknotes now).

Old 5€ Banknote

When I first saw one of these, I immediately noticed the lack of accessibility of that banknote: no tangible indication of the banknote's value. And of course, at that time, I was still naïve and thought that if somebody was allowed to speak at school, they should be familiar with their topic. Well, these two guys weren't. They weren't even aware that the DM banknotes of that time had such a thing. I had to show them so that they at least believed me, but they did not know why the Euro lacked such an essential feature.

In fact, the newest DM banknotes had it: it was not perfect (well, I guess it was one of the first times this had been tried), but it was there (see the dots and bars the red arrow points at):

Old 100 DM banknote with tangible value indication (Source: Wikipedia)

By the way, the old 10€ banknotes were the same (and the – at the time of writing – current 50€ banknotes still have no tangible indication):

Old 10€ Banknote

Well, personally I think that the Euro designers just forgot about it, even though this page says something else. Blind people, as far as I know, usually use templates.

However, as you might have noticed (but probably haven't), we have new Euro banknotes now. And these new banknotes have a surprisingly good tangible striation at their side, which is interrupted once for the 10€, and twice for the 20€ banknote:

Striations of the new 5€, 10€ and 20€ banknotes

Even I can distinguish them without looking. The new 5€ banknote came out in 2013, which means that it only took them eleven years to fix their bug. Congrats!