This is the first of four parts of my posts about the irrefutability of AC in ZF.

It will be about the basic logical background.

Canonically, an alphabet is a set of symbols. It is not necessarily finite. A string is a finite sequence of symbols. A language is a set of strings.

Let us fix the alphabet of logical symbols {\mathcal S}:=\{\forall,\rightarrow,\wedge,\bot\}. Furthermore, we define the sets REL_n:=\{R_i^n|i\in\mathbb{N}_0\} of n-ary relation symbols for n\in\mathbb{N}_0, and the sets FUN_n:=\{f_i^n|i\in\mathbb{N}_0\} of n-ary function symbols, and the set VAR:=\{x_i|i\in\mathbb{N}_0\} of variable symbols. Of course, they will stand for relations, functions and variables, but so far, they are only symbols with no meaning.

We define the set TER, the language of terms, as the smallest superset of VAR satisfying the closure condition t_1,\ldots,t_n\in TER\Rightarrow f_i^n t_1\ldots t_n\in TER for all i,n. This resembles terms in Polish notation, but still, so far TER is just a set of strings with no specified meaning.

We define the set {\mathcal L}, the language of formulae, as the smallest set containing all strings of the form R_i^n t_1\ldots t_n for t_1,\ldots,t_n\in TER for all i,n, and satisfying the closure conditions
  • \bot\in{\mathcal L}
  • A\in{\mathcal L}, x_i\in VAR \Rightarrow \forall_{x_i} A\in{\mathcal L}
  • A\in{\mathcal L}, B\in{\mathcal L} \Rightarrow A\rightarrow B\in{\mathcal L}
  • A\in{\mathcal L}, B\in{\mathcal L} \Rightarrow A\wedge B\in{\mathcal L}
\bot stands for the falsum, the proposition which is always false.

This intuitively resembles logical formulae built up from relations between terms and their conjunctions, implications and quantifications. In most theories, we have term symbols different from f_i^n, and we may safely use these instead of f_i^n, same for relation symbols, same for variable symbols. Some binary function symbols and relation symbols use infix notation, and we may as well allow these and brackets to change the precedence as usual. So for example, instead of \forall_{x_0}\forall_{x_1}\forall_{x_2} R_0^2 f_0^2 f_0^2 x_0 x_1 x_2 f_0^2 x_0 f_0^2 x_1 x_2 we may write \forall_x\forall_y\forall_z (x+y)+z = x+(y+z), as it is clear how to convert this formula into our defined notation.

Furthermore, we define \lnot A, "not A", to be A\rightarrow\bot, and A\vee B, "A or B", to be \lnot (\lnot A\wedge \lnot B), and \exists_x A, "there exists an x satisfying A", to be \lnot\forall_x\lnot A.

A set of such formulae is called a (first-order) theory. Intuitively, a formula A follows from a theory T if all structures satisfying the formulae in T also satisfy A. Make sure that you understand this intuition; it is crucial for the further definitions. It motivates defining what exactly a structure is, and what it means for a structure to satisfy a formula.

We define the concept of an interpretation \mathcal M, which maps function symbols and relation symbols to functions and relations, and fixes the range of the quantifiers. Mathematical structures usually consist of relations and functions on a domain; for arithmetic, for example, this is the set of natural numbers. So an interpretation has a domain \mathcal D. Having fixed this domain, we need a rather unintuitive object, a function \eta:VAR\rightarrow{\mathcal D}, the valuation. This function is only a "helper" for some edge cases in our setting. And then finally, we need two functions {\mathcal I}_R and {\mathcal I}_f, mapping REL_n to n-ary relations and FUN_n to n-ary functions on \mathcal D. We unite these functions into a single function \mathcal I, as it is clear which one of them we mean by the argument we give to it. Now, an interpretation {\mathcal M}=({\mathcal D},{\mathcal I},\eta) is a triple consisting of a domain and these functions.

An example would be \mathcal D=\mathbb{N}, \eta:x_i\mapsto i, {\mathcal I}_R(R_0^2)=\{(x,x)|x\in\mathbb{N}\}, {\mathcal I}_f(f_0^2)=(x,y)\mapsto x+y, which would give a standard interpretation of the above formula.

We can extend {\mathcal I} to general terms by {\mathcal I}(x_i):=\eta(x_i) for variables x_i, and {\mathcal I}(f_i^n t_1\ldots t_n):= ({\mathcal I}(f_i^n))({\mathcal I}(t_1),\ldots,{\mathcal I}(t_n)). (As you can see, the valuation's purpose is mainly to give a value to free variables.)

Now we can define what it means for an interpretation {\mathcal M}=({\mathcal D},{\mathcal I},\eta) to satisfy a formula A. For this, we write {\mathcal M}\models A, saying \mathcal M models A, or that \mathcal M is a model of A:
  • {\mathcal M}\not\models\bot
  • {\mathcal M}\models R_i^n t_1\ldots t_n iff ({\mathcal I}(t_1),\ldots,{\mathcal I}(t_n))\in{\mathcal I}(R_i^n).
  • {\mathcal M}\models A\wedge B iff {\mathcal M}\models A and {\mathcal M}\models B
  • {\mathcal M}\models A\rightarrow B iff {\mathcal M}\models A implies {\mathcal M}\models B
  • {\mathcal M}\models \forall_{x_i} A iff for all x\in{\mathcal D} we have ({\mathcal D},{\mathcal I},\theta)\models A, where \theta(y)=\left\{\begin{array}{rl} x & \mbox{for } y=x_i \\ \eta(y) & \mbox{otherwise}\end{array}\right.
For a theory T we define {\mathcal M}\models T by {\mathcal M}\models A for all A\in T.
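The five clauses above can be turned into a small brute-force satisfaction checker, at least for finite domains, where the \forall clause can be checked by enumeration. The following is only a sketch; the tuple encoding of terms and formulae, and the names eval_term and models, are my own choices and not part of the text:

```python
# A minimal sketch of the satisfaction relation for finite domains.
# Tuple encoding (my own convention): ('var', i) is x_i,
# ('fun', i, t1, ..., tn) is f_i^n t1...tn, ('rel', i, t1, ..., tn)
# is R_i^n t1...tn, plus ('bot',), ('imp', A, B), ('and', A, B)
# and ('all', i, A).

def eval_term(t, I, eta):
    """Extend I to terms; variables are looked up in the valuation eta."""
    if t[0] == 'var':
        return eta[t[1]]
    _, i, *args = t
    return I['fun'][i](*(eval_term(a, I, eta) for a in args))

def models(D, I, eta, A):
    """(D, I, eta) |= A, following the five clauses above."""
    kind = A[0]
    if kind == 'bot':
        return False
    if kind == 'rel':
        _, i, *args = A
        return tuple(eval_term(t, I, eta) for t in args) in I['rel'][i]
    if kind == 'and':
        return models(D, I, eta, A[1]) and models(D, I, eta, A[2])
    if kind == 'imp':
        return (not models(D, I, eta, A[1])) or models(D, I, eta, A[2])
    if kind == 'all':              # re-bind x_i to every element of D
        _, i, B = A
        return all(models(D, I, {**eta, i: x}, B) for x in D)
    raise ValueError('unknown formula: %r' % (A,))

# The derived connectives, exactly as defined earlier:
def Not(A):
    return ('imp', A, ('bot',))

def Exists(i, A):
    return Not(('all', i, Not(A)))

# Example: forall x_0 exists x_1 (x_0 + x_1 = 0) over Z_3, reading
# f_0^2 as addition mod 3, f_1^0 as the constant 0, R_0^2 as equality.
D3 = {0, 1, 2}
I3 = {'fun': {0: lambda a, b: (a + b) % 3, 1: lambda: 0},
      'rel': {0: {(x, x) for x in D3}}}
inverses = ('all', 0, Exists(1, ('rel', 0,
            ('fun', 0, ('var', 0), ('var', 1)), ('fun', 1))))
```

Note how the valuation eta does exactly the "helper" work described above: the \forall clause is implemented by overwriting one entry of eta for every element of the domain.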

To add an additional level of confusion, we define a second meaning for \models, namely a relation between theories. Let T,U be theories. Then T\models U iff for all models {\mathcal M}, {\mathcal M}\models T implies {\mathcal M}\models U. Furthermore, we define T\models A for a formula A to be T\models\{A\}. T\models A now finally means that A follows from T, which is what we wanted to achieve.

There are some well-known theories like ZFC, NBG and PA. But there are also simpler theories, like the theory of groups, {\mathcal G}=\{\forall_x\forall_y\forall_z (x+y)+z=x+(y+z), \forall_x\exists_y x+y=0, \forall_x x+0=x\}. Every group is a model of {\mathcal G}.

Theories which have at least one model are called satisfiable; it is provable that this implies that no contradiction can be derived from them, that is, that they are also consistent. There are some theories which do not have a model; trivially, for example, \{\bot\} does not have one. Less trivially, we can define the theory {\mathcal F} of fields similarly to the theory of groups. Then {\mathcal F}\cup\{\exists_{a_1}\ldots\exists_{a_6}\forall_x((\bigvee\limits_{i=1}^6 x=a_i)\wedge(\bigwedge\limits_{1\le i < j\le 6} a_i\neq a_j))\} is not satisfiable, as it could only be satisfied by a field with exactly six elements, which does not exist: the order of a finite field is always a prime power.

The other question, which will be relevant in the next parts, is whether formulae A are decided by a theory T at all, that is, whether T\models A or T\models\lnot A. Gödel proved that this fails for every consistent, reasonably axiomatized theory of sufficient complexity. Furthermore, there is a proof that ZF decides neither the axiom of choice nor its negation. Such propositions are called independent.

But there are much simpler examples. Consider \mathcal G again, and the formula A=\forall_x\forall_y x+y=y+x. A is independent of \mathcal G: there is a model of \mathcal G which satisfies A, and a model of \mathcal G which does not satisfy A.

For example, ({\mathbb{Z}},{\mathcal I},\eta)\models A, where {\mathcal I}(+)=(x,y)\mapsto x+y, {\mathcal I}(0)=0, {\mathcal I}(=)=\{(x,x)|x\in\mathbb{Z}\} and \eta:x_i\mapsto 0, since the group of integers is abelian. But, for example, the group of invertible matrices (written additively in our signature) is not commutative, and so yields a model of \mathcal G that does not satisfy A.
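For finite structures, both halves of such an independence argument can even be checked mechanically. The following sketch is my own ad hoc encoding (a structure as a carrier set plus a Python function for the group operation); it verifies the three axioms of {\mathcal G} by brute force, then exhibits one abelian and one non-abelian group:

```python
# Brute-force check of the group axioms and of the formula A
# ("forall x forall y. x + y = y + x") on finite structures.
from itertools import product, permutations

def is_group(D, op):
    """Associativity, a (right) identity, and inverses, by enumeration."""
    assoc = all(op(op(x, y), z) == op(x, op(y, z))
                for x, y, z in product(D, repeat=3))
    e = next(x for x in D if all(op(y, x) == y for y in D))
    inverses = all(any(op(x, y) == e for y in D) for x in D)
    return assoc and inverses

def is_abelian(D, op):
    """The formula A: forall x forall y. x + y = y + x."""
    return all(op(x, y) == op(y, x) for x, y in product(D, repeat=2))

Z4 = range(4)                                  # the cyclic group Z/4Z
add4 = lambda x, y: (x + y) % 4

S3 = list(permutations(range(3)))              # the symmetric group S_3
comp = lambda p, q: tuple(p[q[i]] for i in range(3))
```

Here Z/4Z is a model of {\mathcal G} satisfying A, while S_3 is a model of {\mathcal G} refuting A, so A is independent of {\mathcal G}.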

In the end, the independence proof of AC from ZF is no different from this: a model of set theory is given which satisfies AC, and another model is given which satisfies the negation of AC.

When booting via PXE, Linux usually uses NFS or NBD to transmit its root file system. After that, a SquashFS or UnionFS (or something similar) is applied. There is no problem with this, but it can be annoying to synchronize the parts in the TFTP tree with an NFS tree, or to maintain an ISO file. In either case, you run a second file server. It would be nice if one could also provide the root file system via TFTP.

I see four problems with this approach.

There are no directory listings in TFTP. A conservative extension to the protocol would be to define a file, say directory/*dirlist*, containing the directory index. With a simple script running over the published directories and creating that file in each of them, one would not even need to adapt the TFTP server, as the protocol could stay the same. Another possibility would be to define an additional RFC 2347 compliant option "dirlist=true" in the request packet. However, this would need a protocol extension.
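The script variant could look roughly like the following sketch. The index file name "dirlist" and the one-entry-per-line format are my own assumptions; any convention that client and server agree on would do:

```python
# Walk the published TFTP tree and drop a plain-text directory index
# into every directory, so an unmodified TFTP server can serve
# listings as ordinary files.
import os

def write_dirlists(root, index_name="dirlist"):
    for dirpath, dirnames, filenames in os.walk(root):
        # exclude the index file itself so reruns are idempotent
        entries = sorted(dirnames +
                        [f for f in filenames if f != index_name])
        with open(os.path.join(dirpath, index_name), "w") as fh:
            fh.write("\n".join(entries) + "\n")
```

Per-file meta information could later be appended to each line without changing the transport at all.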

There is no meta information for files in TFTP. A conservative extension to the protocol would be to add it to the directory list mentioned above. This would also be extensible. Obviously, this could also be done with an RFC 2347 compliant option, say "file-meta", in the request packet.

Files can be at most about 32 megabytes large. This is due to the fact that RFC 1350 specifies 512-byte blocks numbered by 16-bit integers. RFC 2348 defines an option to increase the block size, but the block size is still bounded by the size of a UDP packet, so this is no real solution. A conservative extension would be to split the files into smaller parts with a defined naming scheme. By just running a script over the directories which splits all large files, one would not even need a protocol extension for this, and old TFTP servers could still be used. Another possibility would be an RFC 2347 compliant option defining a larger block index, but this would be a protocol extension which is not backward compatible.
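Such a splitting script is simple; here is a sketch. The ".partNNN" naming scheme is an assumption of mine, not a standard, and the boot client would have to know it and reassemble the parts in order:

```python
# Split every file above the RFC 1350 limit (512-byte blocks with
# 16-bit block numbers, i.e. 512 * 65535 bytes) into numbered parts,
# so that an unmodified TFTP server can serve them.
import os

TFTP_LIMIT = 512 * 65535           # about 32 MB
PART_SIZE = 8 * 1024 * 1024        # stay well below the limit

def split_large_files(root, limit=TFTP_LIMIT, part_size=PART_SIZE):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) <= limit:
                continue
            with open(path, "rb") as src:
                n = 0
                while True:
                    chunk = src.read(part_size)
                    if not chunk:
                        break
                    with open("%s.part%03d" % (path, n), "wb") as dst:
                        dst.write(chunk)
                    n += 1
            os.remove(path)        # serve only the parts
```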

Files are always sent as a whole. Especially when publishing block devices, this is a problem. One could split them into smaller files as proposed above, but this is not really convenient, and I cannot think of any other conservative solution. Another possibility would be an RFC 2347 compliant range option which gives the byte range to be read. This would be a protocol extension, but it would also solve the problem of files being larger than 32 megabytes.

There may be some problems, but I think the setup could become much easier.

I gave a short talk about the irrefutability of the axiom of choice from ZF to some younger students, as part of a seminar of the best mathematics club of the multiverse.

I wanted to create a script, and instead of putting together a LaTeX document which would then collect dust in one of my backup directories, I thought I would write a short series of postings giving a script for this talk. Unfortunately, the material is rather complicated, and I am not sure whether I can recommend it.

The talk was in German; these posts will be in English, though. I will split the material into parts and publish them with some delay, as they will be very long. I hope that everyone interested will enjoy it.

In the end, the talk was a lot shorter and I left out a lot of material. It is impossible to fit all of that matter into a two-hour talk for people not familiar with logic.

It is based on my lecture notes for the lecture "Models of Set Theory" given by Donder (whom I thank for giving his permission to do so), and the book "Set Theory" by Kunen, and "Einführung in die Mengenlehre" by Oliver Deiser.

What I want to do is give an ontological introduction to the topic, as I think there is no such thing yet. That is why I do not give the proof for the independence of AC, but only for the irrefutability.

This is not a scientific paper!

This time I will only publish an outline, which I will fill with hyperlinks as soon as I post them (so this post will be updated until all parts are published):

Part 1 will be about the basic logical background needed for this:
  • Formulae
  • First-Order Theories
  • Interpretations and Models
  • A Simple Independence Proof
Part 2 will be about ZF:
  • Classes
  • Axioms of ZF
  • The Axiom of Choice
  • Well-Orderings, Ordinals and Transfinite Induction
Part 3 will be about inner models:
  • The V-Hierarchy
  • Inner Models
Part 4 will then contain the actual proof:
  • The Classes OD and HOD
  • HOD \models AC
It is likely that I made some mistakes, so if you see one, feel free to send comments.

I was very happy when I finally got my ThinkPad X200 Tablet, a convertible with a screen that is also a graphics tablet. It rapidly reduced the amount of paper I used, since I can do the usual scribbling directly on the screen. Under Windows, there is a nice piece of software for this called Windows Journal.

A similar piece of software which is platform independent, and which I meanwhile like even more, is Xournal. It is my preferred tool for taking quick notes or just scribbling around. It has pen and eraser functionality, and lined paper, so it is quite easy to work with.

And it is my preferred tool for annotating PDF files, even though these files sometimes render incorrectly in Apple software (I still do not have any clue why).

It is also good for drawing images that should look hand-drawn.

I like this software, and I use it very often. By now it definitely belongs to the software I install by default whenever I begin to use a new system.

Title: "Influenza" -- Panel 1: Lungs inside a human torso are shown. -- Panel 2: Zooms into the lungs from Panel 1 and shows two viruses which look like spiked balls with faces. Virus A: "Sometimes I wonder whether it is right to kill people. We cause so much pain to humans. Maybe we should focus on other ways of surviving, and make laws to enforce them." -- Panel 3: Virus B: "What are you talking about? Think of all the poor families who make their living from infecting humans! Think of all their poor children! You can never compare the well-being of a human to the well-being of a virus!" -- Panel 4: Virus A: "I suppose you are right." -- Panel 5: A sad person in a sickbed attached to a cardiograph is shown.

Unix files carry three timestamps in their metadata: the atime, the ctime and the mtime. The ctime is set when the file is changed, the mtime when it is modified. The difference is that ctime is also set when the file is renamed or its permissions are changed, while mtime is only set when the content of the file changes. These two values are reasonably intuitive.

Now, there is the atime. I always thought it stores the timestamp of the last time the file was accessed. That is why I usually mount my file systems with the noatime option: I do not want every read access to result in a write access to the file's metadata.

As I said, I thought atime records when the file was last accessed, so we would always have atime >= mtime >= ctime. This is wrong - and probably my conception of "access" is wrong.

Accessing means reading or writing - at least in my humble opinion. But atime is only updated when the file is read. It is more of a read time than an access time. So atime can actually be smaller than mtime: when the file was modified but not read afterwards. At first this did not make sense to me, but seeing that some mail clients like mutt actually use atime, it became clearer: for a mailbox file, the modification time is the time when the last mail was delivered, and the read time is the time when the mailbox was last read. Comparing these values therefore gives you a mechanism for keeping track of read and unread mail.
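The mutt-style comparison boils down to a one-liner. The following is only a sketch: the helper name seems_unread is hypothetical, and the check only works if the file system maintains atime at all (noatime breaks it):

```python
# A mail file counts as unread when it was modified (mail delivered)
# after it was last read, i.e. when atime < mtime.
import os

def seems_unread(path):
    st = os.stat(path)
    return st.st_atime < st.st_mtime   # read before last modification
```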

I do not think that this is still a good way of programming, because disks are large enough to store this meta information in the data part of a file, and atime is not always available or correct. But of course, if you want to use meta information for this, it makes sense, and in the past it may have been a reasonable setup.

An interesting alternative to the noatime mount option is the relatime mount option: it updates atime only if atime <= mtime, that is, only if the file has been modified since the last read access. That way, mutt and other software using these times to manage their read flags still work.

This is a hack. A hack which is both stupid and ingenious. It is one of these hacks that you usually see when people talk about Windows internals.

According to Gulli News, Windows 8 will not shut down completely, but keep a memory image which is then loaded back into RAM to improve startup speed. This reminds me of a quote regarding Smalltalk:

I always show this, when asked by C/C++ programmers, as a typing in of "Hello, World!," preferably into an empty Transcript pane. And then I save the image. Loading the program subsequently brings up: "Hello, World!" Individuals will quibble over this relentlessly, saying, "It's not the same!" I can only agree that it's not the same, but that was my point about the languages and their environments in the first place.

Has a bit of Smalltalk's philosophy reached the practical world?

When I hear the word "bodenständig" ("down-to-earth"), I inevitably think of conservative politicians in beer tents listening to folk music. Accordingly, "Bodenständigkeit" sounds to me like a conservative battle cry; but let us look a little closer at its roots. Apparently it is a metaphor: something stands on the ground, presumably on the metaphorical ground of facts. Its opposite is probably best rendered as "abgehoben", detached, without relevant contact to the ground.

While this battle cry is used to justify bad music and yesterday's politics, essentially to prevent development, I would find it quite sensible to introduce the notion into the realm of software. In contrast to the real world, changing things in software is considerably easier and considerably less frowned upon; usually it is even necessary. So the question must not be whether changes are good in general, but which changes are good.

And just as the conservative politician views the nuclear phase-out critically, I view "cloudization" (my personal term for the move into the cloud, which I want to coin here) very critically. The problems that cloudization raises in principle are well known and widely discussed.


(via xkcd)

That I do not quite understand why everyone is so keen on cloudization is a different topic anyway. Much more important is that I do not see what the great innovation of cloudization actually is. And here, as hard as it is for me to admit it, I am probably comparable to the politicians above: a change is coming, everyone thinks it is great, but I do not see what advantages it ultimately brings. Worse still, I do not even see what novelties cloudization brings at all.


(found at geek-and-poke, Creative Commons Attribution 3.0 Unported License)

For example, I definitely regard a decentralization of web services and a redundant interconnection of many servers as positive, but that is exactly what I do not see in the current development. Rather, I see people centralizing away from classical e-mail toward the oligopoly of "Gesichtsbuch" (Facebook) and consorts, and the same holds for decentralized services like Jabber and IRC. Even the decentralized web is being centralized insofar as proprietary formats still have not died out. Actually, cloudization leads rather to the opposite of cloud computing: the data are no longer stored on many small PCs, but on one central (if presumably redundant) server farm.

This is where down-to-earthness comes into play. Very few innovations are really new. Blogs have existed for a long time; before that there were websites that were not yet called blogs but effectively worked exactly that way. Storing data on a server so that it can be accessed remotely: no problem, there are plenty of protocols for that, starting with something as simple as FTP; at least one does not need wonderful new Web 2.0 startups for it. Remote applications have been run for decades, either via a remote shell such as SSH or, if graphics are really needed, via X forwarding. Of course all of this has disadvantages too, and everything can be improved. Still, one should stay down-to-earth.

Web applications are a rather nice modern development, but essentially because they have prevailed: where it is quite difficult to teach a Windows user SSH and X forwarding, the browser is simply already there. That is all there is to it.

And that is all there is to many new developments. That is a pity, considering everything that is possible today which used to be unimaginable, and then seeing the stagnation that cloudization apparently has in store for us. One way to counteract this is to use down-to-earth software.

For example, I am often smiled at for using BitlBee for my instant messaging. It is said to be very rudimentary. That is true, and it has a few disadvantages compared to Pidgin, but these disadvantages are being reduced (meanwhile, as far as I have read, it can even transfer files and offers a Skype binding). As a frontend I use Irssi. Both are very rudimentary, yet they offer functionality comparable to the big clients. Down-to-earthness is the magic word. Design comes after functionality. Buttons come only once the CLI works. And then comes desktop integration, which BitlBee of course completely lacks at first; but that is also the last thing one should think about, as first the rest has to work well enough. And with BitlBee you can see that the parts it does implement work.

A negative example is provided by various commercial software products, especially when they try to ship Linux installers. These installers often use some heuristics to determine paths and settings. Instead of trying to write software that copes with every known situation, the time might be better invested in building down-to-earth rpm and deb packages. That is not trivial, and it creates its own problems, but at least the software is then compatible with a standard infrastructure.

Only few people are really willing to implement their own operating system, at least compared to the number of people willing to implement their own text adventure. But there are such projects, like MikeOS: a nice little OS with an elementary text editor, a BASIC interpreter and an implementation of Hangman.

Nothing special. But nice.
