This is part 2 of my series about the irrefutability of the axiom of choice in ZF. It mainly introduces basic concepts of set theory.

Set theory is all about collections. While some set theories allow urelements (elementary objects that are not sets), ZF knows only sets. It needs no function symbols, just a single binary relation symbol \in. First of all, we notice that collections can usually be denoted by propositions with a free variable. Such collections are called classes. The intuition is that a class A associated with a formula B(x) is the collection \{x|B(x)\} (which of course need not be a set for every B). We write x\in A for B(x), and sometimes we identify B itself with the class. We write A\not\in B for \lnot A\in B.

Most importantly, classes are only syntactical objects. We cannot quantify over them. They are merely a convenient way of talking about collections; everything we do with classes could be expressed directly in the logic. For example, when we write A\cup B, we could just as well talk about the formula A(x)\vee B(x). In particular, we say that two classes are equal when they contain the same elements.

A first important class is the class \emptyset := \{x|\bot\}. It obviously contains no element. It is the empty class; we will axiomatically postulate that it is a set, the empty set.

Furthermore, there is the class V:=\{x|\bot\rightarrow\bot\}, the universe, which contains all sets. It is not a set, it is a proper class.

Another interesting class, which resembles Russell's paradox, is the class R:=\{x|x\not\in x\}. It is not a set: Assume it were a set. If R\in R, then R\not\in R, a contradiction; so R\not\in R, but then R\in R. Contradiction. So R cannot be a set; R is a proper class.

The task of ZF is now to provide a universe of sets, by postulating the existence of sets that can be created from other sets. However, the first axiom we deal with does not give us any extra power to create sets, but limits the existing sets. It is the axiom of extensionality.

Since we do not want to add axioms for equality, we define equality of sets as it is done on Wikipedia:

 x=y :\Leftrightarrow \forall_z (z\in x\leftrightarrow z\in y) \wedge \forall_z(x\in z\leftrightarrow y\in z)

Then the extensionality axiom says that two sets containing the same elements are equal, which can be expressed by

(EXT)  \forall_x\forall_y( \forall_z (z\in x \leftrightarrow z\in y) \rightarrow \forall_z(x\in z\leftrightarrow y\in z))

The first axiom giving us a set is the axiom of the empty set. Formally, it can be expressed by \exists_x \forall_y y\not\in x. However, we choose an equivalent, but more convenient notation, using the class V we defined above:

(NUL) \emptyset\in V

Having two sets, we axiomatically postulate the existence of a set containing these two sets, \forall_x\forall_y\exists_z (x\in z\wedge y\in z). Again, we use a more convenient notation

(PAR)
\forall_x\forall_y \{x,y\}\in V

As these high-level notations should be well-known, and as it should be clear how they can be rewritten into what we defined in Part 1, we will continue using high-level notations.

For every set, there is the union of its elements:

(UNI) \forall_x \bigcup_{y\in x} y \in V

Then there is the axiom of infinity, which gives us the existence of an infinite set; it will turn out to guarantee the existence of the set of finite (ordinal) numbers. If one has never worked with ordinals before, it is rather counter-intuitive, but as soon as ordinals are defined, it should become clear. Essentially, it says that there is a set containing \emptyset which, for every element x it contains, also contains x\cup\{x\}. Therefore, it contains \emptyset,\{\emptyset\},\{\emptyset,\{\emptyset\}\},\ldots.

(INF) \exists_u (\emptyset\in u\wedge\forall_x (x\in u\rightarrow x\cup\{x\}\in u))

Then, for every set b we want a powerset \mathbb{P}(b), containing all subsets of b:

(POW) \forall_b \mathbb{P}(b)\in V

A further restriction on sets is the axiom of foundation, which says that every nonempty set contains an element which is disjoint from it. From this axiom it follows that there are no infinite \ni-chains.

(FUN) \forall_z (z\neq\emptyset\rightarrow\exists_x(x\in z\wedge x\cap z=\emptyset))

For the final axiom, we need an additional concept. Usually, a binary relation can be considered a set of pairs, and a function can be considered a special type of relation. That is, a function is a relation which is right-unique: if it contains (a,b_1) and (a,b_2), then b_1=b_2. We can extend this notion to classes: A class B is called a functor if it contains only pairs of sets, and if (a,b_1)\in B and (a, b_2)\in B implies b_1=b_2 for all a,b_1,b_2.

As functors are classes, there is no way of quantifying over all of them. The final axiom is therefore not really an axiom (even though it is often considered one), but an axiom scheme, describing infinitely many axioms. It says that if we have a functor and a set, then the image of that set under the functor is also a set. It is called the replacement scheme.

(RPL) For every functor F we have \forall_x \{F(y)|y\in x\}\in V

The comprehension scheme follows from the replacement scheme:

Lemma: For every class B and every set x\in V we have B\cap x\in V.
Proof: We can easily define a functor sending every element a to \{a\} if a\in B, and to \emptyset otherwise. Applying (RPL) and (UNI), we get what we want.

Lemma: There is no infinite \ni-chain.
Proof: Assume we had an infinitely long decreasing chain x_1\ni x_2\ni x_3\ni\ldots. Consider x=\bigcup_i \{x_i\}. For an arbitrary y\in x we can find an i such that y=x_i. Therefore, x_{i+1}\in y\cap x, and so, x violates (FUN). Contradiction.


Now we have all axioms of ZF. The critical part, which is the center of our further attention, is the axiom of choice:

(AC) \forall_x \exists_f ((f\mbox{ function on domain }x)\wedge\forall_y((y\in x\wedge y\neq\emptyset)\rightarrow f(y)\in y))

The relevance of this axiom is not immediately clear. It becomes clearer when dealing with well-orderings and ordinals:

Definition: A strict total ordering on a set s is a binary relation R satisfying
  • Transitivity: xRy and yRz imply xRz
  • Trichotomy: xRy or yRx or y=x
  • Irreflexivity: \lnot xRx
for all x,y,z\in s.

Definition: A well-ordering on a set s is a strict total ordering W on s such that every nonempty x\subseteq s has a minimum.

Well-orderings are interesting since their existence for every set is equivalent to the axiom of choice:

Theorem: If every set A has a well-ordering w_A, then the axiom of choice holds.
Proof: Let an arbitrary set A be given. By (UNI), B=\bigcup\limits_{x\in A}x exists, and by assumption, there is a well-ordering w_B. Define f(\emptyset)=\emptyset, and for x\in A, x\neq\emptyset, let f(x) be the minimum of x with respect to w_B. Then this f satisfies the axiom of choice.
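The proof above is just the recipe "pick the w_B-minimum of each set". Restricted to finite sets, it can be sketched in Python (my own illustration, not part of the original text), with frozensets standing in for sets and a list standing in for the well-ordering:

```python
# Sketch: deriving a choice function from a well-ordering (finite case).
# Sets are modeled as frozensets; a well-ordering on B is modeled as a
# list enumerating B's elements in increasing order.

def choice_function(A, order):
    """For each nonempty y in A, pick the order-minimal element of y."""
    rank = {b: i for i, b in enumerate(order)}   # position in the well-ordering
    f = {}
    for y in A:
        if y:                                    # skip the empty set
            f[y] = min(y, key=rank.__getitem__)
    return f

# Example: A = {{2,3},{1,3}}, B = union of A's elements, ordered 1 < 2 < 3.
A = {frozenset({2, 3}), frozenset({1, 3})}
f = choice_function(A, order=[1, 2, 3])
assert all(f[y] in y for y in A)                 # f(y) ∈ y, as (AC) demands
```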


Theorem: The axiom of choice implies that every set has a well-ordering.

We will not prove this here. A German version of this proof can be found on Wikipedia.

Well-orderings turn out to be extremely useful; therefore, we are interested in classifying them in a simple way. In the usual way, we can define order-isomorphisms as structure-preserving bijective functions. As usual, isomorphism is an equivalence relation, and we therefore want to find representatives for every equivalence class.

Lemma: Let (x,<) be a well-ordering and y\in x. Then (\{z\in x|z<y\},<) is also a well-ordering, and is not isomorphic to (x,<).
Proof: Trivially, (\{z\in x|z<y\},<) is again a well-ordering. Assume there were an isomorphism f:\{z\in x|z<y\}\rightarrow x. If f were the identity, its image would be the proper initial segment itself and not all of x; hence \{z\in x|f(z)\neq z\} is non-empty and contains a smallest element m with f(m)\neq m. Since f is surjective, there is some n with f(n)=m. We have n\neq m, and n<m is impossible (it would give m=f(n)=n by minimality), so n>m, and therefore f(n)>f(m), that is, m>f(m). By minimality of m, f(f(m))=f(m); applying f to m>f(m) gives f(m)>f(f(m))=f(m). Contradiction.

Definition: A set x is called an ordinal if for all y\in x we have y\subseteq x, and x is well-ordered by \in. The class of all ordinals is called \operatorname{On}; we will later prove that it is a proper class.

The natural numbers, defined as follows, are examples of finite ordinals:
  • 0:=\emptyset
  • n+1 := n\cup\{n\}
We call the set of these numbers \omega; it is the smallest infinite ordinal.
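For illustration (a sketch of mine, not from the original text), the successor operation n\cup\{n\} can be replayed in Python, with frozensets standing in for sets:

```python
# Sketch: the first von Neumann naturals, with n+1 := n ∪ {n}.
# frozensets stand in for sets (they are hashable, so sets can contain them).

def succ(n):
    return n | frozenset({n})

zero = frozenset()
numbers = [zero]
for _ in range(4):
    numbers.append(succ(numbers[-1]))

# Each number has exactly its index many elements, and m ∈ n for all m < n.
for i, n in enumerate(numbers):
    assert len(n) == i
    assert all(m in n for m in numbers[:i])
```

Note that each number is also a subset of all later ones, matching y\in x\Rightarrow y\subseteq x in the definition of ordinals.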

Lemma: \emptyset\in x for nonempty x\in \operatorname{On}
Proof: x contains an \in-minimum y. Assume there was some z\in y, then also z\in x, and y would not be minimal anymore. Contradiction. Thus, y=\emptyset.

Lemma: If y\in x and x\in \operatorname{On}, then y\in \operatorname{On}.
Proof: As y\in x, y\subseteq x, so trivially, y is also well-ordered by \in. Assume z\in y and t\in z; then z,t\in x, and by transitivity of \in on x, we have t\in y. So every element of y is a subset of y, hence y\in \operatorname{On}.

Theorem: If x and y are ordinals whose well-orderings regarding \in are isomorphic, then x=y
Proof: Call the isomorphism f. Assuming f\neq id, there must be a minimal z\in x such that f(z)\neq z. As f maps the \in-predecessors of z onto the \in-predecessors of f(z), and f(t)=t for all t\in z by minimality, we have f(z) = \{f(t)|t\in z\}=\{t|t\in z\}=z. Contradiction.□

Theorem: Any two ordinals are comparable.

We will not prove this here.

Definition: For a class A we define A^{<\omega} as the collection of finite sequences of sets inside A.

Theorem: There is a bijective functor G:\operatorname{On}\rightarrow\omega\times \operatorname{On}^{<\omega}.
Proof: We show that there exists a well-ordering on \omega\times \operatorname{On}^{<\omega} such that the class of predecessors of every element is a set. From this, the theorem follows directly. Firstly, on \operatorname{On}^{<\omega}, define an ordering

pRq :\Leftrightarrow (\max(p) < \max(q)) \vee (\max(p) = \max(q) \wedge \operatorname{dom}(p) < \operatorname{dom}(q)) \vee (\max(p) = \max(q) \wedge \operatorname{dom}(p) = \operatorname{dom}(q) \wedge \exists_i(\forall_{j<i}\, p_j=q_j\wedge p_i<q_i))

where \max(\emptyset)=0. This is a well-ordering: it sorts first by the maximum of the sequence, then by its length, and if both are equal, lexicographically. On \omega\times \operatorname{On}^{<\omega}, now define \left<n,p\right><\left<m,q\right> :\Leftrightarrow pRq \vee (p=q \wedge n<m).
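Restricted to finite sequences of natural numbers (which stand in for ordinals here; this is my own sketch, not part of the original text), the ordering R amounts to comparing key tuples:

```python
# Sketch of the well-ordering R on finite sequences, restricted to
# sequences of natural numbers: compare by the maximum first, then by
# the length, then lexicographically.

def seq_key(p):
    return (max(p, default=0), len(p), tuple(p))

def R(p, q):
    """pRq: p strictly precedes q in the ordering."""
    return seq_key(p) < seq_key(q)

assert R([3], [1, 4])           # max 3 < max 4
assert R([2, 2], [1, 2, 2])     # equal max, shorter sequence first
assert R([1, 2, 3], [1, 3, 2])  # equal max and length: lexicographic
```

Because Python compares tuples lexicographically, the three clauses of R collapse into a single key comparison.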

This is the first of four parts of my posts about the irrefutability of AC in ZF.

It will be about the basic logical background.

Canonically, an alphabet is a set of symbols. It is not necessarily finite. A string is a finite sequence of symbols. A language is a set of strings.

Let us fix the alphabet of logical symbols {\mathcal S}:=\{\forall,\rightarrow,\wedge,\bot\}. Furthermore, we define the sets REL_n:=\{R_i^n|i\in\mathbb{N}_0\} of n-ary relation symbols for n\in\mathbb{N}_0, and the sets FUN_n:=\{f_i^n|i\in\mathbb{N}_0\} of n-ary function symbols, and the set VAR:=\{x_i|i\in\mathbb{N}_0\} of variable symbols. Of course, they will stand for relations, functions and variables, but so far, they are only symbols with no meaning.

We define the set TER, the language of terms, as the smallest superset of VAR satisfying the closure condition t_1,\ldots,t_n\in TER\Rightarrow f_i^n t_1\ldots t_n\in TER for all i,n. This resembles terms in Polish notation, but still, so far TER is just a set of strings with no specified meaning.
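To see why Polish notation needs no brackets, here is a small parser sketch (my own illustration; the tagged-tuple encoding of symbols is an assumption, not part of the text): reading symbols left to right, the arity of each function symbol determines exactly where its arguments end, so every well-formed term parses uniquely.

```python
# Sketch: parsing terms in Polish (prefix) notation. Symbols are modeled
# as tagged tuples: ('var', i) for x_i and ('fun', i, n) for f_i^n.
# A term is well-formed iff the parse consumes the whole symbol list.

def parse_term(symbols, pos=0):
    """Return (parse tree, next position), or raise ValueError."""
    if pos >= len(symbols):
        raise ValueError("unexpected end of term")
    sym = symbols[pos]
    if sym[0] == 'var':
        return sym, pos + 1
    _, i, n = sym                        # an n-ary function symbol f_i^n
    args = []
    pos += 1
    for _ in range(n):                   # arity tells us how far to read
        arg, pos = parse_term(symbols, pos)
        args.append(arg)
    return (sym, tuple(args)), pos

# f_0^2 x_0 x_1  --  i.e. "x_0 + x_1" in infix notation
tree, end = parse_term([('fun', 0, 2), ('var', 0), ('var', 1)])
assert end == 3
```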

We define the set {\mathcal L}, the language of formulae, as the smallest set containing all strings of the form R_i^n t_1\ldots t_n for t_1,\ldots,t_n\in TER for all i,n, and satisfying the closure conditions
  • \bot\in{\mathcal L}
  • A\in{\mathcal L}, x_i\in VAR \Rightarrow \forall_{x_i} A\in{\mathcal L}
  • A\in{\mathcal L}, B\in{\mathcal L} \Rightarrow A\rightarrow B\in{\mathcal L}
  • A\in{\mathcal L}, B\in{\mathcal L} \Rightarrow A\wedge B\in{\mathcal L}
\bot stands for the falsum, the proposition which is always false.

This intuitively resembles logical formulae built up from relations between terms and their conjunctions, implications and quantifications. In most theories, we have function symbols with names other than f_i^n, and we may safely use these instead; the same holds for relation symbols and variable symbols. Some binary function and relation symbols use infix notation, and we may as well allow these, and brackets to change the precedence as usual. So for example, instead of \forall_{x_0}\forall_{x_1}\forall_{x_2} R_0^2 f_0^2 f_0^2 x_0 x_1 x_2 f_0^2 x_0 f_0^2 x_1 x_2 we may write \forall_x\forall_y\forall_z (x+y)+z = x+(y+z), as it is clear how to convert this formula into our defined notation.

Furthermore, we define \lnot A, "not A", to be A\rightarrow\bot, and A\vee B, "A or B", to be \lnot (\lnot A\wedge \lnot B), and \exists_x A, "there exists an x satisfying A", to be \lnot\forall_x\lnot A.

A set of such formulae is called a (first-order) theory. Intuitively, a formula A follows from a theory T if all structures satisfying the formulae in T also satisfy A. Make sure that you understand this intuition; it is crucial for the further definitions. It motivates defining what exactly a structure is, and what it means for a structure to satisfy a formula.

We define the concept of an interpretation \mathcal M, which maps function symbols and relation symbols to functions and relations, and provides a domain for the quantifiers to range over. Mathematical structures usually have relations and functions on a domain; for arithmetic, for example, this is the set of natural numbers. So an interpretation has a domain \mathcal D. Having fixed this domain, we need a rather unintuitive object, a function \eta:VAR\rightarrow{\mathcal D}, the valuation. This function is only a "helper" for some edge cases in our setting. Finally, we need two functions {\mathcal I}_R and {\mathcal I}_f, mapping REL_n and FUN_n to n-ary relations and functions on \mathcal D. We combine these into a single function \mathcal I, as it is clear which of them is meant by the argument we give to it. Now, an interpretation {\mathcal M}=({\mathcal D},{\mathcal I},\eta) is a triple of a domain and these functions.

An example would be \mathcal D=\mathbb{N}, \eta: x_i\mapsto i, {\mathcal I}_R(R_0^2)=\{(x,x)|x\in\mathbb{N}\}, {\mathcal I}_f(f_0^2)= (x,y)\mapsto x+y, which gives a standard interpretation of the above formula.

We can extend {\mathcal I} to general terms by {\mathcal I}(x_i):=\eta(x_i) for variables x_i, and {\mathcal I}(f_i^n t_1\ldots t_n):= ({\mathcal I}(f_i^n))({\mathcal I}(t_1),\ldots,{\mathcal I}(t_n)). (As you can see, the valuation's purpose is mainly to give a value to free variables.)

Now we can define what it means for an interpretation {\mathcal M}=({\mathcal D},{\mathcal I},\eta) to satisfy a formula A. For this, we write {\mathcal M}\models A, saying \mathcal M models A, or that \mathcal M is a model of A:
  • {\mathcal M}\not\models\bot
  • {\mathcal M}\models R_i^n t_1\ldots t_n iff ({\mathcal I}(t_1),\ldots,{\mathcal I}(t_n))\in{\mathcal I}(R_i^n).
  • {\mathcal M}\models A\wedge B iff {\mathcal M}\models A and {\mathcal M}\models B
  • {\mathcal M}\models A\rightarrow B iff {\mathcal M}\models A implies {\mathcal M}\models B
  • {\mathcal M}\models \forall_{x_i} A iff for all x\in{\mathcal D} we have ({\mathcal D},{\mathcal I},\theta)\models A, where \theta(y)=\left\{\begin{array}{rl} x & \mbox{for } y=x_i \\ \eta(y) & \mbox{otherwise}\end{array}\right.
For a theory T we define {\mathcal M}\models T by {\mathcal M}\models A for all A\in T.
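For finite domains, the definitions of term evaluation and satisfaction translate almost literally into code. The following Python sketch is my own illustration (the tagged-tuple encoding of formulas is an assumption); it checks commutativity of addition modulo 3:

```python
# Sketch: the satisfaction relation for finite domains. Formulas are
# tagged tuples: ('bot',), ('rel', i, n, terms), ('and', A, B),
# ('imp', A, B), ('all', i, A); terms are ('var', i) or
# ('fun', i, n, terms). D is a finite list, I maps symbol keys to
# Python functions/relations, eta maps variable indices to elements.

def ev(term, I, eta):
    """Evaluate a term, extending I to TER as in the text."""
    if term[0] == 'var':
        return eta[term[1]]
    _, i, n, args = term
    return I[('fun', i, n)](*(ev(t, I, eta) for t in args))

def models(D, I, eta, A):
    """The relation (D, I, eta) |= A, clause by clause."""
    tag = A[0]
    if tag == 'bot':
        return False
    if tag == 'rel':
        _, i, n, args = A
        return tuple(ev(t, I, eta) for t in args) in I[('rel', i, n)]
    if tag == 'and':
        return models(D, I, eta, A[1]) and models(D, I, eta, A[2])
    if tag == 'imp':
        return (not models(D, I, eta, A[1])) or models(D, I, eta, A[2])
    if tag == 'all':                       # quantify x_i over all of D
        _, i, B = A
        return all(models(D, I, {**eta, i: d}, B) for d in D)

# forall x0 forall x1: x0 + x1 = x1 + x0, over Z/3 with addition mod 3
D = [0, 1, 2]
I = {('fun', 0, 2): lambda x, y: (x + y) % 3,
     ('rel', 0, 2): {(d, d) for d in D}}       # R_0^2 interpreted as equality
plus = lambda a, b: ('fun', 0, 2, (a, b))
x0, x1 = ('var', 0), ('var', 1)
comm = ('all', 0, ('all', 1, ('rel', 0, 2, (plus(x0, x1), plus(x1, x0)))))
assert models(D, I, {}, comm)
```

The recursion for 'all' builds the modified valuation \theta from the definition by overriding one entry of eta.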

To add an additional level of confusion, we define a second meaning for \models, namely a relation between theories. Let T,U be theories. Then T\models U iff for all models {\mathcal M}, {\mathcal M}\models T implies {\mathcal M}\models U. Furthermore, we define T\models A for a formula A to be T\models\{A\}. T\models A now finally means that A follows from T, which is what we wanted to achieve.

There are some well-known theories like ZFC, NBG and PA. But there are also simpler theories like the theory of groups, {\mathcal G}=\{\forall_x\forall_y\forall_z (x+y)+z=x+(y+z), \forall_x\exists_y x+y=0, \forall_x x+0=x\}. Every group is a model of {\mathcal G}.

Theories which have at least one model are called satisfiable; it is provable that no contradiction can be derived from such theories, that is, they are also consistent. There are some theories which do not have a model; trivially, \{\bot\} does not have one. Less trivially, we can define the theory {\mathcal F} of fields similarly to the theory of groups. Then {\mathcal F}\cup\{\exists_{a_1}\ldots\exists_{a_6}\forall_x((\bigvee\limits_{i=1}^6 x=a_i)\wedge(\bigwedge\limits_{1\le i < j\le 6} a_i\neq a_j))\} is not satisfiable, as it could only be satisfied by a field with exactly six elements, which does not exist (the order of a finite field is always a prime power).

The other question, which will be relevant in the next parts, is whether a formula A is decided by a theory T at all, that is, whether T\models A or T\models\lnot A. Gödel proved that this is not always the case when the theory is consistent and of sufficient complexity. Furthermore, there is a proof that ZF does not decide the axiom of choice. Such propositions are called independent.

But there are much simpler examples. Consider \mathcal G again, and the formula A=\forall_x\forall_y x+y=y+x. A is independent from \mathcal G: There is a model of \mathcal G which satisfies A, and a model of \mathcal G which does not satisfy A.

For example, the interpretation with domain \mathbb{Z}, {\mathcal I}(+)=(x,y)\mapsto x+y, {\mathcal I}(0)=0, {\mathcal I}(=)=\{(x,x)|x\in\mathbb{Z}\} and an arbitrary valuation models A, since the group of integers is abelian. But, for example, the multiplicative group of invertible matrices is not commutative, so it does not satisfy A.
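The same independence argument can be checked mechanically on finite models (a sketch of mine, not part of the text; brute-force checks stand in for the model relation): both structures below satisfy the group axioms, but only one satisfies A.

```python
# Sketch: the independence argument made concrete on finite groups.
# Two models of the group axioms, one satisfying commutativity, one not.
from itertools import permutations, product

def is_group(G, op):
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(G, repeat=3))
    identity = next(e for e in G if all(op(e, a) == a == op(a, e) for a in G))
    inverses = all(any(op(a, b) == identity for b in G) for a in G)
    return assoc and inverses

def is_abelian(G, op):
    return all(op(a, b) == op(b, a) for a, b in product(G, repeat=2))

Z4 = list(range(4))                               # integers mod 4
add4 = lambda a, b: (a + b) % 4

S3 = list(permutations(range(3)))                 # symmetric group on 3 points
compose = lambda p, q: tuple(p[q[i]] for i in range(3))

assert is_group(Z4, add4) and is_abelian(Z4, add4)
assert is_group(S3, compose) and not is_abelian(S3, compose)
```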

In the end, the independence proof of AC from ZF is no different from this: a model of set theory is given which satisfies AC, and another model is given which satisfies the negation of AC.

When booting via PXE, Linux usually uses NFS or NBD to transmit its root file system. After that, a SquashFS or UnionFS (or something similar) is applied. There is no problem with this, but it can be annoying to keep the TFTP tree synchronized with the NFS tree, or to maintain an ISO file. In either case, you run a second file server. It would be nice if one could also provide the root file system via TFTP.

I see four problems with this approach.

There are no directory listings in TFTP. A conservative extension to the protocol would be to define a file, say directory/*dirlist*, containing the directory index. With a simple script running over the published directories and creating that file in each of them, one would not even need to adapt the TFTP server, as the protocol could stay the same. Another possibility would be to define an additional RFC 2347 compliant option "dirlist=true" in the request packet. However, this would need a protocol extension.
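Such a script could look roughly like this (a sketch; the *dirlist* file name and the one-entry-per-line format are just the convention proposed above, not any standard):

```python
# Sketch: write a "*dirlist*" index file into every published directory,
# so an unmodified TFTP server can serve directory listings.
import os

def write_dirlists(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # list subdirectories and files, excluding the index file itself
        entries = sorted(dirnames + [f for f in filenames if f != "*dirlist*"])
        with open(os.path.join(dirpath, "*dirlist*"), "w") as out:
            out.write("\n".join(entries) + "\n")
```

Re-running the script simply regenerates the index files, so it can go into a cron job next to whatever populates the TFTP tree.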

There is no meta information for files in TFTP. A conservative extension to the protocol would be to add it to the directory list mentioned above. This would also be extensible. Obviously, this could also be done with an RFC 2347 compliant option, say "file-meta", in the request packet.

Files can be at most 32 megabytes large. This is due to the fact that RFC 1350 specifies that 512 byte blocks are enumerated by 16-bit integers. RFC 2348 defines an option to increase the block size, but there is still an upper bound given by the packet size of UDP, so this is no real solution. A conservative extension would be to split the files into smaller parts with a defined naming scheme. By just running a script over the directories which splits all large files, one would not even need a protocol extension for this, and old TFTP servers could still be used. Another possibility would be an RFC 2347 compliant option defining a larger block index, but this would be a protocol extension which is not backward compatible.
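The splitting script could be sketched as follows (the .partNNN naming scheme is my own assumption, not part of any RFC; the limit is 65535 blocks of 512 bytes):

```python
# Sketch: split every file larger than the TFTP limit into numbered parts.
import os

LIMIT = 512 * 65535          # ~32 MB: 512-byte blocks, 16-bit block numbers

def split_file(path, limit=LIMIT):
    """Split path into path.part000, path.part001, ... Return the parts."""
    if os.path.getsize(path) <= limit:
        return []
    parts = []
    with open(path, "rb") as src:
        index = 0
        while chunk := src.read(limit):
            part = "%s.part%03d" % (path, index)
            with open(part, "wb") as dst:
                dst.write(chunk)
            parts.append(part)
            index += 1
    return parts
```

The client reassembles the parts simply by concatenating them in index order.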

Files are always sent as a whole. Especially when publishing block devices, this is a problem. One could split them into smaller files as proposed above, but this is not really convenient, and I cannot think of any other conservative solution. Another possibility would be an RFC 2347 compliant range option which gives the byte range to be read; this would be a protocol extension, but it would also solve the problem of files larger than 32 megabytes.

There may be some problems left, but I think the setup could become much easier.

I gave a short talk about the irrefutability of the axiom of choice from ZF for some younger students, in the scope of a seminar of the best mathematics club of the multiverse.

I wanted to create a script, and instead of putting up a LaTeX-document which will then collect dust in one of my backup directories, I thought I'd write a short series of postings giving a script for this talk. Unfortunately, it is rather complicated, and I am not sure whether I can recommend that.

The talk was in German; these posts will be in English, though. I will separate them into parts and publish the parts with some delay, as they will be very long. I hope that everyone interested will enjoy them.

In the end, the talk was a lot shorter and I left out a lot of material. It is impossible to fit all of that matter into a two-hour talk for people not familiar with logic.

It is based on my lecture notes for the lecture "Models of Set Theory" given by Donder (whom I thank for giving his permission to do so), as well as the books "Set Theory" by Kunen and "Einführung in die Mengenlehre" (Introduction to Set Theory) by Oliver Deiser.

What I want to do is give an ontological introduction to the topic, as I think there is no such thing yet. That is why I do not give the proof for the independence of AC, but only for the irrefutability.

This is not a scientific paper!

This time I will only publish an outline, which I will fill with hyperlinks as soon as I post them (so this post will be updated until all parts are published):

Part 1 will be about the basic logical background needed for this:
  • Formulae
  • First-Order Theories
  • Interpretations and Models
  • A Simple Independence Proof
Part 2 will be about ZF:
  • Classes
  • Axioms of ZF
  • The Axiom of Choice
  • Well-Orderings, Ordinals and Transfinite Induction
Part 3 will be about inner models:
  • The V-Hierarchy
  • Inner Models
Part 4 will then contain the actual proof:
  • The Classes OD and HOD
  • HOD \models AC
It is likely that I made some mistakes, so if you see one, feel free to send comments.

I was very happy when I finally got my ThinkPad X200 Tablet, a convertible whose screen is also a graphics tablet. It rapidly reduced the amount of paper I used, since I can do my usual scribbling directly on the screen. Under Windows, there is a nice piece of software for this called Windows Journal.

A similar program which is platform independent, and which I have meanwhile come to like more, is Xournal. It is my preferred tool for taking quick notes or just scribbling around. It has pen and eraser functionality and lined paper, so it is quite easy to work with.

And it is my preferred tool for annotating PDF files, even though these files sometimes render incorrectly in Apple software (I still do not have any clue why).

It is also good for drawing images which should look like "natural drawing".

I like this software, and I use it very often. Meanwhile it definitely belongs to the software I install by default whenever I begin to use a new system.

Unix files are saved with three time values in their metadata: the atime, the ctime and the mtime. The ctime is set when a file is changed; similarly, the mtime is set when the file is modified. The difference is that the ctime is also set when the file is renamed or its permissions are changed, while the mtime is only set when the content of the file changes. These two values are completely intuitive.

Now, there is the atime. I always thought it saves the timestamp of the last time the file was accessed. That is why I usually mount my file systems with the noatime option: I do not want every read access to result in a write to the file's metadata.

As I said, I thought atime means that the file was accessed, that is, we would always have atime >= mtime >= ctime. This is wrong - and probably my conception of "access" was wrong.

Accessing means reading or writing - at least in my humble opinion. But the atime is only updated when the file is read. It is more of a read time than an access time. So the atime can actually be smaller than the mtime: if the file was modified but not read afterwards. At first, that did not make sense to me, but seeing that some mail clients like mutt actually use the atime, it became clearer: for a mailbox file, the mtime is the time when the last mail was delivered, and the atime is the time when the mailbox was last read. Comparing these values gives you a mechanism for keeping track of read and unread mails.

I do not think that this is still a good way of programming, because disks are large enough to save this meta information in the data part of a file, and the atime is not always accessible or correct. But of course, if you want to use file metadata for this, it makes sense, and in the past it may have been a reasonable setup.

An interesting alternative to the noatime mount option is the relatime mount option: it updates the atime on a read only if atime <= mtime, that is, only if the file has been modified since the last read access (Linux additionally refreshes an atime that is older than a day). That way, mutt and other software using these times to manage their read flags still work.
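The relatime decision can be written down as a pure function (a sketch of the rule as I understand it; exact kernel behavior may differ in detail):

```python
# Sketch of the relatime rule: update atime on a read only if the stored
# atime is not newer than mtime/ctime, or if it is older than a day.

DAY = 24 * 60 * 60

def relatime_should_update(atime, mtime, ctime, now):
    return atime <= mtime or atime <= ctime or now - atime > DAY

# A file modified after it was last read: the next read updates atime,
# so mutt-style "unread mail" detection (atime <= mtime) keeps working.
assert relatime_should_update(atime=100, mtime=200, ctime=200, now=300)
# Already read since the last modification, and read recently: no update.
assert not relatime_should_update(atime=250, mtime=200, ctime=200, now=300)
```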

This is a hack. A hack which is both stupid and ingenious. It is one of those hacks that you usually see when people talk about Windows internals.

According to Gulli News, Windows 8 will not shut down completely, but keep a memory image on disk which is then loaded back into RAM, to improve startup speed. This reminds me of a quote regarding Smalltalk:

I always show this, when asked by C/C++ programmers, as a typing in of "Hello, World!," preferably into an empty Transcript pane. And then I save the image. Loading the program subsequently brings up: "Hello, World!" Individuals will quibble over this relentlessly, saying, "It's not the same!" I can only agree that it's not the same, but that was my point about the languages and their environments in the first place.

Has a bit of Smalltalk's philosophy reached the practical world?

When I hear the word "bodenständig" (down-to-earth), I inevitably think of conservative politicians in beer tents listening to folk music. Accordingly, "Bodenständigkeit" (down-to-earthness) sounds to me like a conservative battle cry, but let us look a bit closer at its roots. It is obviously a metaphor: something stands on the ground, presumably on the metaphorical ground of facts. The opposite would most likely be "abgehoben" (lifted off), that is, without relevant contact to the ground.

While this battle cry is used to justify bad music and the politics of yesterday, essentially to prevent further development, I think introducing the term into the world of software might actually be quite sensible. In contrast to reality, changing things in software is considerably easier and considerably less frowned upon; usually, it is even necessary. So the question must not be whether change is good in principle, but which changes are good.

And just as the conservative politician views the nuclear phase-out critically, I view cloudification ("Wolkenisierung", my personal term for the move into the cloud, which I want to coin here) very critically. The fundamental problems that come with cloudification are well known and widely discussed.


(via xkcd)

That I do not quite understand why everyone is striving for cloudification is a different topic anyway. Much more important is that I do not see what the great innovation of cloudification is. And here, as hard as it is for me to admit it, I am probably comparable to the politicians above: a change is coming, everyone thinks it is great, but I do not see what advantages it ultimately brings. Even worse, I do not even see what cloudification brings in the way of innovation.


(found at geek-and-poke, Creative Commons Attribution 3.0 Unported License)

For example, I certainly regard the decentralization of web services and redundant connections between many servers as positive, but that is exactly what I do not see in the current development. What I see instead is people centralizing away from classic e-mail towards the oligopoly of Facebook and friends; the same holds for decentralized services like Jabber and IRC. And even the decentralized web is being centralized insofar as proprietary formats still have not died out. Cloudification actually leads to the opposite of cloud computing: the data is no longer stored on many small PCs, but on one central (if presumably redundant) server farm.

This is where down-to-earthness comes into play. Very few innovations are really new. Blogs have existed for a long time; before that, there were websites that were not yet called blogs but effectively worked exactly the same way. Storing data on a server so that it can be accessed remotely: no problem, there are plenty of protocols for that, starting with something as simple as FTP; at least no wonderful new Web 2.0 startups are needed for it. Remote applications have been run for decades, either via a remote shell such as SSH or, if graphics are really wanted, via X forwarding. Of course, all of this has drawbacks, and everything can be improved. Still, one should remain down-to-earth.

Web applications are a rather nice modern development, too, but essentially because they have prevailed. Where it is quite hard to teach a Windows user SSH and X forwarding, the browser is simply already there. That is all there is to it.

And that is all there is to many new developments. And that is a pity, considering everything we can do today that used to be unimaginable, and then seeing the stagnation that apparently lies ahead of us with cloudification. One way to counteract this is to use down-to-earth software.

I am often smiled at, for example, because I use BitlBee for my instant messaging. It is said to be very rudimentary. That is true, and it does have a few disadvantages compared to Pidgin, but these disadvantages are being reduced (meanwhile, as far as I have read, you can even transfer files and get a Skype binding). As a frontend, I use Irssi. Both are very rudimentary, yet the functionality is comparable to the big clients. Down-to-earthness is the magic word. Design comes after functionality. Buttons come only once the CLI works. And then comes desktop integration, which BitlBee of course completely lacks at first. But that is also the last thing one should think about; first, the rest has to work well enough. And with BitlBee, you can see that the parts it implements do work.

A negative example is provided by various commercial software products, especially when they try to ship Linux installers. These installers often use some heuristics to determine paths and settings. Instead of trying to write software that copes with every known situation, the time might be better invested in building down-to-earth rpm and deb packages. That is not trivial, and it creates problems, but at least the software is then compatible with a standard infrastructure.

Only a few people are really willing to implement their own operating system, at least compared to the number of people willing to implement their own text adventure. But there are such projects. Like MikeOS: a nice little OS with an elementary text editor, a BASIC interpreter and an implementation of Hangman.

Nothing special. But nice.