This text is, like most of my texts, just a "ranty" compilation of impressions I get when watching current discussions about programming languages and software. I appreciate most comments, as long as they contain interesting points, especially when they change my point of view. It may not be worthwhile to read, but as I have plenty of other posts here, like my Comics, this should not become a problem.

So a few weeks ago I switched from Firefox to Chromium, shortly after I updated my installation of Tiny Tiny RSS. My vserver was a bit too slow to answer all the requests immediately, which is not nice, but on the other hand nothing I had to change immediately. However, Firefox seems to busy-wait when processing the requests (I have noticed this behavior on other occasions, too). And compared to Chromium, it performs really badly, even on my machine with 8 GB RAM and an SSD. And if one tab hangs, all others hang too, while Chromium keeps its tabs in separate processes. And even though many of the files in Firefox's directories are SQLite3 databases, which would support transactions, it is impossible to have two instances of Firefox running on the same session.
Of course, it is no secret that Chromium also has disadvantages. For example, it is not possible to have a multi-line tab bar, there are no tab groups, and tabs are always at the top of the window, not at the bottom where they belong - all things that Firefox can do with Tab Mix Plus. The Bookmarks sidebar in Firefox is also much better. And tabs are apparently not synced with other instances, as they would be with Firefox Sync.

I hope that with the change of Opera's rendering engine, Opera will become more usable in the future and will therefore be a real alternative. I always liked Opera's UI.

I think Firefox is just one instance of a much more general problem. When I run an old Firefox in a Windows 98 VirtualBox, it performs better than a native modern Firefox. Sure, the modern versions of Firefox support some APIs that did not exist in 1998 - but essentially, most of the things that can be done now could be done in 1998 too, they just had different names. Of course, PNG, Flash and some other formats were not supported, but I do not really believe that this is the reason. A similar example can be found when looking at Office: Microsoft Word under Windows 3.1 had functionality that would still be sufficient for most of the people who use it. Of course, in the last years this software has learned a lot, but the basic functionality mostly stayed the same, so this is basically no excuse.
Probably the coding style got less efficient. A common attitude seems to be that "modern" concepts like abstract classes, JIT-compilation, highly dynamic data structures, recursion and garbage collection are the cause for this - that is, a lot of abstractions.
However, I do not think that this is the problem. There have been Lisps around since the 1960s, and Smalltalks since the 1970s. Even Haskell is pretty old by now. And they all had reasonable performance on machines that were much worse than today's computers. The problem seems to be that in those days people made good abstraction layers which are usable for both the user and the computer, while today's big software projects encourage stacking abstractions which are bad for both.

One example of a good abstraction is the hierarchical file system, which is the de facto standard on most systems. It is overhead for the computer, which has to keep track of the names and types of files, but it is simple compared to more sophisticated database models, and it is an understandable model for the user. Of course, this model imposes limits on both the computer and the user, which is why some user interfaces stack abstractions on top of it. However, now that computers are faster and have much more memory, search databases are becoming more predominant, and maybe hierarchical file systems will vanish or at least lose their importance.
Another example of a good abstraction is the use of functions and dynamic libraries, which has not always been as predominant as it is today. Not only does it make it possible to reuse code, it is also a nice way to bring structure into programs.
An example of a bad abstraction is - in my opinion - XML. It grew out of HTML, which had a special purpose for which it was good. However, as a pure data format, it is hard to read for humans as well as for computers, and the whole validation and namespacing scheme is totally over-engineered and bloated. S-expressions and JSON are two alternatives for structured data which are both human- and machine-readable. Another example is C++. The pile of shit that this language forms is really amazing; it appears to me that except for the parts that come from C, every single concept in this language is badly thought through in some strange and confusing way. Macros are explicitly not Turing complete, but templates are. Templates generate strange error messages, and it is possible to write code for which it is not easily decidable whether it compiles. On the other hand, there is now a form of Turing-complete compile-time computation, the so-called constexpr functions, but again with limitations that make them virtually useless. That is, even though there are two Turing-complete techniques, macro programming in C++ is like a rectal tonsillectomy.
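To make the two compile-time mechanisms concrete, here is a minimal sketch (my own example, not from the original post): a template metaprogram and a C++11 constexpr function, both computing a factorial during compilation. The constexpr version shows the C++11 restriction alluded to above - the body is essentially limited to a single return statement, so even a simple loop has to be written as recursion.

#include <iostream>

// Compile-time factorial via template instantiation.
template <unsigned N>
struct Factorial {
  static const unsigned long value = N * Factorial<N - 1>::value;
};

// Explicit specialization terminates the recursion.
template <>
struct Factorial<0> {
  static const unsigned long value = 1;
};

// Compile-time factorial via a C++11 constexpr function;
// in C++11 the body may essentially only be a single return statement.
constexpr unsigned long factorial(unsigned n) {
  return n == 0 ? 1 : n * factorial(n - 1);
}

int main() {
  // Both values are evaluated by the compiler.
  static_assert(Factorial<5>::value == 120, "template metaprogram");
  static_assert(factorial(5) == 120, "constexpr function");
  std::cout << Factorial<5>::value << " " << factorial(5) << std::endl;
  return 0;
}
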
LaTeX is a similar case: it has a nice syntax for mathematical formulae, but is a mess otherwise. I do not know a single person who really knows the internals of LaTeX; it could as well just be a conspiracy, and in reality, LaTeX does not support programming with macros at all.

Speaking of LaTeX, one can see another common antipattern of large software projects: not only are bad abstractions stacked inside them, they also contain a lot of code nobody understands and nobody wants to maintain. Mostly it is possible to find a hack around every problem that arises, and some people have a good point why there is no urge to change this. It is a matter of taste whether one explicitly likes evolutionarily grown software; Linus Torvalds seems to like it, I do not. But it is a fact that evolution, be it good or bad, occurs, and software is often not used in the way it was designed for, but in the way that just randomly turned out to be the easiest way of achieving a small goal. The Web used to be just a bunch of static pages, and has now evolved into a full-blown semantic remote desktop protocol, even though there were alternative solutions for this, and even though these would have been better suited for that purpose. There are techniques and guidelines which try to prevent such things, but in general, I guess it cannot be prevented, and maybe it is better to just hack everything together until it works instead of thinking first. It would be interesting to hear about studies that investigate whether hacking and debugging is faster than thinking everything through in the first place; it might be related to Worse Is Better. Be that as it may, it might be possible to find a way to get the best out of both worlds: formal program verification.

Formal program verification and hacking seem to contradict each other, but in my opinion, it is at least worth a thought: formal program verification just ensures that certain properties of a piece of software are satisfied. And they will still be satisfied when that piece of software is used for a completely different purpose. For the web, this would mean that you have your old tag-soup parser, but at least with a verified grammar, even though it is not XHTML-compatible. And you would have a clear infrastructure for your database backends, on which you could rely and write wrappers before abandoning old code.
Well, all this sounds nice in theory. In practice, it is a bit more complicated. One major problem is that formal program verification needs a lot more thinking than just hacking down a few lines of code. It requires abstract mathematical knowledge that not everybody has, which makes it more expensive, especially since it is impossible to abstract away from it, as can be done in numerics or linear algebra.
Additionally, most of the research appears to be done either in automated verification, or in dependently typed functional programming languages. While functional programming languages are a good tool for most purposes, the underlying machine remains inherently imperative, and therefore problems remain that cannot (yet) be solved easily using purely functional programming languages. But well, formal program verification works also for these, it just appears not to be as popular.

Hopefully, functional languages will become more suitable for low-level stuff in the future. Before that happens, there are still a lot of obstacles to overcome, the hardest probably being the scepticism of the applied, practical world. For example, it is a general belief that functional programming always creates slow code. And to be honest, yes, it is probably slower than well-designed C code, but so are Java, Python, PHP, and many libraries that can be used with C++. The functional paradigm enforces (or at least encourages) a level of abstraction which is useful, but not free, and the real crux is whether it is worth its costs.
However, it is no elementary law of nature that says that functional programming must be slow. There is research being done, in the orbit of Okasaki, and if you think that functional languages are not fast enough for some problem, go on, do research on it, and make them more efficient!
Furthermore, something problematic about functional programming is the amount of memory software consumes when running. Well, there have been functional languages around since the 1970s, and they worked well, but apparently, they suffer from the same fate as every other piece of software: they are bloated. However, a good concurrent garbage collector can decrease memory consumption and can optimize caching - but there are plenty of bad garbage collectors out there. Additionally, memory is cheap, and functional programming is - as soon as you are experienced with it - faster and easier than imperative programming. A lot of optimization heuristics can help, too. And after all, functional programming languages are still programming languages - the coding style affects the quality of the software. I think it scales in the long term. But currently, it may be too early.

Some people criticize purely functional programming languages for wasting memory and being unsafe. Actually, I would guess that functional code is usually a lot more secure than imperative code, because common weak spots such as heap corruption and buffer overflows are less likely. Danisch criticizes that it is impossible to safely overwrite memory when coding in a purely functional manner. Well, this person appears to criticize a lot of stuff, and mostly in a way that is beyond good conversational style, at least in my opinion, but in this case he has a good point (even though he did not really get it himself): there are problems that cannot be easily solved using purely functional programming, and safely deleting a cryptographic key from memory seems like one, since this is a stateful operation. Of course, as soon as no reference to this part of memory exists anymore, nothing will access it - at least if the whole system were completely correct, and you will hardly find such a system. Well, there are several methods of using stateful operations in purely functional programming languages. Uniqueness types are one, but monads are the most popular of them - and having the key encapsulated in a monad actually sounds like a nice API: you can only access the key as long as you are inside the monad, and as soon as you leave it, you have the guarantee that the key is destroyed. So, for this special kind of problem, it is possible to give a purely functional API. But this API would still be a binding to something imperative: this kind of problem is below the usual abstraction level of today's functional programming languages. It would be nice to have something like this for system-level operations, too, but I do not know anything in that direction, except Movitz, which is not purely functional, however. On the other hand, I cannot let this statement pass without a red herring of my own: it appears to not even be trivial in C to ensure that memory is actually deleted. And with all the swapping, paging and confusing indirection of memory in current machines, it is questionable whether this is trivial even in assembler.
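The monadic API just described belongs in a purely functional language; as a rough imperative analogue (my own sketch, not the author's proposal, and not a hard security guarantee), the same "accessible only inside a scope, wiped on exit" idea can be approximated in C++ with RAII. The wiping loop goes through a volatile pointer precisely because of the closing remark: a plain memset to memory that is never read again may be optimized away, and swap space and register copies remain out of reach anyway.

#include <cstddef>
#include <cstring>
#include <functional>

// Hypothetical scoped key holder: the raw key bytes are only reachable
// through withKey(), and they are overwritten when the object dies.
class ScopedKey {
public:
  ScopedKey(const unsigned char* data, std::size_t len)
      : key_(new unsigned char[len]), len_(len) {
    std::memcpy(key_, data, len);
  }
  ScopedKey(const ScopedKey&) = delete;             // no stray copies of the key
  ScopedKey& operator=(const ScopedKey&) = delete;
  ~ScopedKey() {
    // Wipe through a volatile pointer so the compiler cannot drop the
    // writes as dead stores; this still says nothing about swap space.
    volatile unsigned char* p = key_;
    for (std::size_t i = 0; i < len_; ++i) p[i] = 0;
    delete[] key_;
  }
  // The key is usable only inside the callback - the analogue of
  // "you can only access the key as long as you are inside the monad".
  void withKey(const std::function<void(const unsigned char*, std::size_t)>& use) const {
    use(key_, len_);
  }
private:
  unsigned char* key_;
  std::size_t len_;
};
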
I agree that for that kind of thing, functional languages are not (yet) suitable. But not being suitable was never a reason.
Related to this is the common desire to program "on the metal". User-mode x86 is not really "on the metal"; with all the paging and the several indirections, it is rather a virtual architecture of its own. Kernel programming and programming for embedded devices is probably sort of "on the metal". But most code is not kernel code. And for small embedded devices, one should probably employ a different coding style than usual anyway.
And while we are at arguments against functional programming, a common one is that no relevant commercial product was written in a functional language. Well, firstly, commercial usage does not imply quality: PHP and Java are both widespread but hated by many people. Commercial usage can depend on so many things, and quality is just one of them. Furthermore, if you count Lisp as a functional programming language, there were several Lisp machines, there was Crash Bandicoot, there are Allegro and LispWorks. Reddit was originally written in Lisp. Then there is Jane Street Capital (whatever they actually do). And several companies use it. It may still be a niche, but it is questionable whether it will stay one.

I guess it will not stay a niche, but I also do not expect too much positive development from this: you can mess up functional code, too, and I am pretty sure this is what will happen. Instead of using the additional layer of abstraction to write in a simple, short and clear way, I expect abstraction layers to be stacked and unverified code to go unmaintained, as is done now. The whole enthusiasm about Haskell and its monads is an instance of this: some Haskell tutorials read like mathematical papers rather than programming tutorials, and while I have the knowledge to actually understand them, many people may not, and most people, including me, are not willing to read a paper about how to embed mutable arrays into a monad instead of just seeing a simple and clear example of their usage. To me it sometimes seems like the Haskell community tries to establish an elitist argot of over-formalization to enforce some strange sort of ideology, but without really providing anything new and worthwhile - a programming concept that is not easily understandable and does not solve a common problem that has no widely accepted different solution is not worth anything at all. For state and IO, I consider uniqueness types the clearer and more useful abstraction. However, monads are useful too, if you do not over-formalize them. For theoretical computer science, talking about monads as monoids in the category of endofunctors is good practice, but the nature of theoretical computer science is to be a bridge between theory and actual programming, and monads are practical, intuitive and natural in many situations, if explained correctly. However, it is a concept that is useful for some problems. Nothing more.

Hooray. No more Praxisgebühr (the quarterly doctor's fee in Germany).

So. I have decided to make predictions again, after having skipped a year. All of this is at best "just a feeling", partly fears, partly wishful thinking. Traditionally in German, this time grouped:

  • Politics:
    • The federal government will hold together until September, when the federal election will take place.
    • The FDP and the Pirate Party will narrowly miss the 5% threshold in the federal election.
    • A grand coalition will form, and Angela Merkel will remain chancellor.
    • The NPD ban proceedings will be rejected by the court. In response, it will be affirmed that more informants (V-Männer) will be deployed again.
    • Xenophobia within Europe will increase because of the financial crisis and the problems resulting from it.
    • Genital mutilation of male children for religious reasons will be permitted by law.
    • After North Korea undertakes further satellite experiments, it will be attacked by the USA. The resistance will be small, thanks to South Korea's influence.
  • Art and popular culture:
    • The new Star Trek film "Into Darkness" will reach cinemas late and will be sharply criticized, but it will be a success.
    • The new Zelda installment for the Wii U will be announced, with a title, at the end of the year.
    • An official new Portal installment will be announced.
  • Software and technology:
    • Windows 8 will be a success.
    • The iPod 6 will be announced.
    • Wheezy will become stable.
    • The Boot To Gecko project will be discontinued. The developers will concentrate more on integrating web services into smartphones.
    • Various compilers will announce that they now implement the C++11 standard stably.
    • After neither ZFS nor BTRFS really takes hold under Linux, an EXT5 will be announced.
  • Science:
    • P ≠ NP will be proved, no error will be found in the proof, and there will be various efforts to write the proof down for a formal proof checker.
    • There will be a breakthrough in research on superluminal effects which is still far away from faster-than-light travel, but nevertheless gives hope and is more or less noticed by the public.
    • It will be shown that the Collatz conjecture implies a large cardinal axiom.
  • Miscellaneous:
    • The world will not end.
    • As every year, the weather will again be somehow extreme this year.
    • As every year, some epidemic will again spread, most likely during the silly season.

Again, I gave a talk for the best mathematics club in the multiverse. This time it was for "older" people, that is, pupils in their last year and young students. So I will try to keep this text short, assuming that the reader is able to work out some details, mostly from standard mathematical practice, himself. Also keep in mind that a large part of this text was created during hasty evenings when I had other things to do. If something is wrong or not clear, feel free to tell me.

I will use some results from Transfinite Chomp by S. Huddleston and J. Shurman which I found on this excellent post from Xor's hammer.

Also keep in mind that this is not a scientific paper; I will not always put footnotes everywhere I use information from the named paper. In particular, I change the notation so that it will (hopefully) be compatible with ZFC (that is, there is no set of all ordinal numbers, etc.), as I think I have seen proofs in that paper that are not correct because of this (but I might be mistaken).

1 Finite Chomp

Before understanding ordinal chomp, it is useful to know finite chomp, and what makes it so interesting. We will try to give a definition that is less "economic" than the usual ones, but much more compatible with the intuition of a chocolate bar, which is the usual metaphor for explaining chomp.

Keep in mind that these definitions are preliminary and will later be generalized.

Definition: As a slight abuse of notation, we identify every integer with the set of its preceding numbers. So 0=\emptyset, 1=\{0\}, 2=\{\emptyset,\{\emptyset\}\}=\{0,1\}, etc. - this will later fit into our definition of ordinal numbers. It has nice properties like a<b being equivalent to a\in b. For now, it just helps us to have a simpler notation of the functions involved.

Definition: An n\times m chocolate bar is a function n\times m\rightarrow 2. The function \mathfrak{I}_{nm}=\lambda_x1 is the initial chocolate bar. The function \mathfrak{F}_{nm}=\lambda_x0 is the final chocolate bar. We may leave out the indices if their value is clear from the context. A legal move on an n\times m chocolate bar f is some \mathfrak{m}=(x,y)\in f^{-1}(1). Its application f \xrightarrow{\mathfrak{m}} g defines a new chocolate bar g with gab = \left\{\begin{array}{cl}0 & \mbox{for } a\ge x\mbox{ and }b\ge y\\fab & \mbox{otherwise}\end{array}\right.. A chocolate bar f is called legal if it is acquired by applying a sequence of legal moves to \mathfrak{I}, that is, if there exist (\mathfrak{m}_i)_{1\le i\le k} such that \mathfrak{I}\xrightarrow{\mathfrak{m}_1}\ldots\xrightarrow{\mathfrak{m}_k} f. It is a player 1 chocolate bar if the number k of applied moves is even (that is, 2\mathbb{Z}+k=2\mathbb{Z}), and a player 2 chocolate bar otherwise. A legal move is called a player n move if it is applied to a player n chocolate bar.
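As a small worked example of these definitions: on the 2\times 2 bar, applying the move \mathfrak{m}=(1,1) to \mathfrak{I}_{22} removes exactly the square (1,1), so the resulting bar g satisfies g(1,1)=0 and g(0,0)=g(0,1)=g(1,0)=1. Applying the further move (0,1) to g removes everything with second coordinate \ge 1, leaving only (0,0) and (1,0). Both resulting bars are legal; the first is a player 2 chocolate bar (one move applied), the second a player 1 chocolate bar.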

Lemma: Every sequence of legal moves is finite.
Proof: Trivially, the number of preimages of 1 decreases with every application, and the chocolate bar is finite.

Definition: Let \mathfrak{L}\subseteq (n\times m)^{<\omega} be the set of sequences of legal moves starting from \mathfrak{I}. These sequences form a tree of legal chocolate bars, where every edge connects a player 1 chocolate bar and a player 2 chocolate bar. A strategy for player n is a subtree in which
  • every branch of the tree ends in the final chocolate bar (that is, every branch is a whole game)
  • the final chocolate bar in every branch is a player n chocolate bar (that is, player n wins)
  • every player 3-n node in the subtree is adjacent to all the player n chocolate bars that are adjacent to it in the original tree (that is, the winning of player n does not depend on the choices of player 3-n)

We distinguish between weak and strong existence and disjunction. Usually, this is done to get a finer distinction between constructive and non-constructive propositions. However, we do not want to get too deep into that topic. So we give an informal definition.

Informal Definition: The weak disjunction A\tilde{\vee}B between two propositions A and B is a disjunction which we proved without giving, inside the proof, a possibility of deciding which of the two actually holds. Strong disjunction A\vee B always comes with a theoretical possibility of decision directly inside the proof. The weak existence \tilde{\exists} holds if only the existence of an object is proved, but no way of calculating it is given implicitly with the proof. With strong existence \exists comes such a way.

The distinction between those two kinds of existence and disjunction can be seen as one main difference between constructive and classical mathematics. While in constructive theories, one can set A\tilde{\vee}B := \lnot(\lnot A \wedge \lnot B), in non-constructive mathematics, this is equivalent to usual disjunction.


Lemma: For every m\in\mathbb{N}_2 there (strongly) exists a strategy \mathfrak{q} for player 1 from the quadratic board \mathfrak{I}_{mm}.
Proof: Player 1 has to use (1,1), and then answer every move (i,j) of the other player by (j,i). By a simple parity argument, it is easy to see that this enforces winning.
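To see the mirroring at work, take the 3\times 3 board: after player 1 takes (1,1), the remaining squares are (0,0),(0,1),(0,2),(1,0),(2,0) - an "L" with two arms of equal length. Whenever player 2 takes (0,j) with j>0, player 1 answers with (j,0) and vice versa, so after each of player 1's moves the two arms are again equally long, and eventually player 2 is left with nothing but (0,0).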

Lemma: For every chocolate bar b there either (weakly) exists a strategy for player 1, (weak-)or one for player 2.
Proof: Note first that the "or" is exclusive here: assume both existed, then consider a strategy for player 1. It will have some edge from \mathfrak{I} to some player 2 chocolate bar from which winning is no longer possible for player 2. This contradicts the existence of a player 2 strategy.
On the other hand, assume there is no strategy for player 1 and none for player 2. Then there must be some move player 1 can make such that player 2 still has no winning strategy after this move (otherwise, the winning strategies of player 2 after each possible first move would combine into a winning strategy for player 2). Player 2 can answer with a move after which player 1 again has no winning strategy, and so on. This is a contradiction, as every game has finite length.

Theorem: For every m,n\in\mathbb{N}_2 there (weakly) exists a strategy \mathfrak{q} for player 1 that wins against every strategy of player 2 from \mathfrak{I}_{mn}.
Proof: Assume player 2 had a winning strategy \mathfrak{r}. Since player 1 can make the first move (m-1,n-1), \mathfrak{r} yields a winning strategy for the player to move on the chocolate bar \lambda_{\vec{p}} (\vec{p}=(m-1,n-1))?0:\mathfrak{I}_{mn}\vec{p}. Let \vec{q} be a first move of this strategy; in particular, \vec{q}\neq(m-1,n-1). This yields a winning strategy for the player who moves second on \lambda_{\vec{p}} (\vec{p}\ge\vec{q})?0:\mathfrak{I}_{mn}\vec{p}. But player 1 could also have chosen \vec{q} from the beginning, reaching exactly this chocolate bar (the move \vec{q} removes the square (m-1,n-1) anyway), and would thus have a winning strategy for \mathfrak{I}_{mn}. Contradiction, by the above Lemma.

So we have an explicit winning strategy for finite quadratic chocolate bars. Unfortunately, even though we know we can always win, we have no winning strategy except brute-forcing for the remaining cases (apart from some additional special cases like our above Lemma).

Short introduction to ordinal numbers

Definition: A (strict) well-ordering is a (strict) total ordering in which every nonempty subset has a minimum. A transitive set is a set of which every element is also a subset. An ordinal (number) is a transitive set which is strictly well-ordered by the \in-relation.

Here are some facts about ordinals which I will not prove:
  • The above notation n=\{m\mid m<n\} is consistent with them.
  • Every well-ordering on a set is isomorphic to some unique ordinal.
  • Every decreasing sequence of ordinals is finite.
  • The collection of ordinals \operatorname{On} forms a proper class. It is not a set.
  • Every ordinal x has a successor x\cup\{x\}.
  • Some ordinals also have a predecessor. If \alpha\neq 0 has no predecessor, we say that \alpha is a limit ordinal, writing \lim\alpha.
  • The smallest limit ordinal is called \omega, it is the set of finite (natural) numbers.

Definition: A cardinal \kappa is an ordinal which satisfies \kappa = \inf \{\alpha\in\operatorname{On}\mid \exists_f f : \alpha \rightarrow \kappa \mbox{ bijective}\}.

We will only need cardinals in one small argument, in which we just need some "upper bound" for stuff we want to do.

Transfinite Chomp

A transfinite chocolate bar is defined similarly to the finite ones: an \alpha\times\beta chocolate bar is a function \alpha\times\beta\rightarrow 2, the initial chocolate bar is again \mathfrak{I}=\lambda_{x}1, and the final chocolate bar is again \mathfrak{F}=\lambda_{x}0. The definitions of legal moves and their applications are also similar: a legal move \mathfrak{m} on a chocolate bar f is a pair \mathfrak{m}=(x,y)\in\alpha\times\beta such that f(x,y)=1, and the application f\xrightarrow{\mathfrak{m}}g yields a chocolate bar gab = \left\{\begin{array}{cl}0 & \mbox{for } a\ge x\mbox{ and }b\ge y\\fab & \mbox{otherwise}\end{array}\right.. A chocolate bar is legal if it is the result of finitely many applications of legal moves to \mathfrak{I}.

We first prove a few generalizations for properties of finite chomp.

Lemma: The sequence (\mathfrak{m}_i)_i of moves is legal if and only if \forall_{i}\forall_{j<i} (\pi_1 \mathfrak{m}_i < \pi_1 \mathfrak{m_j} \vee \pi_2 \mathfrak{m}_i < \pi_2 \mathfrak{m_j}).
Proof: By definition of the application and of \mathfrak{I}, f_i^{-1}(0)=\bigcup\limits_{j=1}^i \{ \mathfrak{p} \mid \pi_1\mathfrak{p} \ge \pi_1\mathfrak{m}_j \wedge \pi_2\mathfrak{p} \ge \pi_2\mathfrak{m}_j \}. Therefore, f_i^{-1}(1)=(\alpha\times\beta)\backslash f_i^{-1}(0) = \bigcap\limits_{j=1}^i \{ \mathfrak{p} \mid \pi_1\mathfrak{p} < \pi_1\mathfrak{m}_j \vee \pi_2\mathfrak{p} < \pi_2\mathfrak{m}_j \}, from which the Lemma follows directly.

Theorem: Every sequence (\mathfrak{m}_i)_i of legal moves is finite.
Proof: Assume there were an infinite sequence (\mathfrak{m}_i)_i of legal moves. Every infinite sequence of ordinals contains an infinite non-decreasing subsequence (repeatedly pick the first index at which the remaining tail attains its minimal value), so we find indices i_1<i_2<\ldots with \pi_1\mathfrak{m}_{i_1}\le\pi_1\mathfrak{m}_{i_2}\le\ldots. Since \pi_1\mathfrak{m}_{i_j}\le\pi_1\mathfrak{m}_{i_k} for j<k, the Lemma forces \pi_2\mathfrak{m}_{i_k}<\pi_2\mathfrak{m}_{i_j}, so (\pi_2\mathfrak{m}_{i_k})_k would be an infinite strictly decreasing sequence of ordinals, which cannot exist.

Lemma: For every chocolate bar b there either (weakly) exists a strategy for player 1, (weak-)or one for player 2.
Proof: The above proof only requires every game to be of finite length, so it also applies here.

Lemma: Player 1 has a winning strategy for \mathfrak{I}_{\alpha\alpha}, \alpha\in\operatorname{On}\backslash 2.
Proof: The strategy is the same as in the finite case, but we have no parity argument here. However, it is easy to see that after every player 2 move, player 1 can restore a symmetric position (both remaining "arms" having the same length), from which he always has a mirroring move available, so eventually player 2 is forced to take (0,0).

Theorem: If both side lengths are successor ordinals, that is, the board is \mathfrak{I}_{(\alpha+1)(\beta+1)}, and it is not the 1\times 1 board, then player 1 has a winning strategy.
Proof: The argument is exactly the same as in the finite case; the strategy-stealing argument only needs the top corner square (\alpha,\beta) to exist.

One interesting part about transfinite chomp is that in some cases it is possible that the second player has a winning strategy.

Theorem: Let \beta\in\operatorname{On}\backslash\{0\}, and let a legal chocolate bar f be given. Then there is a unique h\in\operatorname{On} such that player 2 wins \lambda_{\vec{p}} ((\mathfrak{I}_{\beta h}\vec{p})\cdot(f\vec{p})).
Proof: First we show uniqueness. Assuming the existence of h_1\in h_2\in\operatorname{On} with the property, player 1 could use the move (0,h_1) in the h_2-game to reach the h_1-game and thereby win, a contradiction. We show existence by induction on \beta. In parallel, we maintain a cardinal \kappa such that all ordinals occurring are in \kappa (we have to do this to make sure everything we talk about is a set).
  • For \beta=1, we use induction on f^{-1}(1). If f^{-1}(1)=1\times 1, then h=1 and \kappa=1. Otherwise, we assume that for all legal chocolate bars (f_i)_{i\in I} with f_i^{-1}(1)\subsetneq f^{-1}(1), there already exists an h_i with the property. In particular, for all legal chocolate bars (f_i)_{i\in J\subseteq I} that can be reached from f by one move, there exist (h_i)_{i\in J}. Then h := \operatorname{mex} (h_i)_{i\in J} is the solution: trivially, every move player 1 makes leads to a state which player 2 can now win. Notice that if \overline{\kappa} was our previous cardinal, \kappa = 2^{\overline{\kappa}} is sufficient for everything we do (I am too lazy now to reason why \kappa=\overline{\kappa}^+ should be sufficient).
  • For every other \beta, we may assume that for every pair of a \beta'\in\beta and a legal chocolate bar f', some h(f',\beta') already exists. Let \kappa be 2 to the power of the supremum of all previous cardinals. Furthermore, for every legal chocolate bar f' which can be reached from \lambda_{\vec{p}}.\mathfrak{I}_{\beta\gamma}\vec{p}\cdot f\vec{p} for some \gamma\in\kappa by a move \vec{q} with \pi_1\vec{q}\in\beta, we also already have an h(f',\beta,\gamma). Then the \operatorname{mex} of all these exists (since their number is bounded by a cardinal), it is itself bounded, and it trivially satisfies what we want.
Corollary 1: The second player wins \alpha\times\omega if and only if \alpha=2.
Proof: For \alpha=1 this is trivial. For \alpha=2, player 1 trivially loses after every move that is not of the form (1,x) with x>0, so assume he chose such a move. Assuming that player 2 has no winning strategy from this point amounts to assuming that there is no finite h>x such that the bar f with \mathfrak{I}_{2h}\xrightarrow{(1,x)}f can be won by player 1 (finite h\le x only give full finite boards, which the player to move wins). But then the h whose existence the previous theorem guarantees would have to be infinite, and any two infinite values of h yield the same game, contradicting the uniqueness part of the previous theorem. For 2\in\alpha, player 1 may just choose (2,0) as his first move and wins.

On December 8th, 1994, ANSI Common Lisp was standardized. That means that today is its 18th birthday. From that age on, in many countries, you have more rights and also more duties.

The first right you usually get is the right to vote. What would Common Lisp vote for? Politics is complicated, and there is so much to choose from, while Common Lisp does not like choosing - it rather supports everything. I am not sure what it would vote for. It is probably compatible with everything.

Furthermore, at that age you become sui juris. You can do everything, but you are also responsible for everything you do. Common Lisp has its own ways of dealing with things; some of them are still unique, and many of them were unique for a long time. Meanwhile, other, younger languages like Ruby have evolved with similar concepts, and they are more widespread. Common Lisp pays the price of having been ahead of its time, at a time when Windows 95 was new.

Also, society's duty to protect you rapidly decreases. Not only are you responsible for everything you do, there are also fewer people willing to encourage you to continue finding yourself - you have had enough time for that. People begin to demand that you actually achieve valuable things, that is, either things that are economically or socially valuable, or things that show that you will become economically or socially valuable. I think Common Lisp has done this already: it provided experience with concepts that combined academic and practical principles of programming. Maybe its glorious days are gone, but it is still an interesting language to use and to learn from.

In Germany, you have to pay a fee for every quarter of the year in which you go to a doctor. That is, when you have a medical problem, you lose additional money. Well, it seems like Common Lisp does not really use any health insurance anyway. Common Lisp has several little diseases, but there was never enough money to cure them. I am afraid that there is not, and will not be, enough money to cure these diseases. But hopefully, the usual practices and implementations will have a self-healing effect.

Also, you can get a driver's license when you are 18. I am not sure how successful Common Lisp is at driving cars. But as far as I have heard, it has lost much of its importance for AI and robotics. I am not sure whether Common Lisp will ever drive a car. But if so, it would be awesome!

A few years back, when I was at school, we made the prognosis that genetic engineering would be the topic of the coming years. It was not. No human clone exists yet, nor has there been a lab disaster with new super-resistant man-eating eggplants. It is still plausible that something like this will happen sooner or later, but so far, it has not. There were some rumours about it, but it was not the big topic of these years.

The really big topic was definitely the internet, and the many new gadgets that evolved. So much seems to have changed. While back in my day™ - which is not that long ago yet - the greatest technical threat to the teachers' precious blabber was the alarm of a digital watch, it seems to be considered normal today that pupils use their laptops, tablets and smartphones.

However, while the prognosis about the big topic turned out to be wrong, many prognoses about the internet did not. One of my teachers compared the weird situation of everybody being able to publish everything, and the difficulty of checking it, to the "Wild West", which may be regulated someday. Another one claimed that education would focus more on teaching how to get the information required for some task, rather than on building a huge collection of knowledge. Wikipedia puts a lot of effort into the quality of its source material, and people have learned to distinguish between claims and quotes (which is, for example, a claim, since I have no evidence for it).

But since then, journalists are also questioned more, because people have learned that not everything a journalist says must be correct. Journalists are no longer the masters of information, and this takes a lot of possibilities for regulation away from the state.

And now, we have Anonymous, we have DuckDuckGo advertising for VanishingRights.com, we have an Internet Defense League, we have projects like Tor and Freenet, and even Google requests "Verteidige dein Netz" (defend your net).

Maybe in 50 years people will wonder whether we had no other problems. Or they will look back and wish they had lived in that glorious time. We cannot know what will actually happen. The internet is different from what people imagined 20 years ago; there is much more chaos than one would have expected. On the other hand, there are many more possibilities than one would have expected, like people with smartphones who are virtually always online, and cheap real-time communication with huge parts of the world using only a little gadget.

One may argue that the internet as a whole is a bad thing. However, it is there, it will probably stay there, and it has already influenced most of us somehow. We have something new with which we have no experience. These times are weird.

Again, I gave a talk for the best mathematics club of the multiverse, this time for the pupils who do not yet study mathematics. I thought it would be nice to have something that is seldom taught: a proper introduction to \mathbb{R} "from scratch". This is what I tried to do. Keep in mind that this has very low prerequisites and is not meant for mathematicians, but for people who may become mathematicians. Since I still do not have anything better than MathJax, JavaScript is needed to render the formulae, sorry about that. Here is a script:

Introduction

\mathbb{R} is probably the best-known object in mathematics. However, only few non-mathematicians know how it is actually defined. Usually, it is seen as some "obvious", "natural" object. However, there are several pitfalls in the step from \mathbb{Q} to \mathbb{R}, like the fact that it is not algebraically closed (\sqrt{-1}\not\in\mathbb{R}), and that 0.\overline{9}=1. To avoid such pitfalls, we should first give some axioms that we want the reals to satisfy:
  • \mathbb{R} should be a field:
    • a+b=b+a, a\cdot b=b\cdot a, (a+b)+c=a+(b+c), (a\cdot b)\cdot c = a\cdot (b\cdot c), a\cdot(b+c)=a\cdot b + a\cdot c
    • There exist additive and multiplicative neutral elements 0 and 1, such that a+0=a\cdot 1=a
    • There exists an additive inverse -a for every a, such that a+(-a)=0. There exists a multiplicative inverse a^{-1} for every a\neq 0 such that a\cdot a^{-1}=1
  • \le should be a linear (total) ordering relation on \mathbb{R}:
    • a\le a
    • a\le b and b\le a implies a=b
    • a\le b and b\le c implies a\le c
    • a\le b or b\le a
  • \mathbb{R} should be an ordered field regarding \le:
    • a\le b implies a+c\le b+c
    • 0\le a and 0\le b implies 0\le a\cdot b
  • \le is complete over \mathbb{R}:
    • If a \le x for all x\in I, where \emptyset\neq I\subseteq\mathbb{R}, then there exists a maximal a'\ge a with a'\le x for all x\in I.
    • If a \ge x for all x\in I, where \emptyset\neq I\subseteq\mathbb{R}, then there exists a minimal a'\le a with a'\ge x for all x\in I.

It can be shown that these axioms are sufficient to characterize a unique structure "up to isomorphism", that is, two such structures are equivalent up to a "renaming" of the elements. However, this just says that if such a structure exists, it is unique. It does not show that it actually exists, which is what we will approach.


Equivalence Relations

Equivalence relations are an extremely important concept in mathematics. They show up in almost every mathematical topic, such as logic, algebra and topology. It is important to understand this concept in order to define \mathbb{R}, but it is also worthwhile for its own sake.
An equivalence relation is a binary relation that usually somehow captures similarities between objects. Formally, an equivalence relation \equiv on a set S is a binary relation which obeys the following three axioms:

  • Reflexivity: For all a\in S we have a\equiv a
  • Symmetry: For all a,b\in S we have a\equiv b \Leftrightarrow b\equiv a
  • Transitivity: For all a,b,c\in S if we have a\equiv b and b\equiv c then we also have a\equiv c.

For every a\in S we can define its equivalence class [a]_\equiv=\{x\in S|x\equiv a\}.

Lemma: Two equivalence classes [a]_\equiv and [b]_\equiv are either equal or disjoint.

Proof: Every equivalence class contains at least one element, so they cannot be both disjoint and equal. Assuming they are not disjoint, they must have a common element [a]_\equiv \ni c \in [b]_\equiv, but then a\equiv c and c\equiv b, so by transitivity, a\equiv b, hence [a]_\equiv = [b]_\equiv.

Hence, the set of equivalence classes over S regarding \equiv is a partition of S. We call this set S/\equiv, say "S modulo \equiv".

Trivial examples are standard equality =, in which case S/= can be identified with S in a canonical way, and the "squash"-relation that makes every two elements equal, in which case there is only one equivalence class.

Furthermore, important equivalence relations are the residue equalities =_n on \mathbb{Z}, where a=_n b means that a-b is divisible by n. Their equivalence classes are the residue classes, and they can be added and multiplied elementwise, forming the residue class rings.
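For example, for n=3 there are exactly three residue classes, [0]_{=_3}=\{\ldots,-3,0,3,6,\ldots\}, [1]_{=_3}=\{\ldots,-2,1,4,7,\ldots\} and [2]_{=_3}=\{\ldots,-1,2,5,8,\ldots\}, and elementwise computation gives, for instance, [2]_{=_3}+[2]_{=_3}=[4]_{=_3}=[1]_{=_3} and [2]_{=_3}\cdot[2]_{=_3}=[1]_{=_3}.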


Natural Numbers

Our bootstrapping of \mathbb{R} begins with a slightly simpler object, \mathbb{N}. First, we give axioms for something we shall call a Peano Structure:

A triple (S,z,\sigma), where S is a set, z\in S and \sigma:S\rightarrow S is a function, is called a Peano Structure, if

  • There is no t\in S with \sigma(t)=z
  • For every t\in S, \sigma(t)\in S
  • For every s,t\in S, if \sigma(t)=\sigma(s), then t=s
  • If U\subseteq S, z\in U and if t\in U\Rightarrow\sigma(t)\in U, then U=S

The element z is called the zero, and \sigma is called the successor function of that Peano structure.

Mathematicians call a function between two structures that preserves relevant parts of that structure an isomorphism. In case of Peano Structures, a function f:S\rightarrow T between two Peano Structures (S,z,\sigma),(T,t,\rho) is an isomorphism if 

  • f is bijective
  • f(z)=t
  • For all x\in S we have f(\sigma(x)) = \rho(f(x)), and for all y\in T we have f^{-1}(\rho(y)) = \sigma(f^{-1}(y)).

If such an isomorphism exists between two Peano Structures, they are called isomorphic. They are then essentially equal structures, in the sense that there is no mathematically meaningful difference between them, and the one structure is a result of just renaming the elements of the other.

Lemma: Any two Peano structures are isomorphic (within one set universe).

Proof: Let two Peano structures (S,z,\sigma),(T,t,\rho) be given. We define a function f:S\rightarrow T recursively by f(z):=t, f(\sigma(x)):=\rho(f(x)). Therefore, the second condition and one half of the third are already satisfied by definition.

We show that f is surjective: We know that the image of f is a subset of T, and we know that t=f(z) is in this image. If y is in the image of f, say, y=f(x) for some x, then we know that \rho(y)=f(\sigma(x)) by definition, hence, \rho(y) is also in the image of f. By the axioms of peano structures, this means that the image of f is T.

Now consider the function g:T\rightarrow S with g(t)=z, g(\rho(x))=\sigma(g(x)). By the same argument as above, g is surjective. Furthermore, since we have f(g(t))=f(z)=t, and from f(g(x))=x it follows that f(g(\rho(x))) = f(\sigma(g(x))) = \rho(f(g(x))) = \rho(x), we have f(g(x))=x for all x\in T, and similarly g(f(x))=x for all x\in S; hence g=f^{-1}. Therefore, f is also injective, and the remaining conditions of the isomorphism definition are proved.

What we now know is that if there is a Peano structure, then there is - up to isomorphism - only one. There are several ways to show that (in a set universe) there actually is one. The common way is to single out a special set of finite sets, the finite cardinal numbers. In ZFC, this can be done using the well-ordering theorem, the axiom of infinity and the axiom of comprehension. But doing this is outside our scope, so we just believe the following theorem:

Theorem: A peano structure exists (in every set universe).

And now we can take an arbitrary peano structure as our set of natural numbers, \mathbb{N}, by setting 0:=z and (n+1):=\sigma(n) [note that we have yet to define addition, this is just for intuition]. The first three axioms of peano structures become obvious arithmetical laws, the fourth axiom becomes the induction axiom. We can define addition recursively by

0+x:=x, \sigma(y)+x:=\sigma(y+x).
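For example, writing 1=\sigma(0), 2=\sigma(\sigma(0)) and 3=\sigma(\sigma(\sigma(0))), the definition unfolds 2+1 as \sigma(\sigma(0))+\sigma(0)=\sigma(\sigma(0)+\sigma(0))=\sigma(\sigma(0+\sigma(0)))=\sigma(\sigma(\sigma(0)))=3.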

Exercise: Prove x+y=y+x

We can define multiplication recursively by

0\cdot x=0, \sigma(y)\cdot x = x+y\cdot x

The arithmetical laws follow from these definitions.


Integers and Rational Numbers

Now that we have defined \mathbb{N}, we want to reach out for \mathbb{Z}, and it turns out that this is much easier than getting \mathbb{N} in the first place. To a non-mathematician these definitions may sound rather overcomplicated, but it turns out that they are much easier to handle than "less complicated" ones.

The need for negative numbers comes from the fact that subtraction is not total on \mathbb{N}; for example, 2-3\not\in\mathbb{N}. Subtraction itself is just the operation of solving an additive equation, 3+x=2 in our example. To be consistent with most of our arithmetical laws so far, it is desirable that the solution of 3+x=2 be equal to the solution of 4+x=3 and generally of (3+n)+x=2+n, due to cancellation.

We observe that the equations a+x=b and c+x=d must have the same solutions in x if a+d=b+c. Now, let us have a look at the set \mathbb{N}^2 of pairs of natural numbers. If we define (a,b)\approx(c,d):\Leftrightarrow a+d=b+c, then \approx is an equivalence relation on \mathbb{N}^2: Reflexivity and symmetry are obvious. For transitivity, consider (a,b)\approx(c,d) and (c,d)\approx(e,f). That is, a+d=b+c and c+f=e+d. Since we have equality of sums here, we may subtract without getting negative results, therefore c=e+d-f, so a+d=b+e+d-f and hence a+f = e+b, which means (a,b)\approx(e,f). Now we define \mathbb{Z}:=\mathbb{N}^2/\approx.
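For example, (0,3)\approx(2,5)\approx(1,4), since 0+5=3+2 and 2+4=5+1. This common equivalence class collects exactly the pairs (a,b) with b=a+3, and with the identification made below it plays the role of the integer -3.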

We furthermore define [(a,b)]_\approx + [(c,d)]_\approx = [(a+c,b+d)]_\approx. We have to show that this is well-defined, which is left as an exercise.

Now, at schools you are taught that \mathbb{N}\subseteq\mathbb{Z}, and even in higher mathematics this is often used. With our definition, this is of course not the case, but we can identify every natural number n with the equivalence class [(n,0)]_\approx. As isomorphic structures do not need to be distinguished, we can safely redefine \mathbb{N} to be the set of these equivalence classes. The changes we would have to accomplish are below the usual abstraction level.

The structure \mathbb{Z} with addition and multiplication is called a Ring.

Rational numbers are similar, this time we add solutions to a\cdot x=b, but we have to avoid the pathological case of a=0. The reason is that we want the rationals to be a Field. Similar to how we did before, we define an equivalence relation \simeq on \mathbb{Z}\times(\mathbb{Z}\backslash\{0\}), namely (a,b)\simeq(c,d):\Leftrightarrow a\cdot d = b\cdot c. You are probably familiar with this representation of rational numbers, you just write \frac{a}{b} instead of (a,b) - we just chose an equivalence that tells when two fractions are equal, and we have \frac{a}{b}=\frac{c}{d}\Leftrightarrow a\cdot d=b\cdot c.


Cauchy Sequences

The step from rational numbers to real numbers is a bit more complicated. It is not sufficient to add solutions of special equations to our system. However, of course, in many cases, the real numbers are just solutions of equations. If we add zeroes of all polynomials, for example, we would get (more than) the algebraic real numbers over \mathbb{Q}. But there are also transcendental numbers like \pi.

Let us first have a look at one of the first irrational numbers ever discovered: \sqrt{2}.

Theorem: There is no rational solution to the equation x^2-2=0.

Proof: By monotonicity of x\mapsto x^2 we know that a positive solution must lie between 1 and 2, and in particular cannot be an integer. Now assume there was some solution x=\frac{p}{q} with \frac{p^2}{q^2}=2, where w.l.o.g. gcd(p,q)=1. Then also gcd(p^2,q^2)=1. But p^2=2q^2, and as 2 is prime, p must be divisible by 2, therefore p=2n for some n. But then 4n^2 = 2q^2 yields 2n^2=q^2, and by the same argument as before we get that q is divisible by 2, which contradicts our assumption gcd(p,q)=1.

So we know that \sqrt{2}\not\in\mathbb{Q}, but still, we can define the set \{x\in \mathbb{Q}|x^2\le 2\}, and this set will be an open interval, with rational numbers approaching \sqrt{2} up to an arbitrary precision. The set \{x\in \mathbb{Q}|x^2\le 2\} is bounded, but has no least upper bound in \mathbb{Q}. We therefore say that \mathbb{Q} is not complete.

However, for every given 0<\varepsilon\in\mathbb{Q} we can give a rational number r\in\mathbb{Q} such that |r-\sqrt{2}|<\varepsilon. We now touch the topic of infinitesimal analysis; however, do not fear, we do not have to dive into it too deeply. The crucial concept we have to define now is the Cauchy sequence:

Definition: A sequence (a_i)_{i\in\mathbb{N}} of rational numbers is called a Cauchy sequence, if for every 0<\varepsilon\in\mathbb{Q} we can find an N\in\mathbb{N} such that for all j,k\ge N we have |a_j - a_k|\le\varepsilon.

That is, if we have such an \varepsilon, we know an N such that no two elements of the sequence beyond a_N differ by more than \varepsilon, and since we may choose this \varepsilon arbitrarily small, the sequence "approximates" something.



For some of these sequences, we can give a rational number which they "obviously" approximate. For example, somehow, the sequence (\frac{1}{i})_{i\in\mathbb{N}} "approximates" 0: it gets arbitrarily close to 0. On the other hand, if we set a_i = \max \{\frac{n}{2^i}\mid n\in\mathbb{N}, (\frac{n}{2^i})^2<2\}, we get a Cauchy sequence which does not approximate any rational number. However, we want this to be a number - in general, we want things to which we have an intuitive arbitrarily precise "approximation" to be numbers - we want the numbers to be complete.
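Concretely, the first terms of this sequence are a_0=1, a_1=1, a_2=\frac{5}{4}, a_3=\frac{11}{8}, a_4=\frac{11}{8}, a_5=\frac{45}{32},\ldots - dyadic fractions whose squares approach 2 from below, even though no rational number is reached as a limit.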

The easiest way to make sure this is the case is to just postulate that every Cauchy sequence has such an approximation (a "limit point"), but then the problem remains that we do not really know how the objects we add should look like. So we simply do the first thing that comes to your mind when you are a mathematician: Just use the Cauchy sequences themselves, and define a proper equivalence relation on them.

To do this, we first have to make a formal definition of what we mean by an "approximation"; so far, we can only give it for the case where the approximated number is itself rational:

Definition: A sequence (a_i)_{i\in\mathbb{N}} converges towards a\in\mathbb{Q} if for every 0<\varepsilon\in\mathbb{Q} we can find an N\in\mathbb{N} such that for all j\ge N we have |a_j - a|\le\varepsilon.

As an exercise, check whether our above statements on "approximations" hold for this definition. Now we define an equivalence relation on Cauchy sequences:

Definition: Two Cauchy sequences (a_i)_{i\in\mathbb{N}} and (b_i)_{i\in\mathbb{N}} converge towards the same point, denoted by (a_i)_{i\in\mathbb{N}}\asymp(b_i)_{i\in\mathbb{N}}, if the sequence (|a_i - b_i|)_{i\in\mathbb{N}} converges towards 0.

Theorem: \asymp is an equivalence relation.

Proof: Reflexivity holds trivially, since (|a_i-a_i|)_{i\in\mathbb{N}}=(0)_{i\in\mathbb{N}}. Symmetry is trivial since |a_i-b_i|=|b_i-a_i|. For transitivity, assume (a_i)_{i\in\mathbb{N}}\asymp(b_i)_{i\in\mathbb{N}} and (b_i)_{i\in\mathbb{N}}\asymp(c_i)_{i\in\mathbb{N}}. Now let an \varepsilon be given. By definition, we find an N such that for all i\ge N we have |a_i - b_i|\le\frac{\varepsilon}{2}, and similarly, we find an M such that for all i\ge M we have |b_i - c_i|\le\frac{\varepsilon}{2}. But then we have \varepsilon\ge |a_i-b_i| + |b_i-c_i| \ge |a_i-c_i| for all i\ge\max\{M,N\}. Therefore, (a_i)_{i\in\mathbb{N}}\asymp(c_i)_{i\in\mathbb{N}}.

Let \mathbb{Q}^{\rightarrow} be the set of Cauchy sequences, then we define \mathbb{R}:=\mathbb{Q}^{\rightarrow}/\asymp. We can embed the rational numbers into the real numbers by x\mapsto [(x)_{i\in\mathbb{N}}]_\asymp. Furthermore, we define addition componentwise by

(a_i)_{i\in\mathbb{N}}+(b_i)_{i\in\mathbb{N}} = (a_i+b_i)_{i\in\mathbb{N}}

and the same for multiplication. The simple arithmetic laws then also follow componentwise. For \le, we say that [(a_i)_{i\in\mathbb{N}}]_\asymp\le[(b_i)_{i\in\mathbb{N}}]_\asymp holds if for every rational \varepsilon>0 we have a_i\le b_i+\varepsilon for all sufficiently large i; \ge is defined analogously.
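For example, with this definition both [(\frac{1}{i})_{i\in\mathbb{N}}]_\asymp\le[(0)_{i\in\mathbb{N}}]_\asymp and [(0)_{i\in\mathbb{N}}]_\asymp\le[(\frac{1}{i})_{i\in\mathbb{N}}]_\asymp hold, which is consistent with the fact that the two sequences converge towards the same point and thus represent the same real number, namely 0.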

Of course, we still have to show a lot of things, like showing that this is really a field with the desired axioms, and showing that addition and multiplication defined in this way are compatible with the equivalence relation. However, this is just technical, and we omit it, since we think that for people not familiar with this topic, it is already sufficiently complicated.

Prospects

For everyone who does not yet feel challenged enough, I have a few prospects. The first one is from set theory over the real line:

Hypothesis (Suslin): Let S be a set and \le satisfy the following conditions:

  • \le is a linear ordering without a smallest or largest element: a\le a, a\le b\ \&\ b\le a \Rightarrow a=b, a\le b\ \&\ b\le c\Rightarrow a\le c, and a\le b or b\le a for all a,b.
  • \le is dense: If a\le b and a\neq b, then there is a c with a\neq c\neq b such that a\le c\le b.
  • \le is complete: If for some nonempty X\subseteq S there exists an a\in S such that all x\in X are \le a ("X is bounded"), then there exists a smallest b\in S with this property ("b is the supremum of X in S").
  • Define intervals in the canonical way by [a;b]:=\{x\in S|a\le x\le b\}. Then for every set I of pairwise disjoint intervals [a;b] with a\neq b, there exists a surjective function \mathbb{N}\rightarrow I ("every set of pairwise disjoint non-degenerate intervals is countable").

Then (S,\le) is isomorphic to (\mathbb{R},\le).

Theorem: Suslin's Hypothesis is undecidable over standard set theory (ZFC).

Of course, we will not prove this, as it involves a lot of model theory. Let us move on with our second prospect, the p-adic numbers. If we look at the proof that \asymp is an equivalence relation, we notice that what we really needed to prove this were a few inequalities on the absolute value, namely:

  • |a-b|=|b-a|
  • |a-a|=0
  • |a - b|+|b-c| \ge |a-c|

The third one is called the triangle inequality. It is crucial for the transitivity proof. On the other hand, we might think of some other function which satisfies the same axioms. And in fact, there is a mathematical concept which generalizes this:

Definition: A function \delta : X^2\rightarrow\mathbb{Q}^{\ge 0} is called a metric (or distance function) on X if it satisfies

  • \delta(x,x)=0
  • \delta(x,y)=\delta(y,x)
  • \delta(x,y)+\delta(y,z)\ge\delta(x,z)

Note that this definition is usually made with real numbers instead of rational numbers, and that one usually also requires \delta(x,y)=0 to hold only for x=y (the \delta_p defined below satisfies this).

Now, as above, we can define \asymp_\delta for a given metric \delta, and it will also be an equivalence relation.

Now, let p be some prime number. For every rational number r\neq 0, we can find a unique integer n such that r=p^n\cdot\frac{a}{b}, where gcd(a,b)=1, and neither a nor b is divisible by p. Define |r|_p:=p^{-n} for every such r,p,n, and |0|_p:=0. Define \delta_p(x,y):=|x-y|_p.
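For example, |12|_2=2^{-2}=\frac{1}{4}, since 12=2^2\cdot\frac{3}{1}, and |12|_3=\frac{1}{3}, since 12=3^1\cdot\frac{4}{1}; on the other hand \left|\frac{3}{8}\right|_2=2^3=8, since \frac{3}{8}=2^{-3}\cdot\frac{3}{1}. So numbers divisible by a high power of p are p-adically small, while numbers with a high power of p in the denominator are p-adically large.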

Lemma: |a|_p + |b|_p\ge |a+b|_p

Proof: Let a=p^n\frac{a'}{a''} and b=p^m\frac{b'}{b''} with fully reduced fractions whose numerators and denominators are not divisible by p, and let w.l.o.g. m\le n. Then a+b=p^n\frac{a'}{a''}+p^m\frac{b'}{b''} = p^m (\frac{a'\cdot p^{n-m}\cdot b'' + b'\cdot a''}{a''\cdot b''}). Now, a''b'' is not divisible by p, so the fraction can only contribute non-negative powers of p. Therefore, |a+b|_p = p^{-m-m'} for some m'\ge 0. We now want p^{-m}+p^{-n} \ge p^{-m-m'}, that is, p^{m'}+p^{m-n+m'}\ge 1, which is trivial, since m'\ge0.

Theorem: \delta_p is a metric.

Proof: Trivially, \delta_p(a,a)=0 and \delta_p(a,b)=\delta_p(b,a). It remains to show the triangle inequality, that is, \delta_p(a,b)+\delta_p(b,c)\ge\delta_p(a,c). By the above lemma, we have \delta_p(a,c)=|a-c|_p=|(a-b)+(b-c)|_p\le |a-b|_p+|b-c|_p = \delta_p(a,b)+\delta_p(b,c).

The set we get when identifying sequences according to this metric is called \mathbb{Q}_p, the p-adic numbers. It has many similarities with \mathbb{R}, and you can even do analysis on it.

When I first caught sight of it
It was immediately clear to me
I love this funny thing
That hung there in my grass

I love the distinctive scent
To smell it in the garden air
To watch the way the brown dough
Slowly turns white over time

Later on it went out with me
But I got thrown out of the café
What a poor little brown thing
That hung there in my grass

Nobody wants to see the little thing
Even if it comes along with me
All the people walk away
Only the dogs come close

I love this funny thing
That hung there in my grass
Tomorrow I will go buy flowers
I am in love with a pile of shit

Let's do a little C++! Let us explore the amazingly well-designed template system of C++.

Will the following code compile?

class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  NAT* doWitnessing () {
    return (C*)(0xbad);
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<ZERO> > > ());
}


Well, probably. But what about

class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  NAT* doWitnessing () {
    return (C*)(0xbad);
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<FAIL> > > ());
}


Well, hopefully not, will it? Hm. What about

class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  NAT* doWitnessing () {
    return new C;
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<FAIL> > > ());
}


Probably not. So how about

#include <cstdlib>
class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  NAT* doWitnessing () {
    return (C*) random();
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<ZERO> > > ());
}


Of course it compiles. Why shouldn't it? And so will

#include <cstdlib>
class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  NAT* doWitnessing () {
    return (C*) random();
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<FAIL> > > ());
}


maybe. Maybe not. How about

class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  void doWitnessing () {
    if (false) NAT a = C();
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<FAIL> > > ());
}


and furthermore, what about

class NAT {};

class ZERO : public NAT {};

class FAIL {};

template <class C>
class SUCC : public NAT {
public:
  void doWitnessing () {
    if (false) NAT a = C();
  }
  SUCC () {
    doWitnessing();
  }
};

int main (void) {
  (SUCC<SUCC<SUCC<ZERO> > > ());
}


Got that?
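In case the pattern is not obvious, the deciding factor (at least as far as I understand the instantiation rules) is that member function bodies of a class template are only instantiated, and thus only type-checked, when they are actually used. A bare cast like (C*)(0xbad) or (C*) random() only forces the class C itself to be instantiated, not its member function bodies, so the inner SUCC<FAIL> levels are never entered and the forbidden conversion from FAIL* to NAT* is never attempted - those versions compile even with FAIL inside. new C and NAT a = C(), however, need C's constructor, which instantiates the next level's constructor, and so on down to FAIL, where the conversion to NAT finally fails. And the if (false) does not save you, because instantiation happens at compile time, not at run time. Here is a minimal sketch of this mechanism, separate from the puzzle above (the names Probe and no_such_member are made up for the illustration):

template <class C>
class Probe {
public:
  void broken () {
    // This body is only type-checked when broken() is instantiated for a
    // concrete C, i.e. when it is actually called somewhere.
    typename C::no_such_member x;
    (void) x;
  }
};

int main (void) {
  Probe<int> p;  // fine: only the class and the declarations of its members
                 // are instantiated, not the body of broken()
  (void) p;
  // p.broken(); // uncommenting this makes the program ill-formed, since
                 // int has no member type called no_such_member
}

The same reasoning applies to the SUCC examples: as long as nothing ever actually constructs a FAIL, the compiler never has to notice that it is not a NAT.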


Found at XKCD
The well-known problem of transferring a file is comparatively hard to solve: there are plenty of ways to transfer files, and as long as you work with somebody who knows how to use them, you will usually find one that works. The original problem arose from NATs, which do not give you a direct port to listen on.

Meanwhile, with the rise of VoIP, this problem is solved in most cases; routers can be configured to support UPnP, for example. The main problem is that plenty of people do not know how to use tools like, say, scp. And many people are not willing to use any program that is not already installed on their system. They will try to use filehosters, and if you insist on security, they will compress everything into an encrypted RAR archive - if they have RAR.

So there is a certain motivation to run at least your own upload server - at least you can (mostly) rely on the other person having some sort of web browser. Of course, there are plenty of PHP upload scripts out there. However, the other person might have a slow internet connection, and the file may be several gigabytes large, which is mostly outside the bounds of your usual Apache and PHP configuration. And changing this configuration can be a real pain, as the relevant options are scattered all over the system, and you are likely to miss one and make the upload fail - and you may not want to configure such an upload site just for casual use.

So, my approach was to use Clisp with Hunchentoot. Clisp is small, and Hunchentoot is simple but powerful and gives me full control. I installed the necessary packages with Quicklisp and produced the following code:

;; Load the required libraries via Quicklisp.
(ql:quickload :cl-base64)
(ql:quickload :hunchentoot)

;; /index serves a minimal upload form.
(hunchentoot:define-easy-handler (index :uri "/index") ()
    "<form enctype=\"multipart/form-data\" action=\"upload\" method=\"POST\">
 Please choose a file: <input name=\"uploaded\" type=\"file\" /><br />
 <input type=\"submit\" value=\"Upload\" />
 </form>")

;; /upload receives the POSTed file. Hunchentoot hands an uploaded file over as
;; a list (temporary-file original-filename content-type); the temporary file
;; is moved to /tmp/ under the Base64-encoded original name.
(hunchentoot:define-easy-handler (upload :uri "/upload") (uploaded)
    (rename-file (car uploaded)
                 (concatenate 'string "/tmp/"
                              (cl-base64:string-to-base64-string (cadr uploaded))))
    "SUCCESS")

;; Listen on port 8000.
(hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 8000))

The uploaded file is moved to /tmp/ and named after the Base64 representation of the filename given by the client (to avoid exploits through malicious filenames). As you may notice, this is a script that is not made for "production use": I can just run it when I need it, let it run until the file is uploaded, and then stop it again. This is exactly what I need.

Well, one possible extension would be to add UPnP support, so I do not have to manually forward my router's port to use it.
Update: Note that you should set LANG=C or something similar, otherwise binary data might be re-encoded.

I just happened to see this, which is about a rant by Linus Torvalds about Gnome 3. They conclude that XFCE will probably become more popular in the future, and I think that is a correct conclusion. XFCE is not that bloated, but still versatile. However, I am afraid that XFCE will suffer the fate of KDE and Gnome sooner or later (especially when I see how the Xubuntu people mangled it).

The Linux desktop seems to be "stuck" somehow. The systems get more and more complicated, with ever more little tweaks popping out of screen edges when you press some key, but usability does not improve in any way. That is, you can get used to the desktop environments, but advanced users in particular want to adapt the computer to their needs rather than the other way around.

I think there is a simple fact that we just have to admit: The Linux desktop is good as it is.

No matter how many tweaks you add, the basic design has not changed and just leaves a few alternatives which are commonly used. There are niche products like tabbed and tiling window managers, there are the usual window managers with a task bar, there are docks, and there is stuff like a common main menu for all applications. There are a few tweaks like key combinations that make additional bars appear. And except for a few newer graphical tweaks (like window previews in the taskbar or the Mac's Exposé) that need GPU acceleration, everything has been there for years. And still, the old Windows 98 desktop in its default configuration is much better for both beginners and advanced users than much of the new stuff; its design may not be the most beautiful, but it is acceptable.

Of course, a desktop environment does not only consist of window management. But the same goes for file managers and editors: there are several approaches to their UIs, but all of them have been around for years. There is probably always some space left for innovation, but at least currently I do not see any.

Therefore, it seems like all this blabber about new versions of desktop environments is mainly about the recombination of already existing concepts, and about design. I would appreciate it if the responsible people concentrated more on making the software stable than on making it new. In that sense, I like projects like Trinity very much - but of course, for a programmer, it is more prestigious to create something entirely new than to just keep the old stuff stable.

So here is a suggestion for something really "new": As I pointed out, I think all desktop programming is mainly about the recombination of existing features. There is a lot of experience with these features, and it applies to most operating systems. So why not use this knowledge to build a "meta-framework" for defining how features can be combined, platform-independently? What I am thinking of is some simple scriptable API with which one can program and configure one's own environment. For example, I could imagine something based on HTML and CSS, with some extensions that allow embedding program windows without having to care about how exactly the graphics backend works: you would include program windows in your DOM tree via a special element, just as you would include images. HTML and CSS because these are formats that have been around for years and are well-tested, well-supported and well-maintained - but of course, any other similar format would do as well.
Such a thing is definitely not trivial to implement, but I think it should be possible, and it would be nice.

And it would be better than bloating up existing environments.