As the German Wikipedia keeps getting destroyed by arrogant, pimple-faced, pubescent high schoolers, there is - unlike in the English Wikipedia - no article about the Glivenko Theorem in the German Wikipedia anymore. As far as I remember, there was one in the past. Currently, the article in the English Wikipedia is suggested to be merged with the article about the Gödel-Gentzen negative translation, which sounds reasonable, as the latter is a generalization - but that one does not have an article in the German Wikipedia anymore either.

The German Wikipedia no longer has the quality it used to have. There is no "freedom" anymore: the last time I edited something, it took weeks (!) for the article's usurper to confirm my changes. I do not see the "freedom" it proclaims anymore.

Besides general boredom, I take that as an occasion to blog about the Glivenko Theorem. In particular, I want to try out how well (or badly) I can get MathJax to produce proof trees. Apparently, it does not look so bad, but there is no real way to create horizontal lines; stuff like \cline does not work in MathJax. If someone has suggestions on how to make them look better, please let me know. Currently I am using my own pre-generated HTML tables, which look good, but I would prefer a solution that is more integrated into MathJax.

In particular, this post's primary purpose is to be informative for people who have not been concerned with logic so far. Every mathematician should already know everything stated here.

The Glivenko Theorem says that in propositional logic, if P→Q is provable classically, then ¬¬P→¬¬Q is provable intuitionistically.

There are several ways to prove this. But let us first look at what that means.

In the end, a sufficiently simple calculus for this is the Hilbert calculus, which implements only modus ponens, that is, it has only one rule: from A and A→B we may derive B. Additionally, if we have a proof of B from A, we may derive A→B and strike A from the list of axioms.

$$\dfrac{A\to B \qquad A}{B} \qquad\qquad \dfrac{\begin{matrix} [A] \\ \vdots \\ B \end{matrix}}{A\to B}$$


We get intuitionistic propositional logic by adding the following axiom schemes (→ is right-associative and binds less strictly than ∨ and ∧):

A→A∨B, B→A∨B, A→B→A∧B

A∨B→(A→C)→(B→C)→C, A∧B→(A→B→C)→C

Furthermore, we introduce a special propositional symbol ⊥, the falsum, and add the axiom of ex falso quodlibet

⊥→A

which means that from a false proposition we may derive anything.
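
As a sanity check, all of these schemes are indeed intuitionistically provable. Here is a minimal sketch in Lean 4 (just an illustration on the side, not part of the calculus above), with False playing the role of ⊥:

-- each axiom scheme is provable without any classical axiom
example {A B : Prop}   : A → A ∨ B                     := Or.inl
example {A B : Prop}   : B → A ∨ B                     := Or.inr
example {A B : Prop}   : A → B → A ∧ B                 := And.intro
example {A B C : Prop} : A ∨ B → (A → C) → (B → C) → C := fun h f g => h.elim f g
example {A B C : Prop} : A ∧ B → (A → B → C) → C       := fun h f => f h.1 h.2
example {A : Prop}     : False → A                     := False.elim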

We define ¬A := A→⊥, read "not A". To get classical logic, we add the axiom scheme of duplex negatio affirmat

((A→⊥)→⊥)→A

which becomes

¬¬A→A

in the above notation.

To prove the Glivenko Theorem, we show that every rule and axiom of classical logic is derivable in double-negated form in intuitionistic logic. Therefore, the first thing we shall prove is that from ¬¬(A→B) and ¬¬A we may derive ¬¬B, corresponding to modus ponens. In words, the proof goes:

Assuming A→B and ¬B, we know that ¬A must also hold, which contradicts ¬¬A. Therefore one of these assumptions must be wrong: we obtain ¬(A→B), which contradicts ¬¬(A→B). Therefore, the assumption ¬B must be wrong, hence ¬¬B.

Formally, it goes:





$$\dfrac{\neg\neg(A\to B) \qquad \dfrac{\neg\neg A \qquad \dfrac{[\neg B] \qquad \dfrac{[A\to B] \quad [A]}{B}}{\neg A}}{\neg(A\to B)}}{\neg\neg B}$$
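
The same derivation can also be written down in Lean 4 (a sketch for illustration only; there ¬P is simply P → False), and it needs no classical axiom:

example {A B : Prop} (hab : ¬¬(A → B)) (ha : ¬¬A) : ¬¬B :=
  fun nb => hab (fun ab => ha (fun a => nb (ab a)))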

Furthermore, we need to show that if ¬¬B can be derived from ¬¬A, then ¬¬(A→B) also holds. So let us assume that ¬¬B follows from ¬¬A, but ¬(A→B). Assuming ¬A holds, ex falso quodlibet ⊥→B gives us A→B, which contradicts ¬(A→B). So ¬¬A, which implies - by assumption - ¬¬B. Now, assuming B, we can conclude A→B, which also contradicts ¬(A→B), so ¬B. But ¬¬B and ¬B contradict each other. So ¬¬(A→B). I admit, this proof is somewhat hard to understand. Here is the formal tree:



$$\dfrac{\begin{matrix} \dfrac{[\neg(A\to B)] \qquad \dfrac{\dfrac{\bot\to B \qquad \dfrac{[\neg A] \quad [A]}{\bot}}{B}}{A\to B}}{\neg\neg A} \\ \vdots \\ \neg\neg B \end{matrix} \qquad \dfrac{[\neg(A\to B)] \qquad \dfrac{[B]}{A\to B}}{\neg B}}{\neg\neg(A\to B)}$$

Notice that from B we concluded A→B. This is correct according to the above definitions, but may not be immediately clear.
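
Again, a Lean 4 sketch of the same argument (illustration only): from any derivation h turning ¬¬A into ¬¬B we obtain ¬¬(A→B), purely intuitionistically.

example {A B : Prop} (h : ¬¬A → ¬¬B) : ¬¬(A → B) :=
  fun nab =>
    h (fun na => nab (fun a => (na a).elim))  -- ¬¬A via ex falso, hence ¬¬B
      (fun b => nab (fun _ => b))             -- ¬B, refuting the ¬¬B above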

An easier proof can be given for A→¬¬A, which we need in order to convert all of our additional axioms for the connectives.


$$\dfrac{A \qquad [\neg A]}{\neg\neg A}$$
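
In Lean 4, this step is a one-liner (again just an illustration):

example {A : Prop} : A → ¬¬A := fun a na => na a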

Finally, we need ¬¬(¬¬A→A) to be derivable, and by the above proof, it is sufficient to prove ¬¬¬¬A→¬¬A, because that is ¬¬(¬¬A)→¬¬(A) and therefore implies ¬¬((¬¬A)→(A)). It can be derived by:


$$\dfrac{\neg\neg\neg\neg A \qquad \dfrac{[\neg\neg A] \qquad [\neg A]}{\neg\neg\neg A}}{\neg\neg A}$$
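
And the corresponding Lean 4 sketch, collapsing four negations into two:

example {A : Prop} : ¬¬¬¬A → ¬¬A := fun h na => h (fun nna => nna na)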



We have proved the Glivenko Theorem. W00t.

The feeling when diving into a virtual world: for a healthy person it can be relaxing, for a disabled person, I guess, it can be a form of escape from his body.

Unfortunately, while those who are able to go outside are given all kinds of gadgets, those who cannot seem to be widely forgotten by the game industry.

How refreshing to read that there is someone who cares. Of course, this is not the first time I have heard of disabled gamers, but this time it is in a completely different context.

One may ask whether quadriplegic people do not have more important problems - at least this is a question I expect from the ordinary™ person. Actually, most people have more important problems than gaming, regardless of whether they are disabled or not. But thinking about it, I actually do not know anything I could do without moving. I do not like to walk, I do not like to get up, but I am aware and thankful that I can. Without this ability, what can you do all the day? You cannot eat without help, you cannot even read a book yourself, without some gadget that helps you. Playing a video game, getting a bit of the feeling of being able to move, I am pretty sure that this can make a disabled person less depressed. And there are games that can be played together. In the virtual world, they are not disabled anymore.

Beyond that, let us not forget that these controls, and the experience gained with them, might sooner or later help in developing better real-life gadgets for those people.

Today was the first time that I ran into an unexpected race condition - with a bash script for a self-made automounter. The automounter is invoked by udev. I did not expect udev to run the same script twice in parallel, and so an error occurred.

The theoretical solution is simple: use locking. But in fact, I have never seen bash scripts do anything beyond

[ -e my_lock_file ] || touch my_lock_file && ...
to create locks, and this is dangerous, because between testing for and touching the lock file there is time for another process to do its own test - the locking is not atomic.

I would have been surprised if this problem did not have a solution, and in fact, it does:

lockfile-create /tmp/my.lock
...
lockfile-remove /tmp/my.lock
Nice. In particular, it allows simple experiments with lock files across remote file systems.
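
For completeness, flock(1) from util-linux is another common way to get atomic locking from a shell script - a sketch, with the caveat that, unlike the lockfile-progs approach, flock locks are generally not reliable across remote file systems:

(
  flock -n 9 || exit 1   # take the lock atomically; give up if it is already held
  # ... critical section: the actual automounter work goes here ...
) 9>/tmp/my.lock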

In my school, there were a lot of projects trying to tell pupils how to learn successfully. Some of them were useful in special settings, but most of them were simply ignored by most students, like me.

One example is flashcard-based learning of vocabulary. I always hated learning vocabulary, because it is just plain memorizing; there is no understanding behind it. Additionally, having to maintain a bunch of cards never really helped - actually, I found it even more annoying to learn with them. At that time, however, computer-aided vocabulary trainers were not yet widespread; maybe a flashcard-based learning strategy is more helpful with a computer.

Even worse was everything regarding the social sciences, history for example. I would have been glad if it had just been about learning dates and names. But additionally, it was about reading and "understanding" the stupid opinions of posers and explaining one's own opinion (or at least the opinion the teacher wanted you to have) about them.

While I never really had problems in science, I saw several people struggling because they simply could not see how science works. In everyday life, there is no need for proper definitions or watertight reasoning the way there is when doing science. You toss around your own opinions and those of others, but you have no need to back them up empirically or logically - and learning to do this would probably be better than knowing the periodic table by heart.

So, well, I have done school, and I know others who have. I know people who were quite good, and I know people who were quite bad. I guess this gives me at least some standing to write a blog post about successful learning. However, I can only give suggestions; everybody must find his own way. Probably the first thing to know: if you have found your own learning strategy, then keep it, even if others tell you it is bad.

The main shortcoming of most recommended learning strategies I have seen is that they address exactly one thing very well, namely the fact that if you practice things often enough, you will remember them. I cannot remember any learning strategy that was not focused on efficiently memorizing material. They give you a way of learning certain kinds of material fast. But they miss one crucial point: personal interest.

No matter how hard I try, it is hard to learn something I do not really want to know. For some people, getting good grades is enough motivation to be willing to learn the material. And in my experience, the people with a better average grade were usually the people who did not really have any deeper interest in the actual subject beyond getting good grades. Just to make that clear, I am talking about the better people, not about the best people. And I am not talking about "nerdy" pupils who do not have any hobbies except learning for school; I am only talking about their interest in school subjects.

So before you even try to make your learning more "effective", ask yourself whether you really want to be good at that particular subject. Ask yourself why you study for that subject. It can be hard to be honest with oneself, so take your time thinking about it.

Maybe you have a deeper interest in the subject - which does not mean that every one of your grades needs to be perfect. I had a deeper interest in Mathematics; still, my grades were not always perfect. School can also underchallenge you, and you should never forget that every school subject can be split into many more subtopics, and even if you like a subject in general, there may be parts of it that you do not like (like stochastics for me).

Still, if you have a deeper interest, then you should be able to learn the subject quickly, and therefore you should do so. And you should not stop at the level of the current school material: if school is too slow for you in a certain subject, try to get further information on it - books for the next year, for example, or even university books, if you are good enough. Almost every subject has its own competitions in which pupils can take part. Even if you are not good enough to win a prize, participate, just to show other people that you are interested!

If a teacher notices that you are becoming good at something, then depending on how much of an asshole he is, he will either support you - then you should take this support and be thankful - or he will try to steer you away from it - then you should ignore him. Do not try to argue with teachers; you can only lose, and if you lose, you will probably lose more than just a discussion. Life is hard and cannot be lived without some setbacks, and if you cannot manage to achieve all your goals, do not expect teachers to be able to console you. Some teachers may be able to, but some will also be glad about your failures, especially when they tried to keep you away from something before. Expect this. Teachers are just human, in the end. The older you get, the thinner the facade of wisdom they want their pupils to see becomes. And you are a factor that creates additional work. Keep that in mind.

Now, let us assume you have such a subject. Then, if you do not suck completely at everything else, you will probably not be the best of all pupils, but your grades will be sufficient for most things you want. There is no need to become good at everything.

In case you do not have any subject of interest, but you still do not feel successful enough, you should wonder why you go to school at all. Make up your mind about what you want to achieve. If it requires finishing the school you go to, then you should be able to find at least something interesting, because that usually means that school should teach at least some skill that is needed. If you do not think so, then you are probably missing something about your plans.

If you do not have any plans at all, then it is still better to finish school, to have a broader range of possibilities later. Then you will just have to learn to ignore your boredom while learning. This is probably the hardest part, because the usual learning strategies cannot handle this situation; they all assume that you actually want to learn the material rather than being forced to.

They usually tell you not to have things around that may distract you from learning. But presumably you are familiar with the situation that even though you try to keep distracting things away, you will find everything distracting if you do not really want to learn. The solution is simple: have something that distracts you while learning. Do something else while looking at the material. This is quite the opposite of what almost every learning strategy will tell you, and in fact, you will not even learn half as much as you could when just concentrating on the material. So if you can manage to concentrate on it, try to do so; but since concentrating on the material is not always possible, having something distracting is basically the best way to find a balance between your boredom and your duty. Your grades will probably not be the best, but they should always be sufficient.

Trying to force an interest in a topic will fail. You can try to find something interesting in a topic, but if you cannot find anything of interest, you just have to accept that. Man can do what he wants but he cannot want what he wants, as Schopenhauer said.

And - even more importantly - never rely on teachers to make something more interesting to you. There are probably some teachers who are able to achieve this. But mostly, either the teacher is pretentious and therefore unwilling to explain why his or her subject is interesting - and might hate you from the day you asked - or he is not even interested himself. I coached pupils while I was still at school, and I have marked tests of teacher trainees at the university - I can say that at least in Germany, the latter case definitely occurs. Very often.

You may have noticed that I have painted a rather bad picture of teachers. Teachers are, as I said, human. Teachers are the people who give you instructions - some of them enjoy their job, some of them hate it. In any case, they are your instructors, not your friends.

You may like them on a personal level, as soon as you get old enough for them to let you see more of their personality. I have talked to teachers outside school, and some of them had quite interesting personalities. With a few of them I even wondered how such a friendly person in real life could be such a dick during lessons.

But that is how it is: They might be enjoyable in real life, but at school, which is clearly not real life, they are your instructors, not your friends. They have work to do, and you are part of the material they work with, and if you are difficult to handle, that is more work for them.

I hate firewalls, but I have no choice, with gigabytes of spam traffic. Through a mistake of mine, I probably locked out a lot of IP addresses that should not have been locked out. I am sorry for that.

If you notice that I locked somebody out, please let me know.

There is apparently no simple way to find out whether a given IP address is blocked, so I cannot easily filter my log files. On top of that, the default whois answer gives an IP range, but iptables wants CIDR notation.

I could not find any software that calculates this (if somebody knows a good tool, please tell me). What I quickly wrote, in a file range2cidr.c, is:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <netdb.h>
#include <math.h>

int main (int argc, char ** argv) {
  if (argc != 3) {
    printf("Usage: %s lowerbound upperbound\n", argv[0]);
    exit(EXIT_FAILURE);
  } else {
    uint32_t lowip, highip;
    struct hostent *host;

    /* resolve the lower bound and pack its four bytes into a host-order
       uint32_t; the casts to uint32_t avoid shifting a signed int */
    host = gethostbyname(argv[1]);
    if (host == NULL) {
      fprintf(stderr, "cannot resolve %s\n", argv[1]);
      exit(EXIT_FAILURE);
    }
    lowip =
      (((uint32_t) (uint8_t) host->h_addr[0]) << 24) +
      (((uint32_t) (uint8_t) host->h_addr[1]) << 16) +
      (((uint32_t) (uint8_t) host->h_addr[2]) << 8) +
      ((uint32_t) (uint8_t) host->h_addr[3]);

    /* same for the upper bound */
    host = gethostbyname(argv[2]);
    if (host == NULL) {
      fprintf(stderr, "cannot resolve %s\n", argv[2]);
      exit(EXIT_FAILURE);
    }
    highip =
      (((uint32_t) (uint8_t) host->h_addr[0]) << 24) +
      (((uint32_t) (uint8_t) host->h_addr[1]) << 16) +
      (((uint32_t) (uint8_t) host->h_addr[2]) << 8) +
      ((uint32_t) (uint8_t) host->h_addr[3]);

    /* the number of bits in which the two bounds differ is the number of
       host bits; 32 minus that is the prefix length */
    uint32_t msk = lowip ^ highip;

    int i = 0;
    while (msk != 0) {
      msk /= 2;
      i++;
    }

    printf("%s/%d\n", argv[1], 32 - i);

    exit(EXIT_SUCCESS);
  }
}

You might wonder why I calculated the IPs at such a high level. Well, I just did not want to care about all the low-level fuss and still have it portable - I mean, this code does not need to be fast, it just needs to be correct.
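
For illustration, a hypothetical run (using documentation addresses from 192.0.2.0/24, which is reserved for examples, rather than a real whois range) could look like this:

$ gcc -o range2cidr range2cidr.c
$ ./range2cidr 192.0.2.0 192.0.2.255
192.0.2.0/24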

Anyway, there has got to be better software. Any suggestions?

Yet another person (cl)aims to reverse-engineer yet another proprietary protocol:

This time it is Skype.

Especially after Microsoft bought Skype, this is - generally speaking - good news. Generally speaking. The dominance of Skype could have easily been prevented, in my opinion, if a few more people had been interested in good - GOOD! - free VoIP solutions.

There is Ekiga. There is Twinkle. There is Jingle. They all use SIP-like protocols, which is the first fail, a fail that comes from the arrogance of software programmers. While the programmer has no problem with router reconfiguration but does have a problem with the quarter of a second of additional latency he would have to accept when allowing TCP connections, the usual user will accept a little higher latency as long as he does not have to reconfigure everything.

Meanwhile, with the rise of Skype, most NATs work well with STUN and several other p2p handshake protocols. This was not always the case.

Now put yourself in the user's perspective: you have your Windows box, which may still have NetMeeting, but in a very hidden place, and only working sometimes - with rather complicated configuration (direct IP addresses, etc.). And then you have Skype, which just works virtually everywhere - without further configuration.

This is not a coincidence. This is not just marketing. Skype was just better. Because Skype wanted to actually spread, while the awful pathetic hackeries of the free software community appeared to be mainly experiments to communicate with other hackers, after negotiating the actual connection parameters via phone.

And now, everyone uses Skype, except a few paranoid people who fear that it might spy on their porn data.

This seems to be a common disease of the free software and open source community. There are at least three (!!!) reimplementations of what could have become a Flash player, but not a single free implementation of a usable Java applet plugin or SVG player. Reverse engineering must be great fun.

Hopefully, there will be more success with Skype.

A twenty-year-old accidentally leaves his brain at home, ignores warnings that should be known to any playing child fifteen years younger, and shoots at street signs in public with an airsoft pistol. The projectiles are hard little plastic pellets. A hit hurts, and a hit in the eye can presumably be nasty.

And if you use a bit too much skin cream, you can also end up with wounds on your neck, as apparently happened to a nineteen-year-old woman - the injury part, that is; the skin cream part is one possible explanation, maybe she was ill in some other way, I have no idea. In any case, the airsoft pistols I know of do not injure a healthy person - they hurt and cause bruises, which is reason enough not to shoot at living beings with them and ideally not to own such a thing at all (especially since it is somehow boring anyway).

At least citizens can be punished for that much stupid recklessness, and so the twenty-year-old has his airsoft pistol confiscated for now, and the woman filed charges against him. For dangerous bodily harm - it is, after all, a wound on the neck!!!1! Imagine what would have happened if he had hit a fingernail!

The police, meanwhile, said according to the report above that they want to examine further whether this weapon counts as a toy. What is that supposed to mean? I also remember that airsoft guns, at least in the past, could be bought from the age of fourteen - not that I was ever seriously interested in those things, at any rate not more seriously than in lasers or high-vacuum diodes. And as far as I remember, I have been hurt worse by pencils launched with rubber bands than anything I have even heard of happening with airsoft guns. In the wrong hands, everything is dangerous.

That he has to answer for bodily harm, and for shooting such things in public at all, is clear and correct, but I wonder to what extent the police have the authority to decide here - if the gun was acquired legally, surely they cannot prosecute the guy for mere possession?

"Eltern sollten darauf achten, ihren Kindern 'sinnvollere Geschenke' zukommen zu lassen.", so lautet am Ende jedenfalls der Schlusssatz des Artikels.

One can really only agree with that, and without doubt a twenty-year-old depends on his parents when purchasing a toy pistol, and they should of course deny it to him. Today it is an airsoft pistol, tomorrow a PlayStation with dangerous killer games.

There really are more sensible pastimes! For example, information events of the Bundeswehr. And anyway, children famously get too little exercise; a sports club might be just the thing - how about, say, a year's membership in the local shooting club?

The SZ has an article on net neutrality, and surprisingly it identifies the core of the problem:

"Für einen Teil der digitalen Generation ist das Internet längst der Ort, an dem er nahezu seine gesamte soziale Existenz entfaltet. Vergleiche mit Verkehrsinfrastrukturen, wie sie etwa Kruse zieht, mit anderen Kanälen für Kommunikation oder mit wirtschaftlichen Gütern greifen aus Sicht dieser wachsenden Gruppe viel zu kurz, weil sie eben nur einen Teilaspekt der Rolle erfassen, die das Internet in ihrem Leben einnimmt."

The providers whine that some "power users" are consuming their precious traffic. In my opinion, the providers are dishonest and want to take on a role that is not theirs. All I want from my provider is that it gives me a connection to the Internet. Raw traffic. I want a line that leads to a router that reaches all IP addresses. They should charge whatever it actually costs.

But that is exactly where the problem lies: how much does it actually cost? Why do the providers never come clean about that? Why does nobody ask them? How much effort can it really take to transport a packet?

Not much, I think. They have to set up and maintain infrastructure. For that they need skilled staff, servers, and electricity. The costs should be largely constant, and once the stuff is in place it hardly matters how many packets go through it - there is a maximum number of packets that can go through, of course, and more packets also cost more electricity and more maintenance, but hardly to a significant degree.

So, now I, as a customer, buy a share of their capacity. Including a maintenance guarantee. The maintenance guarantee is usually bought in the form of a monthly base fee, and I am willing to pay that. But the traffic itself does not cost them anything extra per month - I pay for a certain speed, which takes up a share of their infrastructure. I share that infrastructure with other people who also pay their share.

If I pay for DSL 1000, I pay for a share of something that exists, and I am therefore allowed to use it. If somebody now comes along and tells me that DSL 1000 only means a maximum speed, and that in reality some statisticians compute clever statistics which assume that hardly any customer really makes full use of it, so that less infrastructure suffices, then that is their mistake, not mine, if I assume that I can actually make full use of my DSL 1000. If that is not the case, they have to state it - which is in fact sometimes done.

My opinion on this is settled: anything else is dishonest!

That their statistics are not necessarily sensible is already evident from the fact that, in a rural area where I can only get DSL 1000 (and pay more than one pays for fifty times that speed in parts of Munich), I have repeatedly received advertising for DSL 2000 and higher.

Now some smart alecks want to convince me that this is normal. Providers have to employ statisticians, and those simply assume typical households that use their Internet connection for half an hour every other week. Evil web services like YouTube or Ekiga (or Skype) are now changing this; the statistics no longer work out, because people have hit upon the idea - apparently completely absurd to providers - of actually using the volume that is advertised. Is that my problem now? Evidently these statistics capture neither my needs nor the needs of many others.

May a provider now decide which traffic to me is given priority, and thereby effectively intrude on my privacy, because some statisticians miscalculated? I see no justification.

Some may object that it is the providers' own business what they do with their infrastructure. For one thing, however, the providers have a clear dominance here, especially over the "last mile", which justifies interfering with their freedom for the sake of society. For another, I would reply that I wonder whether, if the providers enjoy personal freedom with their (socially important) infrastructure, I am also allowed to call for a boycott of them when they go too far. May I publicly say "provider X is doing ..., boycott it!", may I hand out leaflets (given that the Internet providers could block my call online), and do I remain unpunished if people then actually do it? I hardly think so.

For comparison: I would be allowed to demand the resignation of any politician and call on people to withhold their votes.

One must not underestimate the power of providers here: their infrastructure matters in a great many respects. Their work on improving the infrastructure is indispensable at this point, but their dominance is worrying.

So I see it quite clearly: there must be net neutrality guaranteed by law. For all networks. Without exception.

And I doubt that this can harm the providers in any way. I rather believe that the providers see a source of money here, in being able to blackmail small (and large) companies. And once they have started with that, there is no going back - even now I do not really see any competition anymore; how is competition supposed to form in that brew when large providers are co-financed by large web companies while small companies wait years for permits to lay cables? Because that is exactly what will probably happen.

All software sucks. Somehow. Or at least most of it. I was wondering why. Well, I do not have that much experience yet, but of course, I can still think about it. I would like to point out a few of the views I have formed with my limited experience; feel free to comment with corrections if I am wrong.

Usually, I think that software complexity must be justified: software that does much may be more complicated than software that does essentially nothing. Therefore, I think that SSH is good software, as it is versatile and its complexity is comparatively low; X11 is neutral, as it is very complex but at least can do a lot of things; while Nepomuk is bad, in my opinion, since I see its complexity, but I do not see why it is useful at all, except for a bit of metadata management for files (and as a buzzword-throwing machine).

So in theory, everybody could write software that is only as complicated as it needs to be - whatever "needs" means in this case.

How complicated does a piece of software need to be - a question quite a lot of people have argued about, and for which the worse-is-better philosophy may be an answer; unfortunately, it turns out to be the prevailing philosophy for most programmers in the end.

For free programmers, it is a natural principle: free software usually comes from companies that do not expect to earn much money with its development anymore and therefore release it to the public, or from programmers who want to solve a certain problem. And these problems are mostly incidental, without any deeper meaning for the rest of the world. Usually, the goal is not to make something other projects can rely on, but to make something that works as fast as possible for the moment - or sometimes just because somebody wants to show off his hacker skills.

One example which makes that clear is the plugin situation of Firefox: there are Gnash and Swfdec, trying to become an alternative to Flash. Both Gnash and Swfdec can play YouTube videos very well. In no way can they replace the real Flash player, but at least for the special purpose of watching YouTube videos, they can - but who cares; if you do not want Flash, just use youtube-dl to watch them. On the other hand, I do not know of a single free implementation of a Java applet plugin: there is one that comes with GCJ, but since nobody in GCJ cared about security, besides crashing often, there is no security concept behind this applet plugin. And even worse, the plugins for mplayer and vlc and xine are unusable, which is why I mostly do not install them at all. There is a lack of interest in developing these plugins.

But what was said to become the next-generation replacement of Flash, namely SVG, will also never spread, because there has basically never been reasonable support. And with WebGL being deactivated by default even in Firefox on Linux, the dominance of Flash will remain for a looooong time, I think.

Another example I am feeling right now is the remote desktop solution NX. Actually, from the graphical perspective, RDP and VNC and even X11 are good enough for virtually everything that can be done with NX. The notable part about NX is the integrated sound and Samba forwarding, built into the NX Client, which also runs properly under Windows. This is, in my opinion, the main advantage of NX. But the free implementations NeatX and FreeNX somehow lack this support; FreeNX supports it in theory, but it is impossible to configure if you need something non-standard.

Well, most of the existing software seems to have this problem. But of course, there are exceptions. Sometimes people see larger problems and are willing to try to solve them - which often leads to a worse problem, namely hundreds of reimplementations of the same problematic piece of software; but seldom does a real solution evolve. Why is that?

Again, let me give you an example. I have been writing jump-and-run games for 6 years now, the most recent incarnation being Uxul World (which is likely to get finished this year, if some other things do not fail). Actually, I finished some smaller games, but I never released them, except to some friends. One example was a maze game written in C++, in which simple mazes could be generated using text files. Why did I not release it to the public?

Firstly, it is written in C++ - I do not want people to think that I usually write code in C++. Secondly, it was too small and lacked features: when I showed it to some friends, they all liked it, but they all had suggestions on how to make it better, and unfortunately, these suggestions were vague and some of them were mutually exclusive: one person wanted to make some shooter out of it, like Cyberdogs, another person wanted to add more structural features like doors, switches and teleporters, another person wanted me to make it 3D and use OpenGL instead of SDL (which I was using at that time). Thirdly, a computer scientist who "reviewed" my code on request (at that time I was still mostly using Java, and new to C++) commented on my collision engine that it was way too complicated and "can probably be used to shoot satellites into space", meaning that my code was hard to understand because it was more accurate than code of that kind usually is.

I simply did not want to write that kind of code: I do not like the concept of worse-is-better in software I actually want to release. But then again, you see people writing a "good" game in half a year, and since you do not cooperate with all of those "experts" telling you to use a pre-assembled library for that, you will not get support at all. And it goes this way for other kinds of software, too - mostly there are either solutions for your problem that other people consider "sufficient" (while you do not), or they do not understand why anybody would want whatever you want to create. So in fact, people are forced to make their software "worse" or impose a lot of additional work on themselves.

Unfortunately, while there are at least some free projects claiming to be "better" than "worse", for commercial programming this principle can never be economically advantageous, at least according to what I have heard from people working in the software industry. Software must be cheap to create, and the cheapest way still seems to be hacking around until the software works - which is what extreme programming essentially is about (except that one usually uses more buzzwords to describe it). Hack it, test it, release it, and hack again.

Especially in the commercial world, there is no point in taking too much care of the backends of programs, as long as the frontends fit the users; making software more complicated ensures that people who once used it will depend on it: if you keep them dependent on your old software long enough, they will use newer software from you, too, on which they will depend later. Backward compatibility is not that expensive, as The Old New Thing points out in many of its posts.

Ok, it is no secret that the commercial world is absurd in many ways. But in the scientific world, too, worse-is-better is a reasonable way of programming. Scientists also have some pressure, at least from bibliometrics. And in science, too, you do not always rewrite everything, but search for "microsolutions" using Google & co. to come to your solution faster. And on top of that, science is often interested in proof-of-concept implementations rather than production environments.

In any of the three cases, the programmer does a trade: by increasing the complexity of his software, he achieves his goal earlier, and the software spreads faster. And software can get extremely complicated. Take Windows as an example. Or Linux. Or Firefox. Or Mediawiki. Or X11. Projects with a long history. Projects which have grown extremely complicated. Active projects which "work". That is an argument I have heard so often now: implying that something is "good" just because "it works". Using a telephone to mow your lawn will work if you put enough effort into it. Using a toaster to dry your hair will, too (I actually tried, but I would not recommend it). You can make virtually everything "work" if you put in enough effort. The reason why your Windows desktop is so shiny and simple, the reason why your Debian vserver has almost no downtime, the reason why your Mac OS X recognizes your iPad so well, is not that the software is essentially "good"; it is that a lot of people are working hard to make it "work" for you.

The implication from "working" to "good" is often related to something I call "pragma ideology". Often, pragmatism and ideology contradict each other. It sounds obvious that the only criterion by which one should choose software is whether it serves its purpose best, and therefore this "pragmatic view" is chosen as a new ideology, an ideology that ideologically rejects every form of ideology.

Instances of such ideology often reject Lisp and garbage collection in general, while PHP, Perl and Python are appreciated since there is so much software written in them. Innovative ideas are seldom appreciated, since new ideas tend not to work out immediately. With this ideology, no real development is possible, and quite a lot of the stuff we have today would never have been possible. The "web" was a very bad idea in the past. Wikipedia was "condemned to disinterest" at a time when there was no article about navel lint. A professor once told me that even such a basic thing as a graphical user interface was seen more as a science fiction anecdote than a real work environment in the beginning.

But pragma ideologists do not see this. They see what there is "now", and what is used by them and "works" according to their imagination of what "working" actually means. I always find it interesting to see two pragma ideologists with different opinions talk to each other. Since you cannot be a pragma ideologist without a bit of arrogance, of course, each of them thinks that the other's software is crappy, and that he can "prove" this by his "experience". Well, my experience tells me that the really experienced people are generally open to new ideas, but very sceptical about them. Experienced people can usually tell at least two anecdotes about every new development, one demonstrating their openness, and one demonstrating their scepticism. Thus, in my experience, pragma ideologists are usually not experienced.

Of course, when having to make and keep a pool of computers or a rack of servers working, a little bit of pragma ideology is necessary to keep the system consistent. And the same holds for larger software projects. But there must be a balance, and experienced people know this balance. They know when not to block new ideas.

But they usually also know when to do so. Because while pragma ideology is - in my opinion - one cause of very bad software, replacing old software with new too quickly is - in my opinion - another. I see two major reasons for throwing perfectly working software away.

One reason is the rise of new "better" standards that everybody wants to support.

Imagine you want a simple and free replacement for the proprietary ICQ. Well, having a buddy list and chatting with single or multiple people works pretty well with IRC. So you could adapt IRC for that purpose: it has worked well since 1993, but it has one major problem: it does not use XML. Thus, XMPP had to be invented, with a lot of "extensions" almost nobody uses. Who uses Jingle? Who uses file transfers in any way beyond what was already possible with IRC-DCC?

Imagine you want a language with a large library that is simple to learn, has a mighty object system, comes with an intermediate bytecode compiler to make the commercefags happy not to have to open their source, and is available on virtually every platform. You could just take a Common Lisp implementation like Clisp, extend it with a JIT compiler for its bytecode, extend it with a bit of UI pr0n, deploy it and make everyone happy. But why would you do that, if you can just create a new bytecode with a new interpreter and a programming language based on C++, keeping enough C++ to confuse people not familiar with it while taking enough C++ away to anger C++ lovers.

Imagine you want a file transfer protocol supporting file locks and meta information. You could extend FTP by a few additional commands like LOCK, SETXATTR and GETXATTR. But you could also put a huge overengineered bunch of XML meta information on top of an HTTP substandard, extend it by a few new methods, and then give it a fancy meaningless name.

Another reason for throwing away working pieces of software is the NIH syndrome. The recent discussion about Unity vs. GNOME in Ubuntu seems like an instance of this to me. But Flash also seems to be an instance - it used to be based on Java, but now has its own virtual machine. Also, the only reason why BTRFS is still being developed seems to me to be an instance of the NIH syndrome.

In fact, it is not always possible or useful to use old software or old standards and base new stuff on them. In the end, many systems evolved by just adding small pieces of features, and after they have grown complex, it may sometimes be better to abandon them and create something new, building on the experience gained with the old system. It would be nice if that could finally happen to X11 - it is time for X12! It would be nice if that could finally happen to the whole bunch of "web standards" (javascript, XML, XHTML, SVG, jpeg, etc.). But still, that means not just creating a new system that is as crappy as the old one, but creating a new one with the experience of the old one.

Most of this holds for scientists as well as pragmatists - I do not think that, for example, some sort of pragma ideology cannot also be found in a scientific setting. So these points are similar for both classes of software producers. But while they of course have a lot of things in common, there is a reason why I think it is necessary to choose whether one is a computer scientist or a programmer. It is not that one person cannot be both, and I do not want to imply here that one is worse than the other. It is just that sometimes I get the impression that some people cannot decide which of them they are, and sometimes even in larger projects there might be this problem, because some of the programmers are programmers, some are scientists, and some do not know which of the two they are. Well, that is at least what I see, and how I explain some flaws in several pieces of software I have seen.

For example, take a look at the object system of C++. Compared to Smalltalk and Common Lisp, even Java, it is ludicrous. And since it is so ludicrous, as far as I can see from history (well, it was before my time), nobody really used most of the mechanisms it took from other object systems, and nowadays, object-oriented programming mainly means putting a few methods (which are mostly just plain functions) into their own namespace - so suddenly, the namespace has become the important part, and thus, some people get confused about what Common Lisp considers a "class".

Looking at Linux device files in the /dev directory, one notices that block devices and character devices can usually be accessed by the default libc functions, as if they were files. So whatever /dev contains is an abstraction away from the hardware, which is sufficient for most purposes, but of course not for all purposes. Now one might expect that, for example, NFS or Samba would be able to export a device file as well. And in fact, they do, but they do not export it as the file it appears to be on the actual computer - they export it as an actual device file - which means that it gets a major and minor number, as all device nodes do, and it then becomes an actual device pointing to the client. That is because in the end, the filesystem is nothing but a namespace, and of course, there might be reasons not to export whole disks via NFS (and there are other solutions for doing that), and there might be reasons to export device nodes pointing to client devices rather than devices on the NFS server. But in my opinion, the latter is the more low-level way, and should therefore not be the default. This is because I consider myself a "scientist" rather than a "programmer" (while actually I am neither of the two yet). The programmer would say "it does what its specification says, and there are alternatives that can achieve what you want if you really want to do so". The scientist wants an axiomatically reasonable piece of software with no "surprises".

Another thing I hear very often is a mixup of software certification vs. software verification vs. whatever else. There is a process called software verification in which you usually run your software through a lot of test cases, as you would do with hardware. This is reasonable, as long as you think about your code before you test it, and not just after some tests have failed. Then there is formal software verification, something that should be done whenever possible (and is almost never done). And then there is certification - which means, as far as I have seen, that some company looks at the software and gives its OK. These are three concepts that are essentially different approaches to similar problems, and there seems to be a lot of confusion about which one does what.

Formal verification is still not used widely enough, I think, which may be caused by the fact that non-scientists usually cannot imagine what "formal verification" is. If you have a specification of what a certain piece of software should do, and you have a piece of software that really does this, then this is provable! There is no exception. I have heard so many opinions on that topic, but this is not a matter of opinion, it is a fact, as long as you accept that the computer is a machine that works in a deterministic way, and of course as long as you assume that the hardware complies with its specifications - if you do not, then you have no way of creating software that complies with any specification anyway! Modern computers may react to temperature changes, brightness and the current gravitation vector, which are non-deterministic, but still, your computer reacts in a deterministic way to their inputs! If you cannot tell how your software reacts to them, your software is crap, and as soon as you can, you can prove its behaviour. Again, this is not a matter of opinion, this is a fact. There is currently no widely used verified operating system and therefore no way of using an actual formal proof checker to check whether your reasoning about your software is correct, but formal verification can just as well be done on paper: can you print out your code and reason about its correctness with pen and paper? If you cannot, then you probably do not know how and why your software works; it is as simple as that.

But formal verification will not solve all problems either, even though some theorists think so. Even if the hardware does what its specification says, correctness of your program may not be enough, since correctness just says that your software does what you specified. With formal reasoning, you can eliminate common bugs you know of, by specifying that they do not occur and then proving this specification. This has the major advantage that common bugs will probably only occur finitely often, until every piece of software extends its specification accordingly. But there is still the problem of whether the specification is really what you wanted to have. For example, some XSS exploits did not do anything outside the common standards; they would have worked in a perfectly verified browser, since they mainly exploited the fact that in the past, JavaScript was not used in the way it is now. XSS exploits are a major problem, and there is no real formal way to solve them inside the browser: the browser's entelechy is to run the scripts given by websites, so formally, the various web interfaces themselves would have to be verified - which is neither realistic, nor does it solve the general problem: not all bugs are bugs outside the specification. In addition to that, there is software for OCR or handwriting or other pattern recognition which basically cannot be verified to work correctly from the user's perspective. Thus, testing and informal verification will always be necessary.

Certification is just letting a company do the work, probably imposing the responsibility for problems on that company. This solves no problems a computer scientist should care about; it may solve problems for smaller companies that need some kind of insurance that their software will not make their machines burn or something.

Reliability is something very important to software users. Which brings me to the next point: sometimes it seems like the larger software companies are trying to keep their customers stupid. And in fact, I often see the attitude that "the computer knows best". They had better tell their customers the truth: the computer knows nothing! It is a complicated, sophisticated machine, but it is still a machine. Maybe one day there will be a strong artificial intelligence, but so far, there are only weak ones, and they may be useful, but they are not reliable!

There is so much software that uses non-optional heuristics. Copy-pasting on modern systems is an example where these heuristics can get annoying: you want to copy some text from a website into your chat client, and it uses some strange formatting that it sort of takes over from the piece of text you copied, while you actually wanted only the text. On the other hand, when you use your text editor and want the actual style information, you will only get the text. These are annoying heuristics - one could educate the user that there are two kinds of pasting, plain text and formatted text, and in fact, that is what Pidgin does: it has an option "paste as text".

Another example that has annoyed me more than once now is the XHTML autocorrection of WordPress, which cannot be turned off on the WordPress-hosted blogs - probably because they do not allow arbitrary content. If it would then at least just disallow any form of XHTML. But it does not; it runs your input through a heuristic that tries to guess whether you are writing HTML or plain text. It sometimes swallows backslashes and quotation marks. It is annoying!

Probably the most annoying thing, which at least can be turned off on most systems, is mouse gestures. I have not seen a single system where they worked for me - neither Mac, nor Linux, nor Windows. But I actually never got the point of them anyway - two clicks versus a stupid gesture ... what is the advantage?

The computer does not know best! It applies heuristics, and heuristics may fail. That is why for unimportant things I accept heuristics, but when it comes to the computer having to decide whether I want to delete a file or open it, this goes too far. In general, I do not like software that is written by programmers who think that nobody wants anything they did not think of.

LaTeX is a good example of such a piece of software. I have tried a few times to get deeper into the mechanisms of LaTeX, as there do not seem to be many people doing that. Well, the more I knew, the less I wanted to know. And on top of that, there is no real community to ask when you do not understand something. As long as you have simple questions like how to get something into the center of the page or how to change the font to the size you need, there are a lot of "experts", but as soon as you want to understand the LaTeX software itself, there is almost nobody who knows anything. Why should you know something somebody else has already done for you? There is nothing you could want that LaTeX cannot do. And if there were, then you do not really want it: you either do not know about typesetting rules, or you just want something that nobody is supposed to want and you should rethink it.

Everything that is hardcoded is law! This does not only hold for LaTeX. Hardcoding stuff like library paths or paths of executables is very common, especially in commercial software, but also in kernel modules that require firmware files. With libsmbclient, a library to access SMB shares, you cannot connect to SMB shares on a non-standard port; the port is hardcoded. It is hardcoded under Windows, too - well, not quite hardcoded, at least there is one central setting in the registry. Windows XP supports, besides SMB shares, WebDAV shares. WebDAV is based on HTTP, and quite a lot of secondary HTTP servers run on a port different from 80, often 8000 or 8080. At least the last time I tried, Windows did not support any port other than 80. Hardcoding stuff that should be configurable is a very annoying problem that unfortunately occurs very often.

Ok, I have named a lot of problems I see. I will also name some solutions I see.

One major solution to a lot of problems would be formal program verification. Formal verification can be done on paper as well - there is no excuse for not doing it just because there is no widely used proof checker out there. You do not need to be a mathematician to do that (though having done simple mathematical proofs may be a good start), and you do not need to give a proof in Hoare logic. Most mathematical proofs are given in natural language, too. Just try to give a formal argument that can be challenged!

Then, when you write software, you should always ask yourself whether you are about to create a new standard for something that already has one. If there is an existing standard, can you just extend it, instead of making something completely new? If you cannot, can you make your standard at least similar to the old one? If everybody tries to keep the number of formats to support small, especially not inventing new ones without a reason, then maybe programmers could focus on software quality rather than portability.

And probably the most important part would be the education of the users. Software is not like hardware: it is easy to change, replace, re-invent. So the user's decision is far more important. Users should be told that it is their job to command the computer, and not the other way around.