Yet another person (cl)aims to reverse-engineer yet another proprietary protocol:

This time it is Skype.

Especially now that Microsoft has bought Skype, this is - generally speaking - good news. Generally speaking. In my opinion, Skype's dominance could easily have been prevented if a few more people had been interested in good - GOOD! - free VoIP solutions.

There is Ekiga. There is Twinkle. There is Jingle. They all use SIP-like protocols, which is the first failure, one born of the arrogance of software programmers. The programmer has no problem reconfiguring his router, but does mind the extra quarter second of latency he would have to accept by allowing TCP connections; the average user, on the other hand, will accept slightly higher latency as long as he does not have to reconfigure everything.

Meanwhile, with the rise of Skype, most NATs handle STUN and several other p2p handshake protocols well. This was not always the case.

Now put yourself in the user's perspective: you have your Windows box, which may still have NetMeeting, but hidden away and only working sometimes - with rather complicated configuration (direct IP addresses, etc.). And then you have Skype, which just works virtually everywhere - without further configuration.

This is not a coincidence. This is not just marketing. Skype was simply better, because Skype actually wanted to spread, while the awful, pathetic hackery of the free software community appeared to consist mainly of experiments for communicating with other hackers, after negotiating the actual connection parameters by phone.

And now everyone uses Skype, except a few paranoid people who fear that it might spy on their porn data.

This seems to be a common disease of the free software and open source community. There are at least three (!!!) reimplementations of what could have become a Flash player, but not a single free implementation of a usable Java applet plugin or SVG player. Reverse engineering must be great fun.

Hopefully, there will be more success with Skype.

A twenty-year-old accidentally leaves his brain at home, ignores warnings that should be familiar to any child fifteen years his junior, and shoots at street signs in public with an airsoft pistol. It fires hard little plastic pellets. A hit hurts, and a hit in the eye can presumably be nasty.

And if you use a little too much skin cream, you can apparently also end up with wounds on your neck, as seems to have happened to a nineteen-year-old woman - the injury, that is; the skin cream is one possible explanation, or maybe she was ill in some other way, I have no idea. The airsoft pistols I know of do not injure a healthy person, in any case - they hurt and cause bruises, which is reason enough not to shoot at living beings with them and ideally not to own such a thing at all (especially since it is somehow boring anyway).

At least citizens can be punished for this much reckless stupidity, and so the twenty-year-old's airsoft pistol was confiscated for now, and the woman pressed charges against him. For grievous bodily harm - it is, after all, a wound on the neck!!!1! Imagine what might have happened if he had hit a fingernail!

The police, meanwhile, said according to the report above that they want to examine further whether this weapon counts as a toy. What is that supposed to mean? I also seem to remember that airsoft guns, at least in the past, could be bought from the age of fourteen - not that I was ever seriously interested in those things, at least no more seriously than in lasers or high-vacuum diodes. And as far as I remember, I have been injured more severely by pencils launched with rubber bands than anything I have ever even heard of from airsoft guns. In the wrong hands, everything is dangerous.

That he has to answer for bodily harm, and for shooting such things in public at all, is clear and right. But I wonder how much authority the police actually have here: if the gun was legally acquired, surely they cannot prosecute the guy for mere possession?

"Eltern sollten darauf achten, ihren Kindern 'sinnvollere Geschenke' zukommen zu lassen.", so lautet am Ende jedenfalls der Schlusssatz des Artikels.

One can only agree, and without doubt a twenty-year-old depends on his parents when purchasing a toy gun, and they should of course refuse him. Today it is an airsoft pistol, tomorrow a PlayStation with dangerous killer games.

There really are more sensible pastimes! For example, information events held by the Bundeswehr. And anyway, children famously get too little exercise; a sports club might be just the thing - how about, say, a one-year membership in the local shooting club?

The SZ has an article on net neutrality, and there, surprisingly, identifies the core of the problem:

"Für einen Teil der digitalen Generation ist das Internet längst der Ort, an dem er nahezu seine gesamte soziale Existenz entfaltet. Vergleiche mit Verkehrsinfrastrukturen, wie sie etwa Kruse zieht, mit anderen Kanälen für Kommunikation oder mit wirtschaftlichen Gütern greifen aus Sicht dieser wachsenden Gruppe viel zu kurz, weil sie eben nur einen Teilaspekt der Rolle erfassen, die das Internet in ihrem Leben einnimmt."

The providers whine that some "power users" are consuming their valuable traffic. In my opinion, the providers are being dishonest, and want to assume a role that is not theirs. All I want from my provider is a connection to the Internet. Raw traffic. I want a line leading to a router that can reach every IP address. They should charge whatever that actually costs.

But exactly here lies the problem: how much does it actually cost? Why do the providers never come out with it? Why does nobody ask them? How much effort can it take to transport a packet?

Not much, I think. They have to set up and maintain infrastructure. For that they need skilled staff, servers and electricity. These costs should be largely constant, and once the equipment is in place it hardly matters how many packets pass through it - there is a maximum number of packets it can carry, of course, and more packets do cost more electricity and more maintenance, but hardly to a significant degree.

So, as a customer, I buy a share of their capacity, including a maintenance guarantee. The maintenance guarantee is usually bought in the form of a monthly base fee, and I am willing to pay that. But the traffic itself costs them nothing extra per month - I pay for a certain speed, which claims a share of their infrastructure. I share that infrastructure with other people who likewise pay for their share.

If I pay for DSL 1000, I pay for a share of something that exists, and I am entitled to use it. If someone now comes along and tells me that DSL 1000 only means a maximum speed, and that in reality some statisticians compute clever statistics assuming that hardly any customer actually uses it fully, so that less infrastructure suffices - then it is their mistake, not mine, if I assume that I can really make full use of my DSL 1000. If that is not the case, they have to say so, which is, in part, actually done.
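The arithmetic behind this kind of overbooking is simple. A minimal sketch (the subscriber count, line speed and assumed utilisation below are made-up illustrative numbers, not real provider data):

```python
# Back-of-the-envelope oversubscription ("contention") calculation.
# All concrete numbers are illustrative assumptions.

def backbone_needed(subscribers, line_kbits, assumed_utilisation):
    """Capacity a provider provisions, given how much of the
    advertised speed customers are statistically assumed to use."""
    sold = subscribers * line_kbits       # capacity sold on paper
    return sold * assumed_utilisation     # capacity actually provisioned

sold = 1000 * 1024                        # 1000 customers on "DSL 1000"
planned = backbone_needed(1000, 1024, 0.05)  # assume 5% average utilisation
print(f"sold: {sold} kbit/s, provisioned: {planned:.0f} kbit/s")
print(f"contention ratio: {sold / planned:.0f}:1")
```

If customers start actually using what they paid for, a 20:1 contention ratio like this one collapses - which is exactly the situation described above.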

My opinion on this is firm: anything else is dishonest!

That their statistics are not necessarily sound is evident from the fact alone that, living in a rural area where I can only get DSL 1000 (and pay more than one pays for fifty times that speed in parts of Munich), I have repeatedly received advertisements for DSL 2000 and above.

Now some smart alecks want to tell me that this is normal. Providers have to employ statisticians, and those naturally assume typical households that use their Internet connection only once every two weeks for half an hour. Evil web services like YouTube or Ekiga (or Skype) are now changing that; the statistics no longer work out, because people have hit upon the - for providers apparently completely absurd - idea of actually using the capacity that is advertised. Is that my problem? Evidently these statistics capture neither my needs nor those of many others.

May a provider now decide which traffic to me gets priority, and thereby effectively intrude into my privacy, because some statisticians miscalculated? I see no justification.

Now some may reply that it is the providers' business what they do with their infrastructure. For one thing, however, the providers have a clear dominance here, particularly over the "last mile", which justifies restricting their freedom for the sake of society. For another, I wonder: if the providers enjoy personal freedom with their (socially important) infrastructure, may I likewise call for a boycott of them when they go too far? May I publicly say, "Provider X does ..., boycott them!", may I hand out flyers (given that the Internet providers could block my online call), and will I remain unpunished if people actually follow through? I hardly think so.

For comparison: I would be allowed to demand the resignation of any politician, and to call for withdrawing votes.

One must not underestimate the power of providers here: their infrastructure matters in a great many respects. Their work on improving that infrastructure is indispensable at present, but their dominance is worrying.

So my view is quite clear: there must be net neutrality enshrined in law. For all networks. Without exception.

And I doubt that this can harm the providers in any way. Rather, I believe that the providers see a source of money here, in being able to blackmail small (and large) companies. And once they have started doing that, there is no way back - even now I do not really see competition anymore; how is competition supposed to arise in that brew, when large providers are co-financed by large web companies while small companies wait years for permits to lay cable? Because that is exactly what will probably happen.

Panel 1: Title: "Wealthy". A room with a fireplace is shown. Above the fireplace is the head of an elk. Two glasses of wine standing on the floor next to a rug where two people are lying. Text: "Two glasses of wine to share, me and my girlfriend on a smooth rug, enjoying the romantic crackling and flickering of the fire in the furnance. All I need to be happy." -- Panel 2: Title: "Student". A small room is shown with a window, a working desk, and a desktop PC with an external hard drive. A person in a sleeping bag with a bottle in front of a small laptop is shown. Text: "A bottle of coke, a sleeping bag keeping me warm (the dorm's heating does not work today), my girlfriend (who studies 400km apart) in Skype, the romantic flickering and crackling of my hard disk from which my torrents are seeded. All I need to be happy."

This comic is fan-art for the "To Aru Majutsu no Index" anime. -- Panel 1: Touma and Index are standing in front of the Cologne Cathedral. Index: "There it is, the Cologne Cathedral." Touma: "It is impressive. Let us go inside." -- Panel 2-3: Touma touches its door, and the door begins to crack. Touma: "WTF?" Index: "Oh Shi..." -- Panel 4: The Cologne Cathedral breaks apart. -- Panel 5: A rage face in front of a background of many "Fukoda" symbols. ---- guvf vf n ersrerapr gb gur svefg rcvfbqr jurer vaqrk jrnef n zbovyr puhepu naq nf gbhzn gbhpurf vg vg vf qrfgeblrq

Panel 1: Caroline and Hannibal Lecter (with his characteristic muzzle) lying in each other's arms. Caroline: "Oh Hannibal. I feel so happy here with you. I wish I could stay here tonight." -- Panel 2: "Oh Caroline. I do feel the same. Just rest in my arms tonight!" -- Panel 3: Caroline sleeps on a lot of bloody chopped-off arms.

All software sucks. Somehow. Or at least most of it. I was wondering why. Well, I do not have that much experience yet, but of course I can still think about it. I would like to point out a few of the views I have formed with my limited experience; feel free to comment with corrections if I am wrong.

Usually, I think that software complexity must be justified: software that does a lot may be more complicated than software that does essentially nothing. By that measure, SSH is good software, as it is versatile while its complexity is comparably low; X11 is neutral, since it is very complex but at least can do a lot of things; and Nepomuk is bad, in my opinion, since I see its complexity but do not see what it is useful for at all, except for a bit of meta-data management for files (and as a buzzword-throwing machine).

So in theory, everybody could write software that is only as complicated as it needs to be - whatever "needs" means in this case.

How complicated does a piece of software need to be? Quite a lot of people have argued about this question, and the worse-is-better philosophy may be one answer; unfortunately, it turns out to be the paramount philosophy for most programmers in the end.

For free-software programmers, it is a natural principle: free software usually comes either from companies that no longer expect to earn much money from its development and therefore release it to the public, or from programmers who want to solve a particular problem. And these problems are mostly trivial, without any deeper meaning for the rest of the world. Usually the goal is not to build something other projects can rely on, but something that works as quickly as possible for the moment - or sometimes just to show off one's hacker skills.

One example that makes this clear is the plugin situation of Firefox: there are Gnash and Swfdec, trying to become an alternative to Flash. Both Gnash and Swfdec can play YouTube videos very well. They can in no way replace the real Flash player, but at least for the special purpose of watching YouTube videos they can - though who cares: if you do not want Flash, just use youtube-dl to watch them. On the other hand, I do not know of a single free implementation of a Java applet plugin: there is one bundled with GCJ, but since nobody in GCJ cared about security, besides crashing often, there is no security concept behind this applet plugin. And even worse, the plugins for mplayer and vlc and xine are unusable, which is why I mostly do not install them at all. There is a lack of interest in developing these plugins.

But SVG, which was said to become the next-generation replacement for Flash, will never spread either, because there has basically never been reasonable support for it. And with WebGL being deactivated by default even in Firefox on Linux, the dominance of Flash will remain for a looooong time, I think.

Another example I am feeling right now is the remote-desktop solution NX. Actually, from the graphical perspective, RDP and VNC and even plain X11 are good enough for virtually everything that can be done with NX. The notable part of NX is the integrated sound and Samba forwarding, built into the NX client, which also runs properly under Windows. That is, in my opinion, the main advantage of NX. But the free implementations NeatX and FreeNX somehow lack this support; FreeNX supports it in theory, but it is impossible to configure if you need anything non-standard.

Well, most existing software seems to have this problem. But of course there are exceptions. Sometimes people see the larger problems and are willing to try to solve them - which often leads to a worse problem, namely hundreds of reimplementations of the same problematic piece of software, while a real solution seldom evolves. Why is that?

Again, let me give you an example. I have been writing jump'n'run games for six years now, the most recent incarnation being Uxul World (which is likely to get finished this year, if a few other things do not fail). Actually, I finished some smaller games, but I never released them, except to some friends. One example was a maze game written in C++, in which simple mazes could be generated from text files. Why did I not release it to the public?

Firstly, it is written in C++ - I do not want people to think that I usually write code in C++. Secondly, it was too small and lacked features: when I showed it to some friends, they all liked it, but they all had suggestions on how to make it better, and unfortunately these suggestions were vague and some of them mutually exclusive: one person wanted to turn it into a shooter, like Cyberdogs; another wanted to add more structural features like doors, switches and teleporters; another wanted me to make it 3D and use OpenGL instead of SDL (which I was using at that time). Thirdly, a computer scientist who "reviewed" my code on request (at that time I was still mostly using Java, and new to C++) commented that my collision engine was way too complicated and "can probably be used to shoot satellites into space" - meaning that my code was hard to understand because it was more accurate than code of that kind usually is.

I simply did not want to write that kind of code: I do not like the concept of worse-is-better in software I actually want to release. But then again, you see people writing a "good" game in half a year, and since you do not cooperate with all those "experts" telling you to use a pre-assembled library for it, you get no support at all. And it goes this way for other kinds of software, too - mostly there are either solutions for your problem that other people consider "sufficient" (while you do not), or they do not understand why anybody would want whatever you want to create. So in fact, people are forced either to make their software "worse" or to impose a lot of additional work on themselves.

Unfortunately, while there are at least some free projects claiming to be "better" rather than "worse", for commercial programming this principle can never be economically advantageous, at least according to what I have heard from people working in the software industry. Software must be cheap to create, and the cheapest way still seems to be hacking around until the software works - which is essentially what extreme programming is about (except that one usually uses more buzzwords to describe it). Hack it, test it, release it, and hack again.

Especially in the commercial world, there is no point in taking too much care about the backends of programs as long as the frontends suit the users; making software more complicated ensures that people who once used it will depend on it: if you keep them dependent on your old software long enough, they will use your newer software too, on which they will depend later. Backward compatibility is not that expensive, as The Old New Thing points out in many of its posts.

OK, it is no secret that the commercial world is absurd in many ways. But in the scientific world, too, worse-is-better is a reasonable way of programming. Scientists also face pressure, bibliometrics at the least. And in science, too, you do not rewrite everything, but search for "microsolutions" using Google & co. to reach your own solution faster. Above that, science is often interested in proof-of-concept implementations rather than production environments.

In any of the three cases, the programmer makes a trade: by increasing the complexity of his software, he achieves his goal earlier, and the software spreads faster. And software can get extremely complicated. Take Windows as an example. Or Linux. Or Firefox. Or MediaWiki. Or X11. Projects with a long history. Projects which have grown extremely complicated. Active projects which "work". That is an argument I have heard so often now: implying that something is "good" just because "it works". Using a telephone to mow your lawn will work if you put enough effort into it. Using a toaster to dry your hair will, too (I actually tried, but I would not recommend it). You can make virtually anything "work" if you invest enough effort. The reason why your Windows desktop is so shiny and simple, why your Debian vserver has almost no downtime, why your Mac OS X recognizes your iPad so well, is not that the software is essentially "good" - it is that a lot of people are working hard to make it "work" for you.

The implication from "working" to "good" is often related to something I call "pragma ideology". Pragmatism and ideology often contradict each other. It sounds obvious that the only criterion by which one should choose software is whether it serves its purpose best, and so this "pragmatic view" is adopted as a new ideology - an ideology that ideologically rejects every form of ideology.

Adherents of this ideology often reject Lisp and garbage collection in general, while PHP, Perl and Python are appreciated because so much software is written in them. Innovative ideas are seldom appreciated, since new ideas tend not to work out immediately. With this ideology no real development is possible, and quite a lot of what we have today would never have been possible. The "web" was a very bad idea in the past. Wikipedia was "condemned to disinterest" at a time when there was no article about navel lint. A professor once told me that even such a basic thing as a graphical user interface was, in the beginning, seen more as a science-fiction anecdote than a real workspace.

But pragma ideologists do not see this. They see what exists "now", what they themselves use and what "works" according to their own notion of what "working" actually means. I always find it interesting to watch two pragma ideologists with different opinions talk to each other. Since you cannot be a pragma ideologist without a bit of arrogance, each of them is of course convinced that the other's software is crappy, and that he can "prove" this from his "experience". Well, my experience tells me that the really experienced people are generally open to new ideas, but very sceptical about them. Experienced people can usually tell at least two anecdotes about every new development, one demonstrating their openness and one their scepticism. Thus, in my experience, pragma ideologists are usually not experienced.

Of course, when you have to set up and keep a pool of computers or a rack of servers working, a little pragma ideology is necessary to keep the system coherent. The same holds for larger software projects. But there must be a balance, and experienced people know this balance. They know when not to block new ideas.

But they usually also know when to do so. Because while pragma ideology is - in my opinion - one cause of very bad software, replacing old software with new too quickly is another. I see two major reasons for throwing perfectly working software away.

One reason is the rise of new "better" standards that everybody wants to support.

Imagine you want a simple and free replacement for the proprietary ICQ. Well, having a buddy list and chatting with single or multiple people works pretty well with IRC. So you could adapt IRC for that purpose: it has worked well since 1993, but it has one major problem: it does not use XML. Thus XMPP had to be invented, with a lot of "extensions" that almost nobody uses. Who uses Jingle? Who uses file transfers in any way beyond what was already possible with IRC's DCC?

Imagine you want a language with a large library that is simple to learn, has a powerful object system and an intermediate bytecode compiler (to keep the commercefags happy that they do not have to open their source), and which is available on virtually every platform. You could just take a Common Lisp implementation like Clisp, extend it with a JIT compiler for its bytecode, add a bit of UI pr0n, deploy it, and make everyone happy. But why would you do that, when you can instead create a new bytecode with a new interpreter and a programming language based on C++ - keeping enough of C++ to confuse people unfamiliar with it, while taking enough of it away to anger C++ lovers.

Imagine you want a file transfer protocol supporting file locks and meta information. You could extend FTP with a few additional commands like LOCK, SETXATTR and GETXATTR. But you could also put a huge over-engineered bunch of XML meta information on top of an HTTP sub-standard, extend it with a few new methods, and then give it a fancy, meaningless name.
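Such verbs would slot naturally into FTP's existing line-based "VERB arguments" command model. A minimal server-side sketch - LOCK, SETXATTR and GETXATTR are the hypothetical extension commands from the paragraph above, not part of any real FTP standard:

```python
# Sketch of a dispatcher for hypothetical FTP extension verbs.
# State is kept in memory; a real server would tie it to the filesystem.

locks = set()          # paths currently locked
xattrs = {}            # (path, key) -> value

def handle(line):
    """Parse one 'VERB arguments' command line, return an FTP-style reply."""
    verb, _, args = line.strip().partition(" ")
    if verb == "LOCK":
        if args in locks:
            return "450 Already locked"
        locks.add(args)
        return "200 Locked"
    if verb == "SETXATTR":
        path, key, value = args.split(" ", 2)
        xattrs[(path, key)] = value
        return "200 Attribute set"
    if verb == "GETXATTR":
        path, key = args.split(" ", 1)
        return "213 " + xattrs.get((path, key), "")
    return "502 Command not implemented"

print(handle("LOCK /srv/report.txt"))                # 200 Locked
print(handle("SETXATTR /srv/report.txt author me"))  # 200 Attribute set
print(handle("GETXATTR /srv/report.txt author"))     # 213 me
```

The reply codes reuse FTP's numeric convention (2xx success, 4xx transient failure, 502 unimplemented), which is exactly why such an extension would stay small compared to an XML-over-HTTP stack.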

Another reason for throwing away working pieces of software is the NIH syndrome. The recent discussion about Unity vs. GNOME in Ubuntu looks like an instance of this to me. Flash seems to be an instance as well - it used to be based on Java, but now has its own virtual machine. And the continued development of BTRFS also looks to me like an instance of the NIH syndrome.

In fact, it is not always possible or useful to use old software or old standards and base new things on them. In the end, many systems evolved by the gradual addition of small features, and once they have grown complex, it may sometimes be better to abandon them and create something new, based on the experience gained from the old system. It would be nice if that could finally happen to X11 - it is time for X12! It would be nice if it could finally happen to the whole bunch of "web standards" (JavaScript, XML, XHTML, SVG, JPEG, etc.). But that means not just creating a new system that is as crappy as the old one; it means creating a new one informed by the experience of the old one.

Most of this holds for scientists as well as for pragmatists - I do not think that some sort of pragma ideology cannot also be found in a scientific setting, for example. So these points are similar for both classes of software producers. But while they have a lot in common, there is a reason why I think it is necessary to choose whether one is a computer scientist or a programmer. It is not that one person cannot be both, and I do not want to imply that one is worse than the other. It is just that I sometimes get the impression that some people cannot decide which of the two they are, and even larger projects sometimes suffer from this, because some of the contributors are programmers, some are scientists, and some do not know which of the two they are. Well, that is at least what I see, and how I explain some flaws in several pieces of software I have encountered.

For example, take a look at the object system of C++. Compared to Smalltalk and Common Lisp, or even Java, it is ludicrous. And since it is so ludicrous, as far as I can see from history (well, it was before my time), hardly anybody really used the mechanisms it borrowed from other object systems, and nowadays object-oriented programming mainly means putting a few methods (which are mostly just plain functions) into their own namespace - so suddenly the namespace has become the important part, and thus some people get confused about what Common Lisp considers a "class".
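The distinction can be made concrete. In CLOS, a "class" is just a type that generic functions dispatch on; methods do not live inside the class's namespace. Python's `functools.singledispatch` mimics this (for the first argument only) - the `Circle`/`Square`/`area` names below are purely illustrative:

```python
# Generic-function dispatch in the CLOS style: 'area' is one generic
# function with per-class methods; the classes contribute no namespace.
from functools import singledispatch

class Circle:
    def __init__(self, r): self.r = r

class Square:
    def __init__(self, s): self.s = s

@singledispatch
def area(shape):
    raise TypeError("no applicable method for this class")

@area.register
def _(shape: Circle):
    return 3.14159 * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.s ** 2

print(area(Square(3)))  # 9
```

Contrast this with the C++ habit the paragraph describes: there, `area` would be a member function, and the class would matter mainly as the namespace it lives in.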

Looking at the Linux device files in the /dev directory, one notices that block devices and character devices can usually be accessed through the default libc functions, as if they were files. So whatever /dev contains is an abstraction over the hardware, which is sufficient for most purposes, but of course not for all. Now one might expect that, for example, NFS or Samba would be able to export a device file as well. And in fact they do, but they do not export it as the file it appears to be on the original computer - they export it as an actual device file, which means it gets a major and a minor number, as all device nodes do, and it then becomes an actual device pointing at the client. That is because, in the end, the filesystem is nothing but a namespace, and of course there may be reasons not to export whole disks via NFS (there are other solutions for that), and there may be reasons to export device nodes pointing at client devices rather than at devices on the NFS server. But in my opinion the latter is the more low-level behaviour and should therefore not be the default. This is because I consider myself a "scientist" rather than a "programmer" (while actually I am neither yet). The programmer would say "it does what its specification says, and there are alternatives that can achieve what you want if you really need it". The scientist wants an axiomatically reasonable piece of software with no "surprises".
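The major/minor encoding mentioned above is directly visible from Python's standard library; a device node is identified by that (major, minor) pair, not by any file content:

```python
# Device numbers: a device node carries a (major, minor) pair in its
# inode, which os.makedev/os.major/os.minor encode and decode.
import os
import stat

dev = os.makedev(1, 3)   # on Linux, (1, 3) is conventionally /dev/null
print(os.major(dev), os.minor(dev))   # 1 3

# Whether a path is a device node at all is a property of its inode:
if os.path.exists("/dev/null"):
    st = os.stat("/dev/null")
    print(stat.S_ISCHR(st.st_mode))   # character device on typical Linux
    print(os.major(st.st_rdev), os.minor(st.st_rdev))
```

This is exactly why exporting a device node over NFS exports the (major, minor) pair rather than the data behind it: the pair is all the filesystem actually stores.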

Another thing I hear very often is a mix-up of software certification vs. software verification vs. whatever else. There is a process called software verification in which you usually run your software through a lot of test cases, as you would do with hardware. This is reasonable, as long as you think about your code before you test it, and not only after some tests have failed. Then there is formal software verification, something that should be done whenever possible (and is almost never done). And then there is certification - which means, as far as I have seen, that some company looks at the software and gives its OK. These are three essentially different approaches to similar problems, and there seems to be a lot of confusion about which one does what.

Formal verification is still not used widely enough, I think, which may be because non-scientists usually cannot imagine what "formal verification" is. If you have a specification of what a certain piece of software should do, and you have a piece of software that really does it, then this is provable! There is no exception. I have heard so many opinions on this topic, but it is not a matter of opinion, it is a fact - as long as you accept that the computer is a machine that works deterministically, and of course as long as you assume that the hardware complies with its specifications; if you do not assume that, you have no way of creating software that complies with any specification anyway! Modern computers may react to temperature changes, brightness and the current gravitation vector, which are non-deterministic, but your computer still reacts deterministically to those inputs. If you cannot tell how your software reacts to them, your software is crap; and as soon as you can, you can prove its behaviour. Again, this is not a matter of opinion, it is a fact. There is currently no widely used verified operating system, and therefore no way of using an actual formal proof checker to check whether your reasoning about your software is correct, but formal verification can just as well be done on paper: can you print out your code and reason about its correctness with pen and paper? If you cannot, then you probably do not know how and why your software works. It is as simple as that.
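What "pen and paper" verification means can be shown on a deliberately trivial function: the specification and the proof live in the comments, and the asserts merely spot-check what the proof already establishes.

```python
# A pen-and-paper correctness argument for a tiny loop.

def isum(n):
    """Spec: for n >= 0, return 0 + 1 + ... + n (closed form n*(n+1)//2)."""
    total, i = 0, 0
    # Loop invariant: total == i * (i - 1) // 2, i.e. the sum 0 + ... + (i-1).
    #   Initialisation: 0 == 0 * (-1) // 2.
    #   Preservation: adding i to total and 1 to i turns i*(i-1)//2 + i
    #                 into (i+1)*i//2, the invariant for the new i.
    #   Termination: n - i strictly decreases each iteration.
    while i <= n:
        total += i
        i += 1
    # At exit i == n + 1, so total == (n + 1) * n // 2, which is the spec.
    return total

assert isum(0) == 0 and isum(10) == 55  # spot checks, not the proof
```

The point of the sketch is the comments, not the code: the invariant argument is the proof, and it holds for every n, whereas the asserts only cover two cases.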

But formal verification will not solve all problems either, even though some theorists think so. Even if the hardware does what its specification says, correctness of your program may not be enough, since correctness only says that your software does what you specified. With formal reasoning you can eliminate common bugs you know of, by specifying that they do not occur and then proving that specification. This has the major advantage that common bugs will probably only occur finitely often, until every piece of software extends its specification accordingly. But there remains the problem of whether the specification is really what you wanted. For example, some XSS exploits did nothing outside the common standards; they would have worked in a perfectly verified browser, since they mainly exploited the fact that JavaScript was once not used the way it is now. XSS exploits are a major problem, since there is no real formal way to solve them inside the browser: the browser's very purpose is to run the scripts given to it by websites, so formally the various web interfaces themselves would have to be verified - which is neither realistic, nor does it solve the general problem: not all bugs lie outside the specification. In addition, there is software for OCR, handwriting and other pattern recognition which basically cannot be verified to work correctly from the user's perspective. Thus, testing and informal verification will always be necessary.

Certification is just letting a company do the work, probably shifting the responsibility for problems onto that company. It solves no problem a computer scientist should care about; it may solve problems for smaller companies that need some kind of insurance that their software will not make their machines burn or something.

Reliability is something very important to software users. Which brings me to the next point: sometimes it seems like the larger software companies are trying to keep their customers stupid. And in fact, I often see the attitude that "the computer knows best". They would do better to tell their customers the truth: the computer knows nothing! It is a complicated, sophisticated machine, but it is still a machine. Maybe one day there will be a strong artificial intelligence, but so far there are only weak ones, and they may be useful, but they are not reliable!

There is so much software that uses non-optional heuristics. Copy-pasting on modern systems is an example of where these heuristics get annoying: you want to copy some text from a website into your chat client, and it applies some strange formatting that it somehow takes over from the piece of text you copied, while you actually wanted only the text. On the other hand, when you use your text editor and do want the actual style information, you will only get the plain text. These are annoying heuristics - one could educate the user that there are two kinds of pasting, plain text and formatted text, and in fact, that is what Pidgin does: it has an option "paste as text".

Another example that has annoyed me more than once now is the XHTML autocorrection of WordPress, which cannot be turned off on the WordPress-hosted blogs - probably because they do not allow arbitrary content. If it would then at least disallow any form of XHTML. But it does not; it applies a heuristic to your input that tries to guess whether you are writing HTML or plain text. It sometimes swallows backslashes and quotation marks. It is annoying!

Probably the most annoying thing, which at least can be turned off on most systems, are mouse gestures. I have not seen a single system where they worked for me - neither Mac, nor Linux, nor Windows. But I never saw the point of them anyway - two clicks versus a clumsy gesture ... what is the advantage?

The computer does not know best! It applies heuristics, and heuristics may fail. That is why I accept heuristics for unimportant things, but when the computer has to decide whether I want to delete a file or open it, this goes too far. In general, I do not like software written by programmers who think that nobody wants anything they did not think of.

LaTeX is a good example of such a piece of software. I have tried a few times to get deeper into the mechanisms of LaTeX, as not many people seem to do that. Well, the more I knew, the less I wanted to know. And on top of that, there is no real community to ask when you do not understand something. As long as you have simple questions, like how to get something into the center of the page or how to change the font to the size you need, there are a lot of "experts", but as soon as you want to understand the LaTeX software itself, there is almost nobody who knows anything. Why should you know something somebody else has already done for you? There is nothing you could want that LaTeX cannot do. And if there is, then you do not really want it: you either do not know the rules of typesetting, or you want something that nobody is supposed to want, and you should rethink it.

Everything that is hardcoded is law! This does not only hold for LaTeX. Hardcoding things like library paths or paths of executables is very common, especially in commercial software, but also in kernel modules that require firmware files. With libsmbclient, a library for accessing SMB shares, you cannot connect to SMB shares on a non-standard port; the port is hardcoded. It is hardcoded under Windows, too - well, not quite hardcoded, at least there is one central setting in the registry. Windows XP supports, besides SMB shares, WebDAV shares. WebDAV is based on HTTP, and quite a lot of secondary HTTP servers run on a port different from 80, often 8000 or 8080. At least the last time I tried, Windows did not support any port other than 80. Hardcoding things that should be configurable is a very annoying problem that unfortunately occurs very often.
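Making such a value configurable costs almost nothing. A minimal sketch in Python - the environment variable name SMB_PORT and the helper function are invented for illustration, not part of libsmbclient or any real API: keep the well-known value as a default, but let the user override it.

```python
import os

DEFAULT_SMB_PORT = 445  # the well-known default - a default, not a law


def smb_port():
    """Return the port to use: the user's override if set, else the default."""
    # Hypothetical configuration knob; a config file entry would do as well.
    return int(os.environ.get("SMB_PORT", DEFAULT_SMB_PORT))
```

One line of lookup is the entire difference between "works everywhere" and "works only on the standard port".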

Ok, I have named a lot of problems I see. Now I will name some solutions.

One major solution to a lot of problems would be formal program verification. Formal verification can be done on paper as well - there is no excuse for not doing it just because there is no widely used proof checker out there. You do not need to be a mathematician to do it (though having done simple mathematical proofs may be a good start), and you do not need to give a proof in Hoare logic. Most mathematical proofs are given in natural language, too. Just try to give a formal line of reasoning that can be challenged!

Then, when you write software, you should always ask yourself whether you are creating a new standard for something that already has one. If there is one standard, can you just extend it, instead of making something completely new? If you cannot, can you make your standard at least similar to the old one? If everybody tried to keep the number of formats small, especially by not inventing new ones without a reason, then maybe programmers could focus on software quality rather than portability.

And probably the most important part would be the education of the users. Software is not like hardware: it is easy to change, replace, re-invent. So the user's decision is far more important. Users should be told that it is their job to command the computer, and not the other way around.

Ok, it really pisses me off! For every IP range I block, another one appears. How many damn IP addresses do Chinese providers possess? It is not as if I had nothing better to do. Ok, from now on:

Zero tolerance against Spam!

Which means:

  • This site is no longer accessible without a browser that supports gzip (which any reasonable browser should do) - this is because I assume that most spambots will not support it.
  • If an IP is accessing my comment CGI script multiple times without any comment going through, then I will do a whois query on it, and if multiple IPs from the same provider are spamming, I will probably block that provider, especially when it is from China (which appears to be the source of most spam attacks). The Chinese government has a big firewall, as far as I know - why can they not block spam, instead of just infringing on their people's personal rights?
  • If I see an IP from a server provider doing nasty stuff (like accessing my comments page several thousand times), then I will contact this provider (in addition to blocking the IP). I have already done this, and I will keep doing it - reasonable providers have a mail address for reporting abuse of their services, and I will use it, because I think it is in the interest of me, these providers, and in the end the whole internet to get rid of that pest!
  • If this site were hosted in a country with laws reasonable for the internet, I would not hesitate to publish my list of blocked IPs, so other people could profit from it. But this site is still hosted in Germany. I am pretty sure that the rights of worried pensioners, or of the sons of worried parents with their infected computers, weigh heavier than me being annoyed by their contribution to the botnet attacking me, and so I would not be allowed to publish their IP addresses. A pity.
  • Again, if you know somebody who cannot access this website, please tell me - I do not want to lock out any legit person, but mistakes happen!
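The gzip rule from the first point can be sketched in a few lines. This is a minimal WSGI middleware in Python, my own illustration rather than the filter actually running on this site: it simply rejects clients whose Accept-Encoding header does not mention gzip, on the assumption that clumsy spambots omit it.

```python
def require_gzip(app):
    """Wrap a WSGI app, rejecting clients that do not advertise gzip support."""
    def middleware(environ, start_response):
        accepted = environ.get("HTTP_ACCEPT_ENCODING", "")
        if "gzip" not in accepted.lower():
            # Any reasonable browser sends "Accept-Encoding: gzip, deflate".
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Please use a browser that supports gzip.\n"]
        return app(environ, start_response)
    return middleware
```

It is a crude filter, of course - a bot can fake the header - but it costs legitimate visitors nothing.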

A seal for food "without genetic engineering" - wtf? Is that what people want? Just so I get this right:

Nobody cares about overbred maize varieties that make the soil infertile, nobody cares about over-fertilized fields that pollute the groundwater, nobody cares about pesticides, nobody cares about the disastrous conditions in German cow and pig stables? But genetic engineering is supposed to be dangerous?

The tax money spent on such a seal could be invested far better in meaningful education. The concept of a "gene" is simply too abstract for most people to do anything sensible with, and such abstract things rarely come up in everyday life.

I often get the impression that people see "genes" as some magical entity that is potentially dangerous and can magically take possession of everything it touches. And this notion is, for a start, completely counterproductive.

It is true that "gene" still denotes a base sequence, which is material, but the crucial part of a "gene" is not this sequence; it is the information it contains: in the end, genes are primarily software, and I have the impression that common usage is also moving towards "gene" denoting the hereditary information itself rather than the base sequence. A gene is therefore nothing you can touch, nothing you can put somewhere, nothing you could not also send by e-mail. In particular, it is nothing you can destroy, and nothing that could be dangerous in any way on its own.

A gene can become dangerous when it is expressed, that is, when the information is applied. The plant in question will then, according to the genetic code, produce an amino acid sequence, a protein; it could, for example, produce poison, or be so over-resistant that it displaces all other plants. In theory, the possibilities are manifold. In practice, this is rather unrealistic. More realistic is a scenario in which a certain plant species becomes unusable (e.g. inedible) and, after years of not being contained, eventually displaces the originally usable species, potentially throwing the rest of the ecosystem out of balance - and since ecosystems are chaotic, this can have devastating consequences. Most human interventions in nature with such consequences, however, ultimately arose from stupidity. Something like this happens, for example, when you open up these possibilities to people whose only goal is profit maximization.

And realistically, this is exactly where it will end up: some politicians will let themselves be talked at long enough that they deem it necessary to subsidize genetic engineering, with corresponding guarantees that the state will pay for all consequences, and entrepreneurs will then optimize towards taking exactly those risks the state vouches for. At least that is how I interpret the events around nuclear power, and I do not see why the same reasoning would not apply here. I consider all the lobbying efforts around genetic engineering to be, in the USA, an attempt to abuse patent law, and, in Europe, an attempt to break the planned economy built around agriculture.

Accordingly, I am, for a start, clearly against the broad use of genetic engineering in food production, especially in the form now emerging. I do not see great opportunities in it; I consider the whole topic vastly overrated, in both the positive and the negative direction. Whoever grows genetically modified plants must pay for all damages caused by them, and since we are dealing with something potentially very dangerous, it is perfectly justifiable to intrude on freedom far enough to demand proof that the legal entity would actually be able to pay in case of doubt - which, by the way, should be done with nuclear power as well, in my opinion.

But in agriculture I somehow still see genetic engineering as the smallest evil. Genetic engineering is one form of human intervention in nature. One of many. One should rather keep the whole picture in mind.

By the way, genetic engineering has long been used for the production of medicine; modern human insulin, for example, is produced that way.