Test Driven Development


Last week I was at the TDD Geecon conference in Poznan, Poland. It took place in a cinema. What a cool place for a conference, PowerPoint presentations on a big cinema screen. My overall impression is that testing and continuous integration have become mainstream and common practice among developers. TDD is just one aspect, a special emphasis on a test-first approach. For an overview of TDD I recommend Steve Freeman's presentation Ten Years of TDD and his book "Growing Object-Oriented Software, Guided by Tests".

Last year there was a bit of a controversy about whether TDD is dead. I think it is rather Ruby on Rails that is dead, not TDD. I wrote applications in Ruby on Rails for the last 8 years, but in the last year I have mostly developed in JavaScript. With CoffeeScript, CommonJS modules, Bower and the whole emerging JS ecosystem, JavaScript becomes very powerful – and you can finally use object-oriented programming in JS as well. New powerful JavaScript libraries and frameworks appear frequently. Rails seems to fade into the background.

I observed earlier that testing is like cleaning: as a cook you can cook in a dirty kitchen, but in the long run it is not advisable to let bugs grow in the kitchen. Similarly, as a developer you can develop applications without tests, but it is not advisable to have undetected bugs in the application. And if you always clean a bit, that is to say if you always check in your code a little bit cleaner than when you checked it out, you will eventually arrive at a clean system (like a liveness property in distributed computing).

Tests are like seat belts. There was a time when we drove without seat belts and had no insurance at all. Today wearing them while driving is required everywhere. They increase safety and reduce the chance that something bad happens. Just as in real crash tests, we try to break the system in tests in every possible way. The continuous execution of tests is the task of CI servers, which rely on good test coverage. They are like insurances: possibly costly and cumbersome to configure, but once they are set up, they increase safety a lot and are very convenient.




If tests are like seat belts, then mocks can be compared to crash test dummies: they look and act a bit like the real objects, but they are just fake. And they are quite useful, as long as they are not overused. Overuse can be costly and harmful. It is important to use safety tools like tests and dummy objects in the right way. A few points I noticed at the conference were:

  • TDD guarantees that your code has tests. Because you write tests first and code later, the code you write is always covered by tests, and can be refactored well later on.
  • Mocks are indispensable for TDD, like crash test dummies for car safety, but bad if you overuse them. Do not overuse mocking.
  • A single test should only test one thing, one rule, or one piece of business logic. If the test fails, you know exactly which rule has been violated.
  • Tests should reflect what a method does, not how it does it. We should test behavior, not the underlying algorithm or implementation. If you test the implementation, you cannot change it without breaking the test.
  • Sometimes duplication can be useful to avoid duplication: duplicating test cases is acceptable if it enables a suitable refactoring that eliminates duplication in the code. In the end, more repetition in tests can mean less repetition in code.
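A minimal sketch of these points in Ruby with Minitest (the Cart class, its methods and the payment gateway are invented for illustration): each test checks exactly one rule, the first test exercises observable behavior rather than the implementation, and the second uses a mock as a crash test dummy for a collaborator.

```ruby
require "minitest/autorun"
require "minitest/mock"

# Production code, written after the tests in a TDD cycle.
class Cart
  def initialize(payment_gateway)
    @payment_gateway = payment_gateway
    @items = []
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum
  end

  def checkout
    @payment_gateway.charge(total)
  end
end

class CartTest < Minitest::Test
  # One test, one rule: the total is the sum of the item prices.
  # We assert on the result, not on how the summing is done.
  def test_total_is_sum_of_item_prices
    cart = Cart.new(nil)
    cart.add(10)
    cart.add(5)
    assert_equal 15, cart.total
  end

  # The payment gateway is mocked: it looks like the real object,
  # but only verifies that the expected interaction took place.
  def test_checkout_charges_the_total
    gateway = Minitest::Mock.new
    gateway.expect(:charge, true, [15])
    cart = Cart.new(gateway)
    cart.add(10)
    cart.add(5)
    cart.checkout
    gateway.verify
  end
end
```

If the summing were later refactored (say, into an injected calculator), the first test would still pass unchanged, which is exactly the point of testing behavior instead of implementation.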

If we take a look back, we notice that tests have always been important in engineering. In rocket science it is well known that a rocket which looks perfect on paper but has not been tested is more likely to end in an explosion than in orbit. For the moon rocket Saturn V, for instance, each element was tested individually over and over again before the first launch of a complete rocket: the rocket engines, the different stages, the Launch Escape System, etc. I read somewhere that there was not a single element of the Saturn V rocket that had not been thoroughly tested beforehand. Tests can bring us to the moon and back :-)

Tests are also a bit like exams. They examine whether the system or the code fulfills all the necessary requirements. In the far future, when we possibly have deep learning in autonomous entities, we might write only tests, and the system tries to pass the tests after it has learned and trained itself for a while. You write the test and the system does the rest. In this sense, TDD could be the future.

Photo Credit:
– Don’t be a Dummy from Brett Klger
– Touring Club Suisse/Schweiz/Svizzero TCS via Compfight cc


Coding and Communication


Many of the best coders, developers and programmers have one problem: they do not know how to communicate well. Or they do not want to communicate. They know how to write code in the most complicated languages, but they do not communicate well with their peers, neighbors and colleagues, although they communicate with their machine all the time (by typing, hacking, pointing, etc.). Either they do not speak the language well, they find it boring, or they do not like to waste time talking. They would rather sit with their headphones on and talk to their computer, which is of course what they are paid for. But this lack of communication is a problem, because every developer in a team (a developer rarely works alone) would like to know what the other team members are doing, have done and plan to do.

But there is indeed a way to get developers to talk to each other: give them a program to communicate with. Give them a chat program, and they can communicate by coding. Developers only really talk with each other if they can use an application for it. Luckily we have plenty of chat programs like Skype, Campfire, HipChat or Slack. If we need to exchange larger texts, we can use email and wikis.

Likewise developers are always happy if their fellows show them what they have done, i.e. their code, their work, and how it works. Unfortunately they usually won’t do it. They only show their code to each other if they must, or if they can use an application for it, for example a version control system with a nice GUI like GitHub, GitLab or Gitorious. The pull requests from GitHub or merge requests from GitLab can be used for code reviews. Actually, this is one of the best features of GitHub, isn’t it?

Finally, developers only ask others for help if they can use an application for it, like Stackoverflow for instance. Maybe instead of forcing programmers to communicate in a language they do not like, it is better to give them an additional tool they like: an application. By using this application they can communicate by coding. Many of the most promising startup companies at the moment, like Stackoverflow, GitHub, or Slack, are actually tools for coders to communicate with each other.

The network cable picture is from Flickr user tueksta

Mind the gap between platform and requirements


You probably know the famous “Mind the Gap” signs in the subway (for example in London). They remind you of the gap between train and platform. As developers we should always be aware of the gap between platform and requirements. It is the task of the developer to close the gap between framework and requirements. But if the gap is too large, you might stumble, and the risk of failure rises.

If you need more than a few lines for a “Hello World” program, then the gap is apparently too large, and you are probably using the wrong language, library or framework. If you already need many lines of code for a very simple problem, then you will of course need many more lines for a complex real-world problem. Probably too many to keep it simple, rule number one in software development. In order to bridge the gap without stumbling (or falling into the abyss), we often use plugins, libraries or frameworks.

Actually, closing the gap means closing it on multiple levels. Frontend development means adapting the views and templates until the gap between the things which should be displayed and the things that can be displayed is closed. The things which should be displayed are typically specified in the requirements and the wireframe models. As a developer you tweak and twist your interface until every pixel looks like it should.

Backend development means similarly adapting the data model and the business logic until the gap between things which should be stored and things which can be stored is closed.

The Flickr photo is from user comisariopolitico

The rise and fall of the Microsoft empire

People have always been fascinated by the rise and fall of empires, as the popularity of Edward Gibbon’s monumental work ‘The History of the Decline and Fall of the Roman Empire‘ has shown. Even a large and mighty empire can crumble and fall. The Roman Empire vanished. The British Empire is gone. The same can happen to tech empires: does anyone remember the rise and fall of DEC? DEC (“Digital Equipment Corporation”) was a major American company in the computer industry and a leading vendor of computer systems, software and peripherals from the 1960s to the 1990s. The empires of IBM and DEC are gone: IBM is only a shadow of its former self, and DEC vanished with the emergence of Microsoft. Now, there is no reason why Microsoft should not have a similar fate. Empires rise and fall.

The reason why Microsoft became a successful empire is not that their software was superior. Neither MS-DOS nor the x86 processors from Intel were better than comparable products. The x86 processor architecture is indeed often considered ugly. But they were cheap and widespread. Compatibility was the key. PCs with MS-DOS were the business standard. They were good enough to run simple word processing and spreadsheet software. Software written for MS-DOS would run on any MS-DOS computer. A lock-in effect with a positive feedback loop set in: people wrote software for PCs because PCs sold well and were widely distributed in the business world, and people in turn bought PCs because there was a lot of software available for them. Soon everybody in the business world was using PCs, and the old DEC empire started to crumble. Microsoft used the new market power to gain a competitive advantage in the world of window systems. Again compatibility was the key. How many people remember the OS/2 operating system from IBM or VAX/VMS from DEC today? All commercial competitors disappeared until only Microsoft was left with Windows. Linux was able to survive in the open-source corner, a niche that is hard to tackle even for large corporations. But it was no serious opponent in the world of window systems.

This has changed. There are 750 million Android devices today. Times in the IT industry change fast. Now apparently the Microsoft empire is starting to decay (or at best to stagnate). The very pillars which made Microsoft successful begin to crumble. The new Windows 8 system is no longer compatible with the classic world of Microsoft Windows software. There is no longer a central desktop where Windows applications run. There is a desktop, but it is hidden behind a new interface. As you know, Windows 8 comes with a new colorful surface named “Metro”, which is intended to replace the desktop. Microsoft wants people to use the new “Metro” interface instead of the classic desktop, and wants people to download apps from their app store, similar to Apple’s app store or Google Play (the former Android Market). Apparently Microsoft is trying to keep pace with their competitors. Unfortunately they seem to damage the very pillar they are built on: compatibility.

Using old Windows software on a new Windows 8 system is a hassle. Older versions of Windows programs, for instance, often use help files in the Windows Help format. This format is no longer supported in Windows 8. Just try to enable the legacy Windows help system winhlp32 on Windows 8. It is annoying. If you start an old application which uses Windows Help, you might get the following message: “The Help for this program was created in Windows Help format, which depends on a feature that isn’t included in this version of Windows. However, you can download a program that will allow you to view Help created in the Windows Help format.” If you do this and follow the official links, you will get a link to an update of the help system, and if you try to install this update, an error message occurs which claims “the update is not applicable to this computer”. Great. It is possible to get it working, it is just difficult. There is in fact a non-functional stub of WinHlp32.exe in Windows 8, which shows the above message that the help does not work. It is possible to replace the WinHlp32 file, but the “TrustedInstaller” prevents you from doing it. Obviously Microsoft does not care whether older programs (for their own platform) still work.

From my humble point of view, Microsoft needs to fix two things: they need to ensure compatibility as much as they can (for example by fixing things like the WinHlp32 problem, even if it is a minor issue), and they must win back the hearts of business customers. These are the pillars their empire is built on.

  • Microsoft successfully managed to alienate many of their loyal developers and now even their main customers, i.e. small and large businesses. Their main software is called Office, and it is used in offices: in most offices I know there are PCs running Microsoft Windows. If Microsoft continues to alienate these customers, they will have a problem. These users do not have touch screen devices, and they are used to a classic graphical user interface with desktop and mouse input. They want to use the Office software they know (Word, Excel and PowerPoint) in the way they have always used it. The new Metro interface is not useful at all for classic computers with keyboard and mouse. By hiding the old desktop behind the new Metro UI, the multi-dimensional window UI is essentially being replaced by a 2-dimensional UI made of rectangular colorful tiles, like the ones we had in the age of DOS. The new Metro UI and the flat colored “live tiles” feel like a step back to the age of DOS. A finger is always less precise than a mouse pointer, simply because it is much wider. It may be useful for pointing at pictures or icons, but it is not useful for working with office software. A real step forward would have been a 3D UI (as can be found in games today), where the traditional desktop could be accessed through windows. That would have been revolutionary.
  • Apparently they neglected the compatibility of existing Windows software. This was always an advantage of Windows. Now traditional Windows software does not run as well as it always did, and the new Microsoft app store offers only a few apps. Whether Microsoft’s app store will ever offer as many good apps as the stores from Apple and Google remains doubtful. Developers tend to develop software for widely distributed systems, but most of the new devices run Android (i.e. a Linux derivative). Users increasingly use and buy computers without a Microsoft OS, either smartphones (iPhones and Android phones) or tablets (iPads or Android tablets). Whether Windows phones will be successful is an open question. Any UI rises and falls with the number of good apps available for it. A total replacement of the old desktop in the medium term would render all existing applications useless. And when it comes to devices with touchscreens, iPad and Android devices are at least as good as the new Windows 8, and more widely distributed.

This means Microsoft loses all its traditional advantages at once by the radical switch to a new UI. We will see how it turns out. I have a feeling that it will not turn out well. Too much change, and too late. Is this the beginning of the end of the Microsoft empire? Will they end like IBM, a pale shadow of their former self? People increasingly buy smartphones and tablet PCs, but they are not from Microsoft: they are mainly from Apple (iPhone & iPad), or equipped with Android. We saw in the microcomputer revolution what happens to older, larger systems when they are increasingly replaced by newer, smaller systems with a new operating system. I am curious how it will turn out this time.

( Photo Credit: Pedro Vezini via Compfight cc )

Unsteadiness of progress in development

There is a certain unsteadiness and ruggedness in the software world. Software development often feels like moving across a rugged landscape: sometimes it goes amazingly fast, but often you are just stuck and do not make progress for hours. Either you make a lot of progress in a short time, or you make no progress at all for a long time. There are times when you make a few keystrokes and everything just works, for instance when you stick a few plugins together, make some function calls, add a few lines of code, and everything just works. These are the good times, when you think you have achieved world domination and can move an army of bits with a few keystrokes, when programmers are like little gods in their little self-made binary universes.

And then there are times when things look desperate, when nothing works at all, and you do not know why, and cannot figure it out. An exception has been raised, an error occurs, or something does not work, and you have no idea why. Plugins, for instance, are wonderful if they work out of the box, automatically. But if they do not work, it becomes cumbersome. The more automated a plugin or component is, the more annoying it is when it stops working, because in this case you have no other option than to examine it in detail, which means drilling down through the simple shell into the complex core, where you understand nothing at first.

Version conflicts and dependency hells can be very time-consuming and annoying, too. Ruby on Rails programs, for example, need the right combination of Ruby version (for example Ruby 1.8.7 or 1.9.2), the right Ruby on Rails version (2.3.8 or 3.2), and the right RubyGems version (say 1.3.5). The gems and plugins have their own versions, too. The whole system only works if everything fits together. In the beginning this is no problem, because in a new system usually everything is up to date. But then time goes on, and you have to update the Linux version, or the Ruby version, or the RubyGems version. And suddenly the other versions no longer fit. It can be very frustrating to get the system working again in this case.
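One common way to keep such a combination fitting together is to pin the versions in a Bundler Gemfile. The version numbers and the pagination gem below are only an illustration, not a recommendation:

```ruby
# Gemfile — pin the versions so the combination keeps working together
source "https://rubygems.org"

ruby "1.9.3"                   # pin the interpreter version
gem "rails", "3.2.13"          # exact framework version
gem "will_paginate", "~> 3.0"  # pessimistic constraint: >= 3.0, < 4.0
```

Bundler then records the resolved versions in Gemfile.lock, so an update of one component cannot silently break the others.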

Software programs are usually not fault-tolerant systems at the basic level; there is no graceful degradation in machine language. On the lowest level, in machine language or assembly, the program works only if there is no error. A single error can bring the system to a full stop. Either the computer program runs, which means you have to get every instruction right, or it hangs, throws an exception and stops completely. It is of course usually possible to figure the problem out, if you have enough time, but sometimes it takes a long time to understand what is going on in the various stages of debugging.
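A small Ruby sketch of this all-or-nothing behavior (the process function and its items are made up): with the rescue clause the program degrades gracefully and keeps going; without it, the first error would halt the whole run.

```ruby
# Hypothetical example: process a batch of items, one of which is bad.
def process(item)
  raise ArgumentError, "bad item" if item.nil?
  item * 2
end

results = []
[1, nil, 3].each do |item|
  begin
    results << process(item)
  rescue ArgumentError
    results << :error # handled: the loop continues with the next item
  end
end

# Without the rescue clause, the nil item would raise an unhandled
# exception and bring the whole run to a full stop.
```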

Photo Credit: tim caynes via Compfight cc

Fundamental attribution error of programming

Sam Stephenson is the creator of the Prototype JavaScript framework and rbenv, the competitor to RVM. He recently wrote an interesting article about why programmers are not their product, named “you are not your code“. Are you?

This is in fact what programmers quite often do: they identify themselves with their code. After all, they have written and created every line and every character. They have invented the names, the functions, and the structures. Nobody else knows their code as well as they do. They own their “precious” code. Programmers are like little gods who like to rule their own universe.

The advantage is obvious: if the software is successful and you identify with it, it is your success. The drawback: if the software is not successful and you identify with it, it is your failure. This is similar to a sports team: if a sports team wins, everybody wants to take part in the success. If the team keeps losing, everybody starts to blame each other: the president blames the trainer, the trainer the players, the players each other, etc.

It often works to claim ownership of something, because people have a lot of cognitive biases. One of these biases is the fundamental attribution error in psychology: we have a tendency to over-emphasize personality-based explanations and ignore the role of other influences (for instance situational ones). We also tend to attribute great events to great men, known as the great man theory.

While it is debatable whether this is a good thing or not, a developer of a modern web application can hardly claim he is the only author of it. In the early days of PCs, it was only the programmer and the CPU that mattered, at least if you did machine programming in assembly language directly. Then we had the first high-level programming languages to program systems with disk operating systems like CP/M or various forms of DOS. Together with graphical user interfaces, object-oriented programming languages arrived, and for the web, comfortable high-level languages with garbage collection like Java, Ruby or Python appeared. Today we have 4 or 5 layers between the programmer and the CPU: a Ruby program, for example, is written in Ruby, Ruby is written in C, C compiles to assembly, and assembly boils down to machine code.
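You can get a glimpse of these layers from within Ruby itself. This sketch is specific to CRuby, the reference implementation, and the exact instruction names vary between Ruby versions: it disassembles one line of Ruby into the YARV bytecode that the C-implemented virtual machine executes.

```ruby
# CRuby-specific: peek one layer down, from Ruby source to the
# bytecode that the C-implemented virtual machine runs.
iseq = RubyVM::InstructionSequence.compile("1 + 2")
puts iseq.disasm
```

The output lists low-level instructions (pushing the operands, then an optimized addition) that the programmer normally never sees, written and maintained by countless other developers.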

And this is only the language itself. A modern web application is like an iceberg: the stuff above the surface is written by you and your team, the stuff below by countless others. It is not only the language and the tools for editing and debugging; a web application is based on a lot of different servers and systems:

  • the operating system like MacOS or Linux
  • the web server like Apache or Nginx
  • the web server modules like Phusion Passenger
  • the database server like MySQL or PostgreSQL
  • the caching server like Memcached or Redis
  • the mail server and mail transfer agents like Postfix or Sendmail
  • the message queue processing server like ActiveMQ, RabbitMQ or ZeroMQ

Then there are also the languages and version management systems, frameworks and libraries,
gems and plugins, written by countless other developers:

  • languages like C, Ruby, Python or Javascript
  • version management systems like SVN, Git, RVM or rbenv
  • frameworks like Rails or Django
  • libraries like Prototype or jQuery
  • gems and plugins for pagination, authentication, etc.

In order to build a modern application, you set up different servers and configure them, choose a language, a framework and suitable libraries, and finally you select different plugins and gems and stick them together in a unique way. If you have done all this, you can hardly claim you have created the system. And yet we tend to do it.

Therefore, if you are a Ruby developer and you have produced more than others, it is not because you are taller or smarter. It is probably because you are standing on the shoulders of many others.

(The sourcecode photo is from Flickr user nyuhuhuu)

Ubuntu on Samsung Series 7 Chronos


After my 8-year-old laptop refused to work this year, I looked around for a while for a new one. The Lenovo ThinkPads looked good; they are quite popular among Linux fans. Sony and Apple make good machines as well. Finally I decided to buy a new Samsung Series 7 “Chronos” laptop and tried to create a dual-boot system for Windows 8 and Ubuntu 12.10. This turned out to be more difficult than expected.

By default the machine has Windows 8 installed, uses UEFI and has “Secure Boot” switched on in the BIOS. After I switched “Secure Boot” off in the BIOS (and set it to “UEFI and CSM OS”), I was able to install Ubuntu by booting from CD via Settings/Change PC Settings/General/Advanced Startup in Windows 8. The installation was cumbersome, because after the installation and restart, the machine somehow ignored Ubuntu and booted only Windows 8. With the help of Boot Repair it finally worked.

So now I have got a new Samsung Series 7 laptop with a dual-boot setup for Windows 8 and Ubuntu 12.10. Or so I thought. Windows 8 starts fine, but when I started Ubuntu, the following Machine Check Exception error regularly occurred:

[Hardware Error] CPU 1: Machine Check Exception: 5 Bank 6
[Hardware Error] RIP !inexact! 33
[Hardware Error] TSC 95b623464c ADDR fe400 MISC 3880000086
.. [similar messages for CPU 2,3 and 0] ..
[Hardware Error] Machine Check: Processor context corrupt
Kernel panic - not syncing: Fatal Machine Check
Rebooting in 30 seconds

As you know, a kernel panic is the Linux equivalent of the Windows Blue Screen of Death. Something you don’t want to see too often. It certainly does not sound good. The laptop rebooted every time after the kernel panic. The second boot attempt often worked, but the kernel panic errors were of course annoying. I wondered whether it was a kernel or a driver problem. I deactivated Hyper-Threading in the BIOS and also disabled the Execute Disable Bit (EDB) flag in the BIOS. EDB is an Intel hardware-based security feature that can help reduce system exposure to viruses and malicious code. After that the error occurred less frequently, but it still appeared occasionally.

Finally I found a kernel bug report, 47121, where someone reported that it may help to set the “OS Mode Selection” in the BIOS to “UEFI OS” instead of “UEFI and CSM OS”. The packages and libraries that are loaded seem to be different. I had needed to switch to “UEFI and CSM OS” to install Ubuntu in the first place; now I had to switch it back. And after I set it back to “UEFI OS”, the GRUB boot menu seemed to have a higher resolution and – it booted without errors. It looks like UEFI was the root cause of all the major troubles.

Thus, if you get a kernel panic error like the above on a Samsung Series 7 or Series 9 laptop, have a look at the BIOS settings. Deactivate advanced performance settings like Hyper-Threading and the EDB bit, and set “OS Mode Selection” to “UEFI OS”. With the right BIOS settings the Samsung laptop works really well, with both Windows 8 and Ubuntu 12.10. It is a nice machine, high quality, well equipped, comparable in every aspect to a MacBook Pro (just like the Samsung Galaxy S2/3 is like the iPhone 4/5, and the Samsung Galaxy Tab is like the iPad).