Welcome to the Church of TDD

! Warning: this post hasn't been updated in over three years and so may contain out of date information.

Church of TDD

“Test Driven Development (TDD) is great. TDD is awesome. TDD will give you 100% code coverage. TDD will give you clean code. TDD is your coding saviour. Evil sinners test after they code and their code is bad. ‘Test first’ is the only one true way. You are a professional developer only if you use TDD. All bow down and worship at the altar of TDD. Oh by the way, TDD isn’t a religion. Honest.”

The above is my précis of chapter 5 of The Clean Coder by Robert C. Martin. You might think I’m being deliberately provocative in the way I’ve written it, and to a very small extent I’ll concede I am. Only to a very small extent, though, because, in all honesty, that is exactly how the chapter came across to me – and I know that admitting this will be seen as provocative by many TDD fans. Despite this, I’m saying it anyway.

It’s likely by now that you are jumping to some conclusions about me. So let’s clear a few things up. I write unit tests. Lots of them. I aim for 100% code coverage with my tests. I run my unit tests with every compile. I refactor my code all the time – with confidence – thanks to those tests. I have recently started viewing those tests as a vital part of my code’s documentation, and I strive for clean code. I fully embrace the idea that the old “design, develop, test” waterfall method is a failed methodology. I also agree with the iterative agile principle that no task – no matter how small – is “done” until it is tested and documented. Yet despite all that, I do not use TDD.

Many TDD advocates like to sell a simplistic dichotomy: either you employ TDD when developing your code, or your code is untested, poorly structured, unmaintainable and you fear to refactor it. Mr Martin’s testing chapter is written in this vein. He, and the other folk who peddle this false dichotomy, are wrong though. It is perfectly possible to produce professional-quality, fully tested, clean code without using TDD. I claim this because that is what I do. Now, I might be some unique individual with special powers that let me do this. I doubt it though; far more likely is that TDD is not “the one true way”; it’s just one of many ways of achieving clean code.

Developers over the years have been called all sorts of fancy names: software craftsmen, software engineers and even computer scientists. If TDD advocates want to demonstrate their way is the only way, they need to act like scientists. They need to propose testable hypotheses; they need to strive to eliminate all but one variable in their experiments; they need control groups; they need large sample sizes and they need to apply statistical analysis to the results. Even then, they have to talk in terms of there being less than a 10%, 5%, 1% or whatever probability of their results being purely due to chance. For that is the nature of real science.

An examination of the available empirical data quickly reveals that such high quality experiments have either never occurred, or haven’t been well published. One can find a few studies, but they tend to use small sample sets (so the probability that the results arise through pure chance might be 10% or more) and the conclusions drawn can be truly bogus. A detailed analysis of one such study revealed that the data actually showed that the code-first approach resulted in better quality tests!

So given the lack of convincing evidence that TDD is the only way, should we expect phrases like “The upshot of all this is that TDD is the professional option … it could be considered unprofessional not to use it.” and “‘But I can write my tests later,’ you say. No, you can’t. Not really.” to be tossed about as if they were fact? No, of course we shouldn’t. These are baseless assertions, better known as dogma. Yet The Clean Coder’s chapter on testing genuinely contains these two quotes and other such outlandish claims.

Interestingly, I suspect that Mr Martin felt – at the back of his mind – that he was making ridiculous claims about TDD in his book. The chapter finishes with an immensely sensible statement: “For all its good points, TDD is not a religion or a magic formula. Following the three laws does not guarantee any of these benefits. You can still write bad code even if you write your tests first. Indeed, you can write bad tests.”

The problem with this quote, though, is that it comes after a full chapter of claims that TDD is a magic formula; that following the three laws does guarantee many benefits; and that the only way to avoid writing bad code is to strictly follow the path of TDD. Such a closing statement is akin to folk who say “in my humble opinion” before expressing a vain, proud opinion. It’s disingenuous. A disclaimer that one is not preaching a religion, delivered at the end of a religious preaching session, is just plain ridiculous.

TDD can sometimes offer huge benefits to a developer. Sometimes TDD can help a developer write clean code that is well tested, well documented and refactored until it is as clean as can be. TDD is no guarantee of this though. Sometimes, writing tests after you write your code, then fixing and refactoring your code as a result can work better. Sometimes it won’t and maybe a halfway house works best. To be a good developer, one must write good tests that test all of one’s code and one must remember that the primary audience of good code is other developers, not the compiler. How you personally achieve that will depend on how you work. For someone – no matter how famous – to suggest you are implicitly unprofessional if you don’t do it his way is deeply insulting to our profession.

The bulk of the chapter is full of good ideas: goto is harmful; automated tests provide certainty over your code quality; automated tests do reduce the defect injection rate and give you the courage to clean up bad code through refactoring. Those tests can, if well designed and written, provide useful API documentation. Testing during the development phase helps with design, code quality, production rate, etc. Yet absolutely none of this – in my experience – requires you to dogmatically follow the three laws of TDD.

TDD, ten years ago, was a brilliant wake-up call for a seriously screwed-up industry. These days, any developer worth their salt realises that testing as you develop is a “good thing.” Anyone disagreeing with this fits firmly in the “what’s wrong with goto anyway?” camp of programming Luddites. However, TDD advocates have to wake up to the fact that things have moved on. The TDD methodology doesn’t work for some folk. We can accept this and embrace more flexible Test Orientated Development practices, or we can risk those folk, for whom TDD doesn’t work, returning to the old design/code/test waterfall methods.

TDD is one great solution to helping developers improve their code quality. It isn’t the only way though. We have a choice: the development community can become a body of dogmatic preachers on one side and a bunch of heretics who vary from the one true path on the other. Or we can become a body of professionals that recognises there is more than one way to write good code.

I know which option I prefer. What about you?

7 thoughts on “Welcome to the Church of TDD”

  1. I do agree with you that it is not essential to write all your unit tests before writing the implementation code; you can still produce equally professional, well-tested code by writing your tests in parallel or just after. There are generally no one-size-fits-all solutions in software, so to claim any paradigm as ‘the answer’ is bad.

    However, I personally do find that when I write the unit tests first it forces me to think about the contract of my objects in advance and to see problems coming early (perhaps making the fixing/refactoring after the tests, which you mention, unnecessary). If I can take some requirements, convert them into unit tests/interfaces and POJOs, then fill in the implementation until they all pass, then I have fulfilled all the requirements. Going back over the requirements afterwards to write the tests may reveal I have missed something crucial in my thinking.

  2. Having worked with David for 20 years (on and off) I can confirm that he is indeed a ‘unique individual’ but I’m not quite sure what his special powers are yet.

    I do find myself agreeing with him in this case though, much as I also admire Uncle Bob (I consider Clean Code an absolute must read for anyone who is serious about software development).

  3. I have to say I totally agree. But I am anonymous because I think I would be burned at the stake for saying this: I write code before I write the tests! Whew, OK, that’s off my chest. Here is my standard cycle.

    1. Get problem > think of the code solution.
    2. Think about design patterns that could easily solve the issue.
    3. Write some very simple code to solve the problem.
    4. Write some unit tests to test as many things as I can think of involved in that problem, including some negative tests and things that should throw exceptions.
    5. Refactor.
    6. Add more tests.
    7. Think about performance.
    8. Refactor.
    9. Add more tests.
    10. Think about reusability.
    11. Refactor.
    12. Add more tests.
    13. Think about readability.
    14. Refactor.
    15. Add more tests.

    Step 4’s unit tests allow me to refactor with ease and confidence; they are absolutely vital. But they don’t need to be written first.

    We have to ask why TDD makes up only 3% of the development community; it is because it doesn’t follow a natural creative pattern. That being said, the people who get really into TDD do see benefits, but if we constantly ignore the large barrier to entry that is “reorganizing your thoughts”, we will continue to call good coders bad coders based purely on dogma.

  4. Hi “Anonymous”,

    Thanks for the feedback. Looks like you and I have a very similar approach to testing. A question for you if I may: do you have a source for your 3% figure for developers who use TDD?

  5. Here’s a related question I’m struggling with.

    I work on a large (Java) code base. It has many, many “bogus” tests: e.g. unit tests that have no asserts at all, or tests that have EasyMock.replay() but no EasyMock.verify(). Presumably these were written by monkeys who were taking a break from typing Shakespeare.

    I’d like to find, or create, a static analysis tool that can parse the Java code for the test classes and find these bogus methods (at least with some reasonably high degree of probability).

    Any suggestions? Has anyone been down this road before?
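    One crude starting point – before reaching for a proper Java parser – might be a regex scan for @Test methods whose bodies contain neither an assert call nor an EasyMock.verify(). The sketch below is a heuristic only (the method names in the sample are made up for illustration); it will miss nested braces and unusual formatting, but it shows the shape of the idea:

```python
import re

# Crude pattern for a JUnit test method: "@Test public void name(...) { body }".
# A real tool would walk the AST instead; regexes are only a first approximation.
TEST_METHOD = re.compile(
    r"@Test\s+public\s+void\s+(\w+)\s*\([^)]*\)\s*\{(.*?)\n\s*\}",
    re.DOTALL,
)

def find_bogus_tests(java_source: str) -> list:
    """Return names of test methods with no assertion and no EasyMock.verify()."""
    bogus = []
    for name, body in TEST_METHOD.findall(java_source):
        # Flag the method unless it calls assert*(...) or EasyMock.verify(...).
        if not re.search(r"\bassert\w*\s*\(|EasyMock\.verify\s*\(", body):
            bogus.append(name)
    return bogus

# Hypothetical sample input to demonstrate the heuristic.
sample = """
@Test public void testGoodPath() {
    int result = add(2, 2);
    assertEquals(4, result);
}

@Test public void testDoesNothing() {
    EasyMock.replay(mock);
    service.run();
}
"""

print(find_bogus_tests(sample))  # -> ['testDoesNothing']
```

    Running this over the test source tree and reviewing the flagged methods by hand would at least narrow the search, even if it can't prove a test is genuinely bogus.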

  6. Fantastic blog entry, great discussion, a must-read and everyone should think about it…

    What I also see is that if you write your unit tests up front, you get a much better design of your APIs; you might later do some coding before you write your tests. I’m also not a voter for 100% coverage, as there is so much code (getters/setters) that does not necessarily need to be tested. But all in all, writing those unit tests, and keeping them small and simple, is a must in today’s development.
