Why POUT (aka TAD) Sucks

Tim and I are no strangers to controversy and debate. As agile coaches, we get challenged all the time. "TDD is a waste of time, it doesn't catch any defects, you don't want to go past 70% coverage," and so on. They sound like excuses to us. But we're patient, and perfectly willing to have rational discussions about concerns with TDD. And we're the first to admit that TDD is not a panacea.


Still, we haven't yet seen a better developer technique than TDD for shaping a quality design and sustaining constant, reasonable-cost change over the lifetime of a product. If you show us a better way, we'll start shouting it from the rooftops. We're not married to TDD; it just happens to be the most effective and enjoyable way we've found to code.


Many of the challenges to TDD come from the crowd that says, "write the code, then come back and write some unit tests." This is known as POUT (Plain Ol' Unit Testing), or what I prefer to call TAD (Test-After Development). The TAD proponents contend that its practical limit of about 70% coverage is good enough, and that there's little reason to write tests first a la TDD.


70% coverage is good enough for what? If you view unit testing as a means of identifying defects, perhaps it is good enough. After all, the other tests you have (acceptance and other integration-oriented tests) should help catch problems in the other 30%. But if you instead view tests as "things that enable your code to stay clean," i.e., as things that give you the confidence to refactor, then you realize that almost a third of your system isn't covered. That uncovered third will become rigid and thus degrade far more rapidly in quality over time. We've also found that it's often the more complex (and thus fragile) third of the system!


And why only 70%? On the surface, there should be little difference. Why would writing tests after generate any different result from writing tests first? "Untestable design" is one part of the answer; "human nature" accounts for the bulk of the rest. Check out the card (front and back).
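
To make the "untestable design" point concrete, here is a minimal sketch in Java. The class names and the late-fee rule are hypothetical illustrations, not something from the card: code written test-last tends to hard-wire its collaborators, while driving the same logic with a test pushes the awkward dependency out to the caller from the very first test.

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;

    // Test-last style: the system clock is buried inside the method,
    // so a unit test can't control "today" without trickery.
    class LateFeeCalculator {
        double lateFeeFor(LocalDate dueDate) {
            LocalDate today = LocalDate.now(); // hidden dependency
            long daysLate = ChronoUnit.DAYS.between(dueDate, today);
            return daysLate > 0 ? daysLate * 1.50 : 0.0;
        }
    }

    // Test-first style: the first test needs to pin down "today",
    // so the clock moves out to the constructor and the late-fee
    // logic becomes trivially testable.
    class TestableLateFeeCalculator {
        private final LocalDate today;

        TestableLateFeeCalculator(LocalDate today) {
            this.today = today;
        }

        double lateFeeFor(LocalDate dueDate) {
            long daysLate = ChronoUnit.DAYS.between(dueDate, today);
            return daysLate > 0 ? daysLate * 1.50 : 0.0;
        }
    }

    // A JUnit-style check against the second version is a one-liner:
    //   assertEquals(3.0,
    //       new TestableLateFeeCalculator(LocalDate.of(2010, 1, 5))
    //           .lateFeeFor(LocalDate.of(2010, 1, 3)),
    //       0.001);

Nothing about the second version is clever. The point is that the first version's design only reveals itself as hard to test once you sit down to write the test, which with TAD happens last, if at all.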



6 comments:

  1. Not all cards are intended for ultimate publication! Tim and I will have a bit of fun with some of these, which may help stimulate other ideas or help improve the content of existing cards. Please chime in with your opinions.

  2. TAD sucks because you only start testing after you've written code that's hard to test.

    TAD sucks because it tends to duplicate whatever was in the AT in a different language.

    TAD sucks because it is hard, slow, and unrewarding. I can't come up with anything worse to say about a coding practice.

    TDD, on the other hand, works like a son-of-a-gun!

  3. Hey Tim--I kind of like the idea of having a category of "argument cards," things that I might carry to be able to refute common points that someone might come up with.

  4. @jeff: not a bad idea. We should come up with argument/however/however/therefore cards.

  5. Why would writing tests after generate any different result from writing tests first?

    The answer is that your notions of risk, your test ideas, and the problems you'll encounter cannot all be anticipated in advance of writing the code. We learn things as we write the code and after the code is written. That learning leads to new risk ideas and new test ideas, things that make us curious about the behaviour of the program. "Okay, that's done. Hey... I wonder... what if...?"

    I'm not arguing against TDD here. I'm a tester. Programmers who use TDD provide me with more robust and more testable code. Thank you. I can spend more time getting better test coverage if I don't have to investigate and report on the kinds of problems that TDD helps to prevent. This is a Good Thing. I like the idea, and I wish more programmers would do it.

    Instead of objecting to TDD or advocating only TAD, I'm cautioning that TDD is necessary but not sufficient for a program to be well tested. That is, testing only after you've written the code and testing only before you've written the code both suck.

    ---Michael B.

  6. Michael: nice.

    Of course TDD is testing before, during, and after. It is indeed the "test only after" that we're vocally against.

    We've found that the red-green-refactor cycle (test before and as you code, clean up as and after you test) has a profound effect on the code we write, and not merely in lower defect density.

    Some people call TDD unit tests "programmer tests" because they primarily benefit us. TDD is sufficient to enable us to make steady progress and clean the code as we work. It may not be sufficient for other purposes.

    The app still has to be run and demonstrated, there have to be acceptance tests, there is that extremely human activity of exploratory testing, and there needs to be performance and scalability testing. There are plenty of "ilities" remaining that require other kinds of testing.

    But I'm glad that testing professionals find TDD-ed code easier to work with.

    You mentioned "testing only before you've written the code," which is something I've never seen done. Would anyone really write all the tests first, before coding? That sounds like big-design-up-front (BDUF), and I don't think it would work. Maybe you have references to people actually working that way; it would be new to me.

    Tim

