Discussion On
Test Driven Development and the Meaning of Done

by in Tips & Tutorials

11 Comments

  1. Bryan
    Thursday, December 2, 2010

    Excellent points. I’d be curious to know if the TDD team was still slower when it comes to version 2 and version 3 of the software. I did notice a couple of quick issues you might want to know about: you doubled the word “strong” just after the TDD steps, and I think you meant relieve rather than relive in the last sentence. Thanks for posting a great entry!

    Reply
  2. anon
    Thursday, December 2, 2010

    This is comparing apples and oranges. The two development teams in the test should keep working until they have reached (roughly) the same number of bugs or hours invested in the project, and then compare the remaining free variable.

    Reply
  3. Jon
    Thursday, December 2, 2010

    Why did they declare the project done and ship with so many outstanding bugs?
    Why, indeed! To my mind, “done” means “feature complete and free of defects.” Hell, I could build a spaceship in a week if it doesn’t have to actually work. “Survive the stress of lift-off or the heat of atmospheric re-entry? Well, we didn’t exactly test for that…”

    And if the project is truly calendar-bound, then I would challenge the client to accept a smaller feature list — build fewer, more reliable features. It may not do as much, but what is there works. Well. But then, I use an Agile approach which strives for maximum transparency: the client knows exactly what is possible based on the team’s velocity, and we simply reflect reality, not wishful thinking. We don’t “hope” for good results and settle for whatever happens. IMO that’s just unprofessional.

    I, too, would be interested in the point Bryan raised — what happens when it’s time to build v2 or v3? IMO, bugs are like an infection — the longer they are allowed to fester, the deeper their impact and the greater the difficulty when it comes time to solve them, due to side effects. It is similar with poor design decisions: a shortcut which “saves time” up front can cost many hours or days later due to its shortcomings.

    As an aside, I simply LOVE the “Programmers/Coders at work” series for the insights into programmers’ minds and working styles.

    Reply
    • Jess Johnson
      Thursday, February 3, 2011

      I absolutely agree with your definition of done, and wouldn’t want to call any project of mine “done” unless it was feature complete and free of defects. However, I can imagine a few scenarios where it might be better to ship a buggy feature than not to ship it at all. A startup that wants a first mover advantage might fit into this category. Or to take your analogy a bit further, what if your spaceship was the only hope to destroy a comet on course to demolish the earth in a week?

      Reply
      • Jeff L.
        Tuesday, August 28, 2012

        Hi Jess,

        I’m ok with the lean startup notion of shipping code that’s likely to be crappy. That’s a calculated risk. The agile folks make the case for a different tradeoff: shipping a smaller feature set of high quality, then incrementing the product. Both can work; it depends on the context.

        I’ve done enough of both TDD and non-TDD development. It usually takes me only a couple of days before I start recognizing time I’m wasting because I’m *not* doing TDD. If my spaceship were the only hope, TDD would start paying off a day or two in. I’d rather have a tool that prevented me from incorporating a major defect that would cause it to crash and burn before I smugly sent it up at the last minute.
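
        To make that loop concrete, here’s a minimal test-first sketch in Python. The survives_reentry check and its temperature numbers are invented purely for illustration, not anything from the article:

          import unittest

          def survives_reentry(hull_limit_c, peak_temp_c):
              # The tests below were written first and watched fail; this line made them pass.
              return hull_limit_c >= peak_temp_c

          class ReentryTest(unittest.TestCase):
              def test_hull_rated_below_peak_temperature_fails(self):
                  self.assertFalse(survives_reentry(hull_limit_c=1200, peak_temp_c=1650))

              def test_hull_rated_above_peak_temperature_passes(self):
                  self.assertTrue(survives_reentry(hull_limit_c=1700, peak_temp_c=1650))

          if __name__ == "__main__":
              unittest.main()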

        Contrived examples aside, I’ve also seen shops that hacked together quick solutions and shipped them to some success. Years later, every developer cursed every minute of working on the software, by then several hundred thousand lines built atop a core hack-base of maybe 50,000 lines of really shitty code.

        Lack of quality costs far more in the long run.

        Reply
  4. Enzo
    Tuesday, May 31, 2011

    It is the 80/20 rule: 20% of the code base will cause 80% of the bugs, and not all bugs even matter. Spending so much time testing can be even more wasteful than not testing at all. If the time taken to write and maintain unit tests is greater than the cost of just fixing the bugs, then you are wasting time and resources. Now you have to bug-fix your regular code and you have to bug-fix your test code too. If you can’t write regular code properly, what makes you think you will write test code properly? Don’t you then need a unit test for the test code?
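
    Put as arithmetic (every figure below is an invented placeholder, not data from any study), the tradeoff looks like this:

      # Break-even for time spent on unit tests; all numbers are invented placeholders.
      hours_on_tests = 40.0         # writing and maintaining the unit tests
      hours_per_escaped_bug = 4.0   # average cost to find and fix one bug later
      bugs_caught_early = 15        # bugs the tests stopped from escaping

      net_hours = bugs_caught_early * hours_per_escaped_bug - hours_on_tests
      print(f"Net effect of testing: {net_hours:+.1f} hours")  # +20.0 here; negative would mean waste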

    Reply
    • J. B. Rainsberger
      Wednesday, August 29, 2012

      When I write a test, I have to clarify what I’m trying to do. This act alone reduces the number of mistakes I make. True, it doesn’t focus that reduction on “the parts that matter”, but I don’t usually know before building the system which parts will matter. I can guess, but I don’t think I can guess well enough to trust those guesses.
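
      As a hypothetical example (the names are invented, not from any real project), even a trivial test pins the behavior down before the code exists:

        import unittest

        def normalize_username(raw):
            # Writing the tests first forced two decisions: trim whitespace, fold case.
            return raw.strip().lower()

        class NormalizeUsernameTest(unittest.TestCase):
            def test_surrounding_whitespace_is_removed(self):
                self.assertEqual(normalize_username("  alice  "), "alice")

            def test_case_is_folded_to_lowercase(self):
                self.assertEqual(normalize_username("Alice"), "alice")

        if __name__ == "__main__":
            unittest.main()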

      Reply
  5. Jeff L.
    Tuesday, August 28, 2012

    An odd, somewhat cynical conclusion.

    Doesn’t introducing a significant defect (i.e. one that must be fixed) that is found by QA delay the ability to ship product? The cost of managing each defect also incurs incremental delays that do add up.

    More studies would be useful here, but “faster to develop” isn’t the only relevant factor. What is the cost of a defect? It’s not simply the time to fix it. Part of the cost gets shifted to the customer in the form of wasted time or inability to effectively complete work, part of it to your company in the form of support calls. When a defect is significant enough, or when you have gobs of them, you lose customers (seen it). You also lose opportunity time while you fix it, and the cost to fix will often increase as the time between introduction and discovery increases.
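
    As a toy accounting of that point (every figure below is invented, not taken from any study):

      # One escaped defect costs more than the time to fix it; all figures are invented.
      fix_hours = 6              # developer time to reproduce and fix
      support_hours = 3          # support calls triaged while the bug was live
      customer_hours_lost = 20   # cost shifted to customers as wasted time
      hourly_rate = 100          # dollars per hour, a placeholder

      true_cost = (fix_hours + support_hours + customer_hours_lost) * hourly_rate
      fix_only_cost = fix_hours * hourly_rate
      print(f"True cost ${true_cost} vs. fix-only cost ${fix_only_cost}")  # $2900 vs. $600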

    Other studies might examine TDD (practiced with high levels of effort to incrementally keep the code clean) and the corresponding codebases over time. Most codebases are not factored well and contain 50-100% more code than they need (this is anecdotal, but based on seeing hundreds of customer systems).

    What is the long-term cost of a code base that is difficult to understand and maintain? How many of us have spent an afternoon deciphering code and struggling to add a small feature that should have taken only 15 minutes?

    If I did TDD only to minimize defects, I’d probably worry about the research and numbers more. But I’d also insist on a proper accounting for the true cost of defects.

    Reply
  6. J. B. Rainsberger
    Wednesday, August 29, 2012

    I strongly suspect, with admittedly only anecdotal evidence and personal experience as justification, that giving the non-TDD teams more time will help much less than you’d expect, primarily because the people on those teams will spend the extra time thinking about algorithms and data structures, rather than about interfaces and behavior.

    In other words, simply giving the non-TDD teams more time will not encourage them to think about the kinds of things that lead to asking “Have we done everything we need to do yet?” TDD provides but one mechanism for encouraging people to ask this question; indeed, it encourages people to make a habit of asking it.

    Reply
  7. shmoo
    Wednesday, November 21, 2012

    I’ve seen many TDDers write terrible code, all of them falling for the myth that testability = good design.

    This is wrong. Testability is only a tiny part of good design. It’s been overwhelmingly my experience that TDDed code has to be rewritten by others within only a couple of sprints because it’s so bad.

    Reply
  8. Eric
    Friday, October 16, 2015

    Didn’t see this anywhere in the comments, so I thought I’d point it out: TDD isn’t the only effective defect-reduction mechanism in town. In fact, it’s not even considered to be the most effective. Individual developer and team design and code reviews are usually far more effective at identifying defects than TDD alone, especially for non-trivial systems. Had those non-TDD teams been doing design and code reviews rather than TDD, they probably would have released higher-quality software, and they would have done so more quickly. Only practicing high-yield design and code reviews, guided by regularly updated checklists of common defects, will convince you of this, though. I’ve seen it work wonders in my own code and in my organization.

    Reply

Leave a Reply