Do I Really Have to Test Everything? (part 2)

From the Sustainable Test-Driven Development Blog
Test-driven development is a very powerful technique for analyzing, designing, and testing quality software. However, if done incorrectly, TDD can incur massive maintenance costs as the test suite grows large. This is such a common problem that it has led some to conclude that TDD is not sustainable over the long haul. This does not have to be true. It's all about what you think TDD is, and how you do it. This blog is all about the issues that arise when TDD is done poorly—and how to avoid them.

Categories: TDD

This is part 2 of a three-part posting.  If you have not read part 1, I strongly suggest you start there:

Answer #2:
You will not test everything. You'll want to, but you won't.  

In TDD the theory is that you always write the test first, observe its failure, then do the work to make it pass.  Consequently, in a perfect world no line of code would ever be written that was not also tested.  It is not, unfortunately, a perfect world.

So you'll miss things, because you're human.  When you do, defects may escape your notice, and some of them will likely make it into the delivered product.  I wish I could tell you I knew how to prevent that completely, but I don't.  With TDD, far fewer defects will end up in your customer's lap, but not zero.  So what is the value of TDD in this case?  

It is huge.

First, when a customer encounters a bug, they will hopefully report it back to you.  Most organizations have some mechanism for this: an 800 number, a website, something.  When such a report is made, a "trouble ticket" or similar artifact is generated and assigned to a developer.  Normally the job is now to do two things: 1) find the cause of the problem, by attempting to replicate it, by running the system in debug mode, or by using any of the other techniques we have created for this purpose; 2) once the cause is found, fix the issue, confirm the fix, and close the ticket.

Not in TDD.

In TDD a defect that makes it into the product is not a bug.  It is a missing test.  It is the test that should have been written but was not.  It is the test that would be failing now, but because it doesn't exist the product was released in ignorance.  Job #1 is to figure out what test was missed, and while this may involve many of the same activities as before, the goal is different. We want to write the test, run it, and watch it fail since we have not yet addressed the defect at all. That failure almost completely confirms that yes, indeed, we found and created the missing test. Almost.

Now we fix the problem and run the tests again.  The single red one should now go green, and because of the way we made that happen we now have complete confirmation that it was the right test.  But we also run all the other tests too.  Always.  Because if they stay green this also confirms that the action we took to fix the bug didn't, in turn, create another one.  Any developer will tell you what a nightmare that can be: you fix one thing and break another, you fix that and break something else, and down the rabbit hole you go.  TDD will tell you that this has not happened or, if it has, exactly where and why.  Immediately.
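As a concrete sketch of this red-green cycle (the function, the numbers, and the defect here are all invented for illustration), suppose a customer reports that orders of exactly 10 items get no bulk discount:

```python
# Hypothetical example of "a defect is a missing test".

def discounted_price(price: float, quantity: int) -> float:
    """Apply a 10% bulk discount to orders of 10 or more items."""
    # The buggy original read `if quantity > 10:`, so orders of
    # exactly 10 items missed the discount -- the escaped defect.
    if quantity >= 10:
        return price * quantity * 0.9
    return price * quantity

def test_discount_applies_at_exactly_ten_items():
    # The missing test: run against the buggy code it fails (red);
    # after the one-character fix it passes (green).
    assert discounted_price(10.0, 10) == 90.0

def test_no_discount_below_ten_items():
    # An existing test, re-run with the rest of the suite to confirm
    # the fix did not break anything else.
    assert discounted_price(10.0, 5) == 50.0
```

The point is the order of operations: the new test is written and observed to fail before the production code is touched, and the whole suite is run after the fix.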

One more, very important thing:

The actions we take are not all that different from the traditional ones, except that they end up producing a test, and the test is kept forever.  Normally, when a bug is fixed, it is quite possible that it will come back later, because someone working on the system inadvertently re-introduces it; that's how it got there in the first place, after all.  But if you follow the TDD process, the bug will never come back, because we never release software with a failing test.  So whatever effort this involves produces permanent value.  I know of no other way to make that happen.  

Stay tuned for answer #3.

Posted on: November 29, 2022 11:47 AM

Comments (3)

Dear Scott,
This topic is very interesting and prompts reflection and debate.
Thank you for sharing your opinions.
I will follow your articles closely.

TDD requires automated testing. Without that, your testing becomes too onerous and laborious.

Because of this, your test scripts become code in their own right. You then have to worry about your test scripts being bug-free.

TDD at the developer level does require automation.  Acceptance tests are still a good idea, and they have value even if you execute them manually.  That said, your point about buggy test code is salient; however, we find that a test written first, observed to fail, and then made to pass without altering it in any way (boilerplate TDD) mitigates this problem to a large degree.
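To illustrate that mitigation (with invented names and numbers), consider a buggy test that can never catch the defect it was written for. Observing the test fail first exposes it:

```python
# Hypothetical sketch: why "watch it fail first" guards against
# bugs in the test itself.

def apply_discount(total: float) -> float:
    # Production code still containing the reported defect:
    # the intended 10% discount is never applied.
    return total

def test_discount_applied():
    # A buggy test: it asserts the pre-discount value, so it can
    # never detect the defect.
    assert apply_discount(100.0) == 100.0  # should expect 90.0

# Boilerplate TDD runs this test BEFORE touching the production code
# and expects red. Seeing green here is the signal that the test,
# not the production code, needs attention.
```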



"I would never die for my beliefs, cause I might be wrong."

- Bertrand Russell