This is part 2 of a three-part posting. If you have not read part 1, I strongly suggest you start there:
You will not test everything. You'll want to, but you won't.
In TDD the theory is that you always write the test first, observe its failure, then do the work to make it pass. Consequently, in a perfect world no line of code would ever be written that was not also tested. It is not, unfortunately, a perfect world.
So you'll miss things, because you're human. When you do, defects may escape your notice, and some of them will likely make it into the delivered product. I wish I could tell you I knew how to prevent that completely, but I don't. Far fewer defects will end up in your customer's lap, but not zero. So what is the value of TDD in this case?
It is huge.
First, when a customer encounters a bug, they will hopefully report it back to you. Most organizations have some mechanism for this: an 800 number or a website, something. When such a report is made, a "trouble ticket" or similar artifact is generated and assigned to a developer. Normally the job is now to do two things: 1) Find the cause of the problem, either by attempting to replicate it, by running the system in debug mode, or by using any number of techniques we have created for this purpose. 2) Once found, the issue is fixed, the fix is confirmed, and the ticket is closed.
Not in TDD.
In TDD a defect that makes it into the product is not a bug. It is a missing test. It is the test that should have been written but was not. It is the test that would be failing now, but because it doesn't exist the product was released in ignorance. Job #1 is to figure out what test was missed, and while this may involve many of the same activities as before, the goal is different. We want to write the test, run it, and watch it fail since we have not yet addressed the defect at all. That failure almost completely confirms that yes, indeed, we found and created the missing test. Almost.
Now we fix the problem and run the tests again. The single red one should now go green, and because of the way we made that happen we now have complete confirmation that it was the right test. But we also run all the other tests too. Always. Because if they stay green this also confirms that the action we took to fix the bug didn't, in turn, create another one. Any developer will tell you what a nightmare that can be: you fix one thing and break another, you fix that and break something else, and down the rabbit hole you go. TDD will tell you that this has not happened or, if it has, exactly where and why. Immediately.
One more, very important thing:
The actions we take are not all that different from tradition except that they end up producing a test, and the test is kept forever. Normally when a bug is fixed it is quite possible that it will come back later, because someone working on the system inadvertently re-introduces it. That's how it got there in the first place after all. But if you follow the TDD process, the bug will never come back because we never release software with a failing test. So whatever effort this involves produces permanent value. I know of no other way to make that happen.
Stay tuned for answer #3.