Say Bye to Testing Metrics in Agile Teams
Stop counting test cases and bug reports in development! They aren’t as helpful as you might think. There is no direct correlation between these counts and delivered value or quality.
This perceived correlation reminds me of the movie The Princess Bride, in which the character Vizzini keeps using the word "inconceivable" incorrectly. Just as Vizzini needed someone to point out his mistake, I am here to do the same for you as you consider the transition to agile development.
Faulty Counting Assumptions
The project manager's view says that if we have 10 things to do and we know roughly how long each one takes, then we can estimate the total time, cost, and effort to complete the project. That makes sense--for certain kinds of tasks, that is.
Now apply this logic to the testing phase of a development project. We count test cases to get an idea of how long it will take to assess the quality of an application or system under test (SUT). There are many problems with the assumptions underlying that statement. Here are a few:
- There is no direct correlation between the quality of the delivered system and the number of test cases executed.
- The real assessment of system quality happens between the test case runs, not during them.
- That is, assessing system quality requires human discussion,
"I once took a cab to a drive-in. The movie cost me $190." - Stephen Wright |