Knowing the overall health of the build or product is far more useful than knowing that you have ten unresolved bugs. Those ten bugs could be symptoms of something serious within the system, or they could be just minor text and UI changes that need to be implemented.
What’s wrong with bug counts?
The other day I was speaking with the product owner of an interesting project. He was complaining about the number of logged bugs that were essentially duplicates. We talked for a bit about the kinds of bugs he was seeing, and I asked how the testers were measured. The answer: bug counts. That metric had encouraged behavior that focused attention on entirely the wrong things. The testers were concentrating on easy-to-find, visible UI bugs that could be logged separately per screen or per event to keep their counts up. They were not focusing at all on gathering information about the health of the product as a whole, because those tests take time, patience, and intricate thinking to invent and run. Yet it is precisely those tests that provide the most valuable information when they uncover issues. Used as a measure in this way, the metric was driving exactly the wrong behavior.
The meaning in the metrics
Metrics come in all shapes: velocity; bug counts; code coverage; cycle time; live releases per sprint, week, or month; and so on. All of these can provide valuable information to the team and create visibility into things that would otherwise stay unknown. But if the focus of a metric is only on how much, how many, or how often, I believe its value to the team diminishes. The wrong behaviors are rewarded and encouraged, and far too often trust suffers. In my experience it is far more useful, and far less dysfunctional, to pay attention to the meaning and the nuances rather than just the numbers.
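To make the "numbers versus meaning" point concrete, here is a minimal sketch in Python. The bug records, the severity weights, and the health_score function are all hypothetical, invented purely for illustration: two builds with identical raw bug counts can represent very different levels of product health once duplicates are collapsed and severity is weighed.

```python
from collections import Counter

# Hypothetical severity weights; a real team would calibrate its own.
SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 1}

def health_score(bugs):
    """Summarize a bug list by deduplicated, severity-weighted impact.

    Each bug is a (summary, severity) tuple. Duplicate summaries are
    collapsed, so logging the same UI glitch once per screen does not
    inflate the score the way it inflates a raw count.
    """
    unique = {summary: severity for summary, severity in bugs}
    weighted = sum(SEVERITY_WEIGHT[sev] for sev in unique.values())
    return {
        "raw_count": len(bugs),
        "unique_bugs": len(unique),
        "weighted_impact": weighted,
        "by_severity": dict(Counter(unique.values())),
    }

# Two builds, each with a raw count of ten bugs...
build_a = [(f"Label misaligned on screen {i}", "minor") for i in range(10)]
build_b = (
    [("Data loss on save", "critical")] * 2   # duplicate report
    + [("Crash on login", "critical")]
    + [(f"Report total wrong, case {i}", "major") for i in range(7)]
)

print(health_score(build_a))  # ten trivial UI bugs: weighted impact 10
print(health_score(build_b))  # same raw count, weighted impact 55
```

The point of the sketch is not the particular weights, which any team would argue about, but that a one-dimensional count collapses all of this meaning into a single number that rewards exactly the duplicate, per-screen logging described above.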