What would be the best way to show the value QA provided to a project, such as in a post-mortem QA document?

Manuel Trinidad

I have been tasked by my company to come up with a single-page document that shows the "value" that QA provided to the project. Basically, we need to show how we have saved and/or made the company money on any given project.

I have things like the number of hours tested, whether we went over budget, the number of defects found, etc., but what are your suggestions?

How do I show the value of the defects found and come up with a comparison to what it would cost to fix these things in the field?

 


3 Answers

Lisa Anderson

The best way to measure QA's effectiveness is to take the number of bugs found after shipping (by tech support) and divide that by the number of bugs found during development. For example: 500 bugs were found during development and 25 new bugs were found after shipping. 25 divided by 500 = 0.05, or 5%. You should strive to get that number under 2%. That 2% represents the bugs that were not found during development. Do a root cause analysis on those bugs and fix the underlying problems.

Don't try to get that number to 0%. The idea is not to ship perfect software; it's to ship software where you know exactly what it does and what it doesn't do.
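Expressed as a quick calculation (a minimal sketch in Python, using the hypothetical counts from Lisa's example rather than real project data):

```python
# Defect escape rate, following Lisa's example.
# These counts are the hypothetical figures from her answer, not real data.
bugs_found_in_development = 500
bugs_found_after_shipping = 25   # reported by tech support post-release

escape_rate = bugs_found_after_shipping / bugs_found_in_development

print(f"Escape rate: {escape_rate:.1%}")  # 5.0% here; the suggested target is under 2%
```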

Rick Grey replied on June 22, 2013 - 11:28am.

I agree with Lisa that you want to understand what you're shipping. I think of testing as a risk information service, helping stakeholders make informed decisions (like whether or not to ship).

Numbers without context don't tell the whole story, though. Another way to demonstrate value is to figure out whether the test team is finding the bugs that *matter*.

Consider the following comparison of two hypothetical project outcomes. For the purposes of the comparison, let's say that the only variable between the two scenarios is the specific bugs found--the team members, lines of code, business purpose, method of bug review for fix/backlog, etc. are otherwise identical.

 

Scenario One: finding the bugs that matter

 - 500 bugs found during development

   - 480 fixed prior to release

   - 20 put in the backlog

 - 25 bugs found after shipping

   - 0 required immediate service packs

   - 25 put into the backlog 

 

Scenario Two: not finding the bugs that matter

 - 500 bugs found during development

   - 350 fixed prior to release

   - 150 put in the backlog

 - 25 bugs found after shipping

   - 20 required immediate service packs

   - 5 put into the backlog 

 

[Note that the total number of bugs in the product was *not* 525; it was some much higher number. That's how the 500 bugs found during development could end up with different prioritization (fixed vs. backlog) across the two scenarios--each team found a different set of 500.]

We've all worked in environments with bugs in the backlog that are of such low priority that they'll never be fixed. So was it worth the effort to find, replicate, report, triage, and then backlog those bugs? Which of the 150 bugs that were backlogged in Scenario 2 were really worth finding? 

This is an oversimplification, of course, but hopefully it makes the point.
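If you want to turn a comparison like this into a rough number, one option is to weight each outcome by a relative cost. The sketch below does that for the two scenarios; the weights (a pre-release fix as the unit of cost, an emergency service pack at 25x, a backlogged bug at a fraction of a fix) are invented purely for illustration:

```python
# Rough, illustrative cost comparison of the two hypothetical scenarios above.
# The weights are invented; substitute estimates from your own organization.
COST_PER_PRE_RELEASE_FIX = 1.0    # baseline unit of cost
COST_PER_SERVICE_PACK = 25.0      # emergency post-release fix and redeployment
COST_PER_BACKLOGGED_BUG = 0.2     # find/replicate/report/triage overhead only

scenarios = {
    "One (found the bugs that matter)": {
        "fixed_before_release": 480, "backlogged": 20 + 25, "service_packs": 0,
    },
    "Two (missed the bugs that matter)": {
        "fixed_before_release": 350, "backlogged": 150 + 5, "service_packs": 20,
    },
}

for name, s in scenarios.items():
    cost = (s["fixed_before_release"] * COST_PER_PRE_RELEASE_FIX
            + s["service_packs"] * COST_PER_SERVICE_PACK
            + s["backlogged"] * COST_PER_BACKLOGGED_BUG)
    print(f"Scenario {name}: relative cost {cost:.0f}")
```

With these made-up weights, Scenario Two costs roughly 80% more than Scenario One even though both teams found exactly 525 bugs--which is the point: the raw bug count alone doesn't capture the value.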

If your company wants you to show "value," first figure out what *they* "value" and map your answer to that as effectively as you can. It will be easier for them to understand your value if you have congruence there. You call out money as the measure of value in your question, but make sure that's really what they want to know about the test effort.

If it is really just about the money, then one thing to look at, as suggested by the scenarios above, is whether you can come up with some rough numbers around the support/development/test/management/devops cost to deliver service packs. The cost in dollars is what you've saved the company from having to spend. The cost in hours is time you've freed the company to focus on money-making activities. The company should already understand the revenue they hoped to generate with the development hours they'd already planned to devote to new features and products.
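To make that concrete, here's a back-of-the-envelope sketch; every hourly rate and effort figure below is a placeholder to be replaced with your organization's actual numbers:

```python
# Back-of-the-envelope estimate of the cost of an avoided service pack.
# All rates and hours are placeholders, not industry benchmarks.
HOURLY_RATES = {            # fully loaded cost per hour, by role
    "support": 40, "development": 75, "test": 60, "management": 90, "devops": 70,
}
HOURS_PER_SERVICE_PACK = {  # estimated effort per role for one service pack
    "support": 120, "development": 200, "test": 80, "management": 40, "devops": 60,
}

cost_per_service_pack = sum(
    HOURLY_RATES[role] * HOURS_PER_SERVICE_PACK[role] for role in HOURLY_RATES
)

# e.g. field-critical defects QA caught before release, per the scenarios above
service_packs_avoided = 3

dollars_saved = service_packs_avoided * cost_per_service_pack
hours_freed = service_packs_avoided * sum(HOURS_PER_SERVICE_PACK.values())

print(f"Cost per service pack: ${cost_per_service_pack:,}")
print(f"Estimated savings: ${dollars_saved:,} and {hours_freed} hours freed up")
```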

mike rucker

This is a tough question, and it's tough because a lot of people still work in the paradigm where there are programmers and testers, and never the twain shall meet. In fact, I'd go further and say that the prevalent view is that a programmer does all the work, and a test resource has to show documented bugs to justify his paycheck. With agile, those of us who wear a "QA" hat are more focused on *preventing* bugs in finished code than on *finding* bugs in finished code. If I'm primarily responsible for testing during a sprint, I'm working all day with development: getting early looks at the code, suggesting unit tests to add, doing exploratory testing, etc. Little of this activity results in documented bugs. If your methodology has a hardening sprint where no new content is delivered, then perhaps there is a regression-testing pass for which there are traditional bug metrics. But the "value" that a "test" resource provides is really no different from the value that any other scrum team member provides: a functional system implemented into production.
