Real Money: Poor Software Testing Practices Cost US Companies $59 Billion

According to a new government report, inadequate software testing costs the US economy $59.5 billion a year. How's that for proof that software testers perform a vital service? If you'd like to know how that number was derived, read on as Linda Hayes unwraps some of the methodology behind the study.

Editor's Note: The editors at StickyMinds saw this report and asked Linda to write a column about it. Special thanks to her for tackling the lengthy NIST report and delivering this column on short notice.

If you have been struggling to make a business case for more and better testing, you now have an ally in the National Institute of Standards and Technology (NIST), part of the Department of Commerce's Technology Administration. In its 300+ page report released in May of this year, NIST estimates that inadequate testing costs the US economy $59.5 billion a year.

As staggering as that amount is, I initially thought the number was too low, considering that USA Today previously estimated the annual loss at $100 billion. But then I realized that NIST was looking strictly at the costs to developers and users of finding and fixing defects, not at the costs to the business in the form of lost revenue or productivity. But whether it is $100 billion or $59.5 billion, as they say in Texas: a billion here, a billion there, pretty soon you're talking about real money!

Methodology
NIST surveyed two industries that are heavily dependent on technology: transportation manufacturing and financial services. The two represent hard and soft goods: transportation-manufacturing systems are used to produce tangible goods, while financial-services systems process electronic data. These differences account for some wide variations in the cost of defects, as you will see. The survey forms delivered to these industries are included in one of the report's appendices.

The report then proposes a taxonomy of costs for software developers and users. Developer costs include the labor to find and fix bugs, along with the supporting software, hardware, and external services (think consultants). User costs include the time and resources invested in selecting and installing software, plus the ongoing cost of detecting and repairing defects and any damaged data.
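To make that taxonomy concrete, here is a minimal Python sketch of how those cost buckets might be tallied. The category names follow the report's breakdown, but the structure and the dollar figures are my own illustrative assumptions, not part of the NIST methodology.

from dataclasses import dataclass

@dataclass
class DeveloperCosts:
    labor: float      # finding and fixing bugs
    software: float   # supporting tools
    hardware: float   # test machines and infrastructure
    services: float   # external help, e.g., consultants

@dataclass
class UserCosts:
    selection: float     # evaluating and choosing a package
    installation: float  # getting it up and running
    maintenance: float   # detecting/repairing defects and damaged data

def total_defect_cost(dev: DeveloperCosts, user: UserCosts) -> float:
    """Sum every defect-related cost across developers and users."""
    return sum(vars(dev).values()) + sum(vars(user).values())

# Hypothetical figures, in dollars, purely to show the arithmetic:
dev = DeveloperCosts(labor=250_000, software=40_000, hardware=30_000, services=60_000)
user = UserCosts(selection=15_000, installation=25_000, maintenance=120_000)
print(total_defect_cost(dev, user))  # 540000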

Findings
The report is extensive, and there isn't room here to show you all of the findings, so I'll mention a few of particular interest to testers. One caveat: for the purposes of the report, testers are lumped in with developers. Try to overlook that if you can; otherwise it may become distracting.

Early on, the report offers a definition of software testing that is—well, interesting. It says "Software testing is the process of applying metrics to determine product quality and the dynamic execution of software and the comparison of the results against predetermined criteria." This wording will probably launch the code review and walkthrough contingents into orbit. In the report's defense, though, it does go on to acknowledge the difficulty of identifying and measuring quality attributes. At least they noticed.

Next, it refers to a 1995 book that estimates the allocation of effort within the development cycle at 40% for requirements analysis, 30% for design, and 30% for coding and testing. I don't know about you, but I think these numbers are wishful thinking. It also estimates that programmers spend 10% of their time testing, while software engineers spend 35%. The distinction between programmers and engineers is not spelled out, but I must be used to working with programmers.

But the real meat of the report, at least in my opinion, is when it gets down to costs. First it offers the observation we have all heard before—the sooner you find defects, the less they cost to fix. It offers an example: 1X to fix a defect found during requirements and analysis, 5X in coding and unit test, 10X in integration and system test, 15X in beta test, and 30X post-release. I've seen estimates much higher (as much as 1,000X in production), but whatever. The point is made.
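To see what those multipliers mean in dollars, here is a quick Python sketch. The baseline cost and defect count are hypothetical numbers I've chosen to illustrate the arithmetic; only the multipliers come from the report's example.

# Escalation multipliers from the report's example, keyed by the phase
# in which the defect is found
PHASE_MULTIPLIER = {
    "requirements/analysis": 1,
    "coding/unit test": 5,
    "integration/system test": 10,
    "beta test": 15,
    "post-release": 30,
}

BASELINE_FIX_COST = 100  # hypothetical: dollars to fix one requirements-phase defect

def cost_to_fix(phase: str, defects: int) -> int:
    """Total cost of fixing the given number of defects found in a phase."""
    return defects * BASELINE_FIX_COST * PHASE_MULTIPLIER[phase]

# The same ten defects cost thirty times as much after release:
print(cost_to_fix("requirements/analysis", 10))  # $1,000
print(cost_to_fix("post-release", 10))           # $30,000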

About the author


Linda G. Hayes is a founder of Worksoft, Inc., developer of next-generation test automation solutions. Linda is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine's People to Watch and one of the Top 40 Under 40 by the Dallas Business Journal. She is a regular columnist and contributor to StickyMinds.com and Better Software magazine, as well as a columnist for Computerworld and Datamation. She is the author of the Automated Testing Handbook and, with Alka Jarvis, co-editor of Dare To Be Excellent, a book on best practices in the software industry. You can contact Linda at lhayes@worksoft.com.
