We live in an online, wired world. The digital and physical worlds are so intertwined that almost nothing can happen unless the relevant applications are up and running. You would think that as software applications become indispensable, they would likewise become more dependable. Instead, the rising tide of systems complexity and interdependency is making enterprise application problems inevitable, intractable, and increasingly elusive.
“In the past year alone, we saw Lufthansa cancel the flights of over 3,000 booked passengers due to a computer fault in the check-in systems. We’ve seen insufficient testing lead to the failure of Nike’s i2 demand-forecasting system in 2000, costing the company more than a forecasted $100 million in sales. We’ve seen a failure to test for specific conditions contribute to the August 2003 blackout that affected the Northeastern United States and Canada. And beginning in June of last year, bugs in connections between Hewlett-Packard’s legacy order-entry system and SAP systems caused a backlog of customer orders, ultimately costing the company $40 million in lost revenue (CIO, 11/15/05).”
In fact, software errors, or bugs, are so prevalent and so detrimental that, according to a study published in 2002 by America's National Institute of Standards and Technology (NIST), software bugs cost the American economy $60 billion a year, or about 0.6% of its gross domestic product.
Missed deployment and delivery dates, budget overruns, failure to comply with industry regulations, interrupted workflows, and frustrated customers are the natural byproducts of application flaws. The current approach to resolving application problems, and to ensuring software quality throughout the entire useful life span of an application, is simply not getting the job done. The time has come for a sea change in how we manage problem resolution and ensure quality for complex new composite applications, as well as for aging legacy systems.
The Current Zeitgeist: Testing = Quality
Researchers at Carnegie Mellon University estimate that programmers make between 100 and 150 errors per 1,000 lines of code. A plethora of similar research has clearly established the legitimacy of Quality Assurance as a fundamental life cycle process. Gone are the days when the length of the quality assurance phase was axiomatically the number of days between the end of coding and the release date. There are still some laggard organizations that task the QA team with “making up” time lost in development, but the zeitgeist in software quality is still rooted in the concept that quality can be “tested in” to software applications. There are some solid economic roots to that thinking. In his 1981 classic, Software Engineering Economics, Barry Boehm published what was, at the time, eye-opening data demonstrating that the cost of removing a software defect grows exponentially with the phase of the development life cycle in which it is discovered.
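The economics behind Boehm's finding can be made concrete with a short sketch. The base cost and phase multipliers below are hypothetical round numbers chosen for illustration only; they are not Boehm's published figures, but they capture the shape of the curve: a defect that slips from requirements review into production can cost two orders of magnitude more to remove.

```python
# Illustrative sketch of the escalating cost-to-fix curve.
# NOTE: BASE_COST and PHASE_MULTIPLIERS are assumed values for
# illustration, not figures from Boehm or from this article.

BASE_COST = 100  # hypothetical dollars to fix a defect caught in requirements

PHASE_MULTIPLIERS = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(phase: str, base: float = BASE_COST) -> float:
    """Estimated cost to remove a defect discovered in the given phase."""
    return base * PHASE_MULTIPLIERS[phase]

# Print the cost of fixing the same defect at each successive phase.
for phase in PHASE_MULTIPLIERS:
    print(f"{phase:>12}: ${fix_cost(phase):,.0f}")
```

Under these assumed multipliers, the same defect that costs $100 to fix during requirements definition costs $10,000 to fix once the application is in production, which is the economic argument for building quality in early rather than testing it in late.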
Forward-looking organizations realize that the new paradigm for life cycle application quality is to build quality assurance into every phase of the development life cycle. There is no doubt that applying more rigour to the definition of business requirements and investing more in design reviews will result in software applications that are better aligned with business needs (one dimension of total software quality). But many organizations are still locked in a struggle to ensure that the software works as designed and performs as expected after deployment. To address that challenge, they are optimizing their application life cycle quality practices.
Optimizing Life Cycle Quality Practices
Optimizing quality across the application life cycle requires the integration of people, process, and technology within an optimal organizational structure to deliver a very straightforward result: helping the development process deliver quality software faster, at a lower total cost, and maintaining high quality levels throughout the application's productive life.
Leading companies are making organizational decisions to create separate QA organizations; they are elevating the role and stature of the QA engineer; and they are implementing best-practice processes with rigour. It is surprising, however, that despite the maturity and wide availability of testing technologies, the technology dimension of many QA