Testing for Knowledge

Summary:

In a number of manufacturing industries, the quality function has become an integral part of the business. The software quality movement has not achieved nearly that level of success, but the pressure to do so is building. The obstacle is cost, particularly the cost of delivering that improved software quality. Since the beginnings of our industry, we have emulated the quality movement in hardware manufacturing; unfortunately, that path will not lead us to the success they found. Software development and hardware manufacturing are different. Our goal is the same, but the path to success is different. To achieve the success we seek, we must set our own path and expand the testing objective from "test to fail" to "test for knowledge."

Winners vs. "Also-Rans"
The success of the hardware quality movement, or rather, our inability in the software quality movement to achieve a similar level of success, is frustrating. In the automotive, computer, and consumer electronics industries, the quality function has become an integral part of the business with recognized value. We, on the other hand, continue to be treated in too many cases as a "nice-to-have," but not really necessary, function. The difference is easy to explain. Other industries have been able to show a cause-and-effect relationship between their efforts and decreased costs and increased productivity—we have not done so as definitively. The difference is particularly frustrating when we look at the computer hardware industry. The quality function enjoys a recognized role in that industry's success. But computers are little more than hard-wired software. How have they succeeded when we have not? The difference is simple; they had a manufacturing problem and we do not.

The factory gave the hardware quality movement a place to demonstrate initial, small-scale successes. Their successes were based on two fundamental truths about the manufacturing process: variability is inherent and variability drives cost. If you can drive down variability in the manufacturing process, you might drive down cost, specifically unit cost, with an emphasis on "might." The cost of discovering the nature, source, and amount of inherent variability is high. The cost of reducing that variability is high. The decrease in unit cost is low. But when you manufacture enough units, the balance tips in favor of reducing variability. The key is the number of units produced. In businesses where manufacturing volumes were high enough, the math worked. Initial successes within the factory led to opportunities for continuous improvement initiatives involving the design, development, and purchasing functions. The math is why process improvement initiatives succeeded in the high-volume automotive, consumer electronics, and computer hardware industries while they have not succeeded to nearly the same level in other, lower-volume manufacturing operations.

It's also why process improvement initiatives have not led us to success in the software quality movement. In software, we don't have a manufacturing problem; we have a development problem. In software, process improvement initiatives start in development and stay in development. They drive up costs—costs that can't be recouped by savings in manufacturing because our initiatives do not, cannot, impact manufacturing costs. Because we have a different problem, we must take a different path to find success. The pressure to do so is growing.

The Challenge of Web-Based Opportunities
Businesses today see opportunities to materially reduce operating costs and increase operating efficiencies in Web-based B2C e-commerce, B2C and B2E self-service, and B2B c-commerce applications. Seizing these opportunities means producing applications that have increased levels of reliability, performance, and security to accommodate a new, worldwide base of users. The challenge is to deliver these additional levels of quality through improved productivity rather than increased costs.

The amount and type of testing that's done before software is deployed has always been driven by the size and nature of the intended user community. The early users of computers did not demand much from software other than that it produce the right answer when used correctly. For this user base, "test to pass" was the appropriate testing objective. It met the needs of the business. The 1970s saw an explosion in the number and types of computer users as microprocessors, networks, and video display terminals materially decreased costs, increased accessibility, and brought computing to the general business community. But the new user community did not welcome computers as earlier users had. By the late 1970s, it was clear that the quality of software had to be improved; otherwise, the improvements in productivity and profitability that management sought to achieve would be undermined. More failures had to be found and fixed before software was deployed to the general business community. The testing objective and the associated cost of testing were allowed to expand to "test to fail" to accommodate the needs of the business.

Web-based opportunities to achieve new levels of productivity and profitability bring another expansion of the user community. The expanded user base is, in the aggregate, less experienced, less tolerant of system errors, and less patient than the user community we've served in the past. Reliability and performance have increased in importance, as customers are now only a click away from the competition.

At the same time, the new user base contains a malicious element. Security has become a major issue. E-commerce sites have been hacked, costing their owners millions of dollars in lost revenue. Consumers have had their credit card information stolen and misused. Medical records have been accessed and made public. The list goes on. What's worse, the hackers known to be responsible for the damage inflicted to date have had relatively benign motives. Government experts tell us we need to be ready for attacks from terrorists intent on bringing down our entire way of life. Software quality becomes a matter of survival.

It's not that we don't know how to deliver increased levels of reliability, performance, and security. It's the cost. As an example, Microsoft recently shut down Windows development, sent all the programmers to a week of training, and spent the rest of the month finding and fixing security-related bugs in existing code. Assuming an estimated average ratio of six programmers to one tester, that month of effort amounts to a fifty percent increase in Microsoft's Windows testing budget this year. At the same time, it decreased programmer productivity by eight and a half percent by taking the programmers away from programming for a month. Programmer productivity will also take an additional, long-term hit as a result of the ongoing additional effort to code in security in future product releases. Given the effort being applied, the number of new test cases being generated and added to the standard regression test suite must be huge. Testing costs will also take a long-term hit. Add it all up, and that's a nontrivial number. And that's just for security.
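
For readers who want to check the arithmetic, here is a rough back-of-envelope calculation under the assumptions above; the figures are illustrative estimates, not Microsoft's actual numbers.

```python
# Back-of-envelope check of the estimates above, under the stated assumptions:
# a 6:1 programmer-to-tester ratio and roughly one month diverted to security work.
programmers_per_tester = 6
months_per_year = 12
diverted_months = 1

# Each tester normally supplies 12 tester-months of testing per year. The push adds
# 6 programmer-months of test-and-fix effort per tester, i.e. about a 50% increase.
testing_increase = (programmers_per_tester * diverted_months) / months_per_year
print(f"testing effort increase: {testing_increase:.0%}")  # -> 50%

# Each programmer loses about one month of twelve, which is roughly the
# "eight and a half percent" productivity hit cited above.
productivity_loss = diverted_months / months_per_year
print(f"programmer productivity loss: {productivity_loss:.1%}")  # -> 8.3%
```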

While it's likely that Microsoft will find a way to spin this to their benefit, there's no new revenue in this for the typical business. To make matters more difficult, this happens at a time when businesses are particularly sensitive to costs. Chances are that IT budgets will not be increased to fund additional testing. Businesses are compelled to maintain and improve their profit margins. IT budgets, as a percentage of the business's revenue, will likely remain at or near current levels for the foreseeable future. Software quality must be improved to achieve important business opportunities. The obstacle is cost. We need to anticipate management's directives: Do more with less. Work smarter, not harder. We need to respond with "We can do that!"

Time to Change Paths
The expansion of the testing objective in the late 1970s signaled the rise of testing as a distinct, full-time occupation. We looked, quite naturally, at the advances taking place in the hardware quality movement led by Deming, Juran, and others, and we drew analogies. Unfortunately, most of these analogies were flawed, and still are. The barriers to productivity and profitability in software development are different than they are in hardware manufacturing. Following the path laid by our brethren in hardware will not lead us to the successes they found. We pursue the same goal, but we must take a different path.

That does not mean that our efforts at process improvement are wrong. They're not. We can point to process improvements in requirements management that decrease the probability that project costs will go out of control. We can point to process improvements using software test automation that improve software reliability by increasing test coverage. But we must face facts: in development, process improvements do not increase productivity or profitability. They provide insurance. Like any insurance, unless something goes wrong, the money spent on it simply adds cost. This is why focusing on process improvement has not increased the value and stature of our occupation.

For the software quality movement, success lies down a different path, one designed to address a different barrier than the one addressed in hardware manufacturing. The barrier to improved productivity and profitability in software development is not process variability. It's knowledge. Knowledge not just of tools and processes, but knowledge of how our systems work, how they work together, and how the business uses them. And testing, not process improvement, will lead us to that knowledge. The solution lies in expanding the testing objective from "test to fail" to "test for knowledge." The expansion requires change both in what we say about testing and in what we do while testing. These changes will provide the basis for a new perception of software quality efforts and for achieving new levels of productivity, first within testing and then across the development process.

To change the business's perception of the value we provide, we need first to change how we talk about our objective. Today we say, "We test to find defects in the software." But "test to fail" gives the impression that we have an on/off, pass/fail view of things. That's not very business-like. Instead we should say, "We test to ensure the software and the documentation match. When they don't, we broker the discussion about which one to change. Change the software or change the documentation. When they match, we'll sign off." Our new job is to collect and disseminate knowledge; in this case, knowledge of whether or not the software and the documentation match. When they don't, we report it and make the business responsible for the cost decision. "How important are the specifics of this requirement to you? The software doesn't meet them all today. Do you want to spend more money to have the developers rework the software and have us check it again? Or would you rather modify the requirement?"

Next, we need to change what we do. To achieve the success we seek, we must demonstrate that our efforts improve productivity and, as a result, profitability. Automation plays an important part in improving productivity, in our case as in most others. Many of us use software test automation today. It allows us to do more testing in the same amount of time. We must expand the use of that automation, not to do more testing, but to gain knowledge that will let us do less. By driving trace utilities with our existing software test automation tools, we can discover the program modules executed by each user transaction with little or no additional effort or cost. Stored in a database, this knowledge can be queried from both directions. We can ask, "Which modules does transaction number 1 use?" and discover that it uses modules X, Y, and Z. We can also ask, "Which transactions use module X?" to discover that transaction numbers 1, 7, and 23, and only those transactions, use that code. When module X is modified, we can safely limit our testing to only those transactions. Contrast that approach with the fuller set of regression tests we typically run today based on our lack of knowledge of the system's inner workings. "Testing for knowledge" will allow us to safely do less testing in less time. That's quality and productivity improvement.
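
To make the idea concrete, here is a minimal sketch, in Python, of what such a coverage knowledge base might look like. The trace data, module names, and helper functions are invented for illustration; they are not the output of any particular trace utility or test tool.

```python
# Minimal sketch of a "test for knowledge" coverage database, built from
# hypothetical trace output that maps each user transaction to the modules
# it executed. All names and data here are illustrative assumptions.
from collections import defaultdict

# Hypothetical result of driving a trace utility with automated tests:
# transaction id -> set of modules executed.
trace_log = {
    "txn_1":  {"module_X", "module_Y", "module_Z"},
    "txn_7":  {"module_X", "module_Q"},
    "txn_23": {"module_X"},
    "txn_30": {"module_Y"},
}

# Build the reverse index so the knowledge base can be queried in both directions.
module_index = defaultdict(set)
for txn, modules in trace_log.items():
    for module in modules:
        module_index[module].add(txn)

def modules_for(transaction: str) -> set:
    """Which modules does this transaction use?"""
    return trace_log.get(transaction, set())

def transactions_using(module: str) -> set:
    """Which transactions exercise this module?"""
    return module_index.get(module, set())

# When module_X is modified, regression testing can safely be limited to the
# transactions known to execute it, instead of re-running the full suite.
print(modules_for("txn_1"))            # {'module_X', 'module_Y', 'module_Z'}
print(transactions_using("module_X"))  # {'txn_1', 'txn_7', 'txn_23'}
```

In practice the mapping would live in a shared database rather than an in-memory dictionary, but those two queries are the heart of it.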

That same knowledge base can be used on subsequent projects involving that system to produce further cost improvements. The time the development group spends in the analysis and design phases to rediscover the system's inner workings can be reduced. This is typically a substantial component of the front-end effort on enhancement and maintenance projects. Part of those savings can be used to improve the requirements definition process. The knowledge can also be reused by the testing organization to decrease the cost of test planning. Test execution will enjoy even greater benefits than on the initial project, since the knowledge base already exists for the portion of the system not being modified. And if the requirements were improved, testing and rework will be reduced still further. That's continuous improvement.
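
Continuing the illustrative sketch above, a planning-time query against the same kind of stored coverage data could scope an enhancement project before any test execution begins; again, the module names and data are invented for illustration.

```python
# Given the modules an enhancement project is expected to touch, scope the
# regression effort from the (hypothetical) coverage knowledge base.
trace_log = {
    "txn_1":  {"module_X", "module_Y", "module_Z"},
    "txn_7":  {"module_X", "module_Q"},
    "txn_23": {"module_X"},
    "txn_30": {"module_Y"},
}
planned_changes = {"module_X", "module_Q"}

# Transactions that must be retested because they execute a changed module.
impacted = {txn for txn, mods in trace_log.items() if mods & planned_changes}

# Transactions whose existing results can be carried forward unchanged.
unaffected = set(trace_log) - impacted

print("retest:", sorted(impacted))           # ['txn_1', 'txn_23', 'txn_7']
print("carry forward:", sorted(unaffected))  # ['txn_30']
```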

"Testing for knowledge" addresses the real barrier to productivity improvement in software development: knowledge. It will allow us to deliver measurable productivity improvements "within the factory walls." It is difficult to overstate the importance of this. The quality movement in hardware manufacturing would not have succeeded in America had it not been able to demonstrate this. Downstream impacts like reductions in help desk calls or improvements in customer satisfaction are difficult to forecast and costly to measure. Business leaders are justifiably reluctant to make investments based on "iffy" forecasts. In software development, as it was in hardware manufacturing, they will make larger investments based on initial successes. Success for the quality movements in both hardware and software amounts to the same thing: delivering improved productivity and profitability for the business. The difference lies in who can deliver the initial successes. In hardware, it was the QA function that implemented process improvement. In software, it will be the QC function, "testing for knowledge."How "Testing for Knowledge" Benefits the Testing Profession
The change in our conversation will have positive impacts on perceptions about our contribution. Today, the testing function is perceived as driving the cost of the testing phase. Once we change the conversation, the business becomes responsible for deciding what costs to incur for each defect. Their perceptions about the cost of testing will change as they continually confront the fact that a major cost driver in the test phase is development rework. There is a big difference in the way most people react to the statement "it failed" versus "it doesn't match." The change in our conversation removes any hint of an adversarial relationship with either development or the business. We simply report the facts and function as a knowledge broker. And, perhaps most importantly and without ever having to ask it out loud, we make visible the most important question of all. When the business decides that a requirement isn't important enough to spend the money to rework and retest the related code, it raises the question, "Why did we spend the money to have the code developed and tested in the first place? Why didn't we firm up requirements before we started coding?" Over time, if we keep a record and make it publicly available, these cost metrics will drive the transformation of the software development process to everyone's benefit.

The change in what we do has positive impacts, both direct and indirect, on our value to the business. Testing for knowledge allows us to reduce the amount and cost of testing we do. It improves our productivity on the current project and decreases costs in both test planning and test execution on future projects involving the same system. Those benefits have a direct and positive impact on our value to the business. "Testing for knowledge" can increase the productivity of the development organization in the testing phase by providing knowledge that helps them debug problems. It can also increase their productivity in the analysis and design phases of future projects by providing knowledge of how the system currently works. These benefits increase our value to the development organization and, through them, to the business again.

Finally, testing for knowledge produces a tangible asset. That's important. As the business begins to recognize that both the testing and development groups are utilizing the knowledge base to improve productivity and decrease costs, this knowledge partnership will prove its own value. The testing approach becomes tied directly to the final product. We will know we've arrived when the conversation ends not with "Did the software pass?" but with "Has the knowledge base been updated?"

Summary
Testing for knowledge extends the current "test to fail" objective to focus on the true source of productivity in software development: knowledge. Testing, beyond process improvement, can deliver the knowledge we need to demonstrate the initial successes the business requires. Business is business, and in this, there is no real difference between hardware and software. Investments start small and get bigger, based on successes where the return can be tied directly, in a cause-and-effect relationship, to the investment. The difference between software and hardware is which quality function can deliver those successes. In hardware manufacturing, it was quality assurance implementing process improvements. In software development, it will be quality control "testing for knowledge."
