As the new development manager, I was involved in a rather dramatic decision meeting at Mondo-Corp (not its real name) about deploying a third-party customer care system. As the interrogation circled the table, each person said the deployment had to go ahead: the director and assistant from Customer Support, the vendor's representatives, two people from IT, and the IT director-my boss. The guy from corporate pointed to me last, I suspect by design.
"What do you think?"
"No way. If we do this, we're doomed."
"We haven't tested it enough."
"Why does that matter?"
I didn't say anything about code coverage, defect counts, or conformance to specification. Instead, I said:
"How sure are we that we can continue to run the business with this system? How many bill cycles, at $1,000,000 each, can we afford to have late? How late? How much of our revenue do we lose each day that billing is late? What's our cycle time fixing billing problems? Do we know this system can sign up two to three thousand new customers a month, as we must to maintain our customer base? Do we even know if we can run it and stay on line?"
I was talking about business risk based on our confidence in the system. The message was:
"Testing will increase our confidence in the correct operation of these valuable functions, and decrease the odds of a catastrophic system failure."
I was proposing to introduce a process change-more testing-into the deployment of this system. The investment turned out to be in the low hundreds of thousands of unplanned dollars over about six and a half months, and nobody quibbled. Because I talked about business risk, instead of talking about testing for testing's sake, I got all of the time I needed for what was really an extended system and functional test.
By the time we deployed, we had demonstrated that the critical functions (clean bill cycles, each kind of activation, signing up customers) would work.
How did we accomplish this? I was able to convince the executives at Mondo-Corp to invest in testing-even though they were under considerable pressure to "ship it now"-by helping them understand how testing was valuable to them.
When we (systems professionals, especially testers) try to justify testing, we often argue from a misconception. We assume that testing is the right thing to do. So that's the way we explain it: talking about how right it is. But executives faced with making the decision to support testing (or not) focus on testing as a business-process investment that is different from business as usual: it's not capital investment in the physical plant, or an acquisition, or any other everyday kind of way to spend money.
If you want to convince executives to sponsor testing, your justification must speak their language, follow their decision process, and present testing as a business-process investment.
Naïve attempts to justify testing often fail all three criteria. That's why we don't get the money or the time to "do it right."
Selling Testing to Sponsors
An executive faced with a potential business-process investment will seek to answer seven questions, more or less in this order:
1. What can this process do that nothing else can?
2. What kinds of value does it produce?
3. How much value does it produce?
4. How does it integrate with the rest of the business?
5. How much should we invest in it?
6. How will we monitor the implementation?
7. How will we know we're getting what we paid for?
To secure that investment, think of yourself as developing a proposal that will be evaluated in competition with every other good idea and funding request. And in your proposal, address each of those seven questions.
1 Understand Testing's Unique Capability
Testing produces information reliably grounded in the observed behavior of a system.
Executives want to know what they're getting from this process that they can't get from that process. Testing's unique capability (especially distinct from other "quality" efforts) is producing information reliably grounded in observed system behavior. Inspections, reviews, and development are based on anticipation of what the system might do. Testing is the only software quality activity that directly observes system behavior and reports what it does. Testing is valuable anytime you could benefit from direct observation, free of speculation.
Speculation is an integral part of development. In fact, all development is a kind of disciplined speculation. Analysis answers the question "What might be a valuable behavior, if we could create it?" Design answers the question "What might we do to support this behavior?" Implementation answers "What will realize the design?" Planning answers "How can we arrange activities to get what we want?" If this speculation were perfectly reliable, however, we'd never need to test and we'd never have a defect. I have a personal rule of thumb: Three or more inferences in sequence and you might as well flip a coin. Development is inferences stacked on inferences. It's surprising we get as much of it right as we do.
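To see why the rule of thumb is plausible, consider what happens when inferences are chained; the 80% per-step reliability below is my assumed figure, not a number from this story.

```python
# Illustrating the "three inferences and flip a coin" rule of thumb.
# The 80% per-inference reliability is an assumed figure.
reliability = 0.80
for steps in range(1, 5):
    print(f"{steps} inference(s) in sequence: {reliability ** steps:.2f}")
# 1 inference(s) in sequence: 0.80
# 2 inference(s) in sequence: 0.64
# 3 inference(s) in sequence: 0.51  <- about a coin flip
# 4 inference(s) in sequence: 0.41
```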
Organizationally, Testing is often grouped with Development. But they're not the same process; the outputs are different.
In systems design terms, grouping the functions of testing and development because they employ similar skills, tools, or timing is weak binding. Executives understand processes in terms of functional binding based on the processes' unique output. Identifying the output of a business process-what it's good for-traces naturally to justification. It's easier to understand, and to explain.
2 Identify the Kinds of Values from Testing
Testing produces value indirectly through information about valuable system behaviors.
We don't use a test; we use a system. Testing-justified by the information it produces-influences the value of the system we will use.
3 Quantify Testing's Values
Testing's value equals the sum of all of the changes in our confidence about valuable system behaviors.
The people who fund testing don't generally work with formulas involving sums and deltas, but that's exactly how they think about business processes. To them, testing's value is their increased confidence in all the things the system is supposed to do. This formula is the way to quantify testing's value with potential sponsors: "With this investment in testing, we're going to get [insert magnitude of increase here] more confident about this list of valuable assertions about the system." At an aggregate level, where processes operate, this formula is true.
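Spelled out as a formula (my notation; the article states it only in words):

```latex
% Testing's value V as the sum, over the set A of valuable assertions
% a about system behavior, of the change in confidence \Delta C_a
% that testing produces.
V_{\text{testing}} = \sum_{a \in A} \Delta C_a
% One could also weight each term by the value at stake for a;
% that elaboration is mine, not the article's.
```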
For presentation, remember that executive decision-makers appreciate summary tradeoff data. "Of 1,000 functions in the code that was tested, two failed in operations during the first month. Of 1,000 functions in the untested code, seventeen failed during the same period." Framed this way, the decision of whether to test a chunk of code is now a choice between two different frequencies of reported failure. Organizationally successful testers always represent the testing process in terms of the aggregate reasoning of management decisions.
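To make the aggregate framing concrete, here is a minimal sketch using the failure counts above; the cost-per-field-failure figure is a hypothetical placeholder, not a number from the article.

```python
# Tradeoff framing: tested vs. untested code, per 1,000 functions.
# Failure counts are from the example above; the cost per field
# failure is a hypothetical placeholder.
failures_tested = 2        # failures in first month, tested code
failures_untested = 17     # failures in first month, untested code
cost_per_failure = 10_000  # hypothetical

failures_avoided = failures_untested - failures_tested  # 15 per 1,000 functions
print(f"Failures avoided per 1,000 functions: {failures_avoided}")
print(f"Expected first-month savings: ${failures_avoided * cost_per_failure:,}")
```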
Our billing system back at Mondo-Corp is an example of this thinking. Initial attempts to run billing blew up 100% of the time. As we fixed these problems we ran multiple test cycles with different data, each one increasing the confidence that billing would run reliably. We also had categories of customers, and different types of bill items. Untested customer "types" billed correctly about 75% of the time. Untested bill item types billed correctly about 50% of the time. But after we had tested a customer type or a bill item type, over 90% of them billed correctly.
One particular customer type represented about half of our customers-and about half a million dollars per bill cycle. Testing that one type took the fraction of correct billing from 75% to almost 90%, representing $150,000 per month and 1,500 customers. The savings from avoiding rework, loss of good will, and various intangible costs were also significant. We could justify doing that test with no problem.
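As a back-of-the-envelope check on those figures: the 75%/90% rates and the half-million-dollar stake come from the story, while the two-cycles-per-month and customer-count inputs are assumptions chosen to make the arithmetic explicit.

```python
# Reconstructing the Mondo-Corp arithmetic. The 75%/90% rates and the
# $500,000-per-cycle stake come from the story; cycles_per_month and
# customers_of_type are assumed inputs.
value_per_cycle = 500_000    # value at stake for this customer type, per cycle
cycles_per_month = 2         # assumption
customers_of_type = 10_000   # assumption
before, after = 0.75, 0.90   # correct-billing rates: untested vs. tested

monthly_benefit = (after - before) * value_per_cycle * cycles_per_month
customers_fixed = round((after - before) * customers_of_type)
print(f"${monthly_benefit:,.0f} per month, {customers_fixed:,} customers")
# -> $150,000 per month, 1,500 customers
```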
But it gets even better. When you improve your testing skill or process disciplines, the odds of a passed test actually representing correct behavior in operation go up. At Mondo-Corp one tested "customer type" had about a 90% billing success rate. Of the 10% that failed, half were bad tests (addressed by getting better at testing), and half were configuration management problems (addressed by improving development processes). Fixing either one of those 5% areas of error would pay for one third of the entire testing process. Here again, we could justify the test, and significantly improved both areas of error.
The last question with this kind of approach is: "How do you know when you're done?" For billing at Mondo-Corp we considered process measures and costs. When the defect rate leveled off, we concluded that repairs were introducing as many problems as they solved-so no net progress would be made until the repair process itself was improved. Every month we didn't convert cost us several hundred thousand dollars. If the system improvement created by our testing didn't at least match that monthly cost, there was no point in doing it.
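That stopping rule reduces to a one-line comparison. A minimal sketch: the default delay cost stands in for Mondo-Corp's "several hundred thousand dollars," and the improvement estimate would come from your own numbers.

```python
# Stopping rule for the testing effort: keep testing only while the
# system improvement it buys at least matches the cost of delaying
# conversion for another month. All figures are illustrative.
def keep_testing(expected_monthly_improvement: float,
                 monthly_delay_cost: float = 300_000) -> bool:
    return expected_monthly_improvement >= monthly_delay_cost

print(keep_testing(450_000))  # True: testing still pays for the delay
print(keep_testing(120_000))  # False: convert now
```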
These are very heuristic, round numbers…but that's how executives think.
4 Describe How Testing Integrates with the Rest of the Business
Any business process in isolation is worth exactly nothing.
Testing must interact with development organizations to get a system to test, and with support organizations to get an environment to test in. Before producing any results, then, Testing already depends on two other parts of the organization. This is the kind of integration that executives want to see accounted for when you propose a new process: who do you depend on?
In addition, the executives want to know who's going to use what you will produce. Each result from testing must have a consumer in order to be valuable: either Development (where they're used to repair the system), or users (who can avoid problems), or decision-makers (who decide to ship or not). Bugs and fixes without consumers aren't very valuable. Reports of passed tests have to go to system users and sponsors, who use the data to make decisions about the system.
Nothing will turn off an executive faster than some eager beaver touting the value of information they've got no customer for. If you want to get approved, have a customer for the information you're using as the basis for your proposal. Even better, have your customers ask for it. This dependence on customers is more obvious with information about process improvement and new features. Providing this information if Development doesn't want it will only annoy the developers…and the executives who sponsored your proposal will hear what a jerk you are, from multiple sources. Remember, their job is to safeguard the health of the organization, and now you're messing that up by being a busybody. It doesn't matter how right you are if nobody wants to hear it.
To satisfy potential sponsors, a successful testing proposal should include process integration, identified customers, an existing interest in the information you will produce, and alignment of this information with the organization's strategy.
At Mondo-Corp testing was integrated into the organization as part of the weekly project meeting. Everyone around the table voraciously consumed the data from testing: bug reports, tests passed, potential features, and process data. Every week we went over what was working, what was broken, what we had tested, and how well Development was responding-to see if we could go live.
Our vendor was highly motivated to complete the system and get paid, so bugs mattered a lot to them. And bugs mattered on the customer side, as well. Every bug was discussed in detail and assessed with the question "Can we live with it?" Additional features were implemented only when repairs were in progress on the same code. In all other cases additional features were universally deferred (deferring them but not ignoring them actually made this easier). Finally, all parties were very focused on confirming that critical functions would work: clean bill cycles, each kind of activation type, and so on. At this point, we were the customers, and we judged our risk (and therefore our willingness to deploy) based on the system functions successfully passed.
I suspect that if you're not reviewing test results (bugs, tests passed, potential features, all of it) with your customers, with Development, and with process improvement groups, you don't have the customers you need to be successful. An executive will note that there is no market for all this information you propose to produce, and will decline your proposal.
5 Propose How Much to Invest in Testing
Testing isn't a moral issue; it's a proposal.
Here's the hard part about justifying an investment in testing. The "correct" investment in testing is completely situational.
Without knowing your situation, I can't tell you exactly what to propose. If you're just starting out, some of the more substantial investments may not pay off in your organization. For example, it has become routine for companies to make large investments in test automation, which they are not ready to exploit.
The approach that will convince sponsors is to present testing the way you would present any other process change. Remember that there isn't one true investment in testing. You are proposing to implement a process change at a particular level of investment. Looks remarkably like a development proposal, doesn't it?
At Mondo-Corp, I could have proposed unit, system, and integration testing back at the vendor's site, over the year and a half it would have taken to put those processes in place. (In fact it did take almost that long for them to put configuration management and release management in place.) And I would have gotten nowhere.
Instead, I backed into the proposal and suggested things that they believed we could do. And I focused on the timeline instead of the cash cost. We were losing money every day, so the question was "How fast and how reliably can you make this system viable?" To have gone for lowest cost would have been optimizing for the wrong thing.
Executives listening to a proposal will interpret your ideas in terms of what they believe is possible and which measures of cost matter to them.
By eliminating what was impossible (or believed to be impossible) and focusing on the measure of cost that mattered to my sponsors, we got the opportunity to do some good at Mondo-Corp.
6 Propose How to Monitor Testing Implementation
How will they know you are implementing the change?
By identifying how testing integrates with the rest of the business, you have identified how sponsors will monitor their part of the implementation. They will ask your customers-the people identified as needing and using the information you will produce-what information they are getting, and what they are doing with it.
Doing testing is a technical activity. Implementing testing, on the other hand, is a long-term commitment to adding (or changing) a business process, and then integrating that change with the rest of the business. It has to be tracked, just like any other project that encompasses both infrastructure and process changes. For your part, make sure your plan applies basic project management to the testing implementation.
7 Explain How to Monitor Testing's Performance
How will they know they're getting what they paid for?
Business managers manage to performance measures, or the company ends up in trouble. They talk about cost of sale, cost of lost customer, cost to resolve a complaint. For monitoring performance, we've got to make the connection between testing investment and similar operational measures.
The two fundamental measures of testing are the confidence it produces in tested assertions, and the scope of the assertions that can be tested. Performance measures describe how well this capability is supplied: cost, precision, and timing are a few examples. You don't have to get mathematical about it, but you do have to name what is going to get better.
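As one illustration, a named set of measures might look like the sketch below; every metric name and figure is an assumption about what an organization could track, mapped to the cost, scope, timing, and precision dimensions just described.

```python
# Hypothetical performance measures for a testing process, one per
# dimension named in the text: cost, scope, timing, and precision.
testing_performance = {
    "cost_per_tested_assertion_usd": 120,    # cost
    "assertions_testable": 850,              # scope of testable assertions
    "days_from_build_to_verdict": 3,         # timing
    "pass_verdicts_confirmed_in_ops": 0.95,  # precision: a pass means a pass
}
for measure, value in testing_performance.items():
    print(f"{measure}: {value}")
```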
Now that you've laid out your plan, the only thing left to do is deliver.
How Not to Justify Testing
We've discussed seven investment questions to answer for Management, and seen that to get investments in testing you've got to talk to business decision makers in their language, and follow their process. In my experience, technicians attempting to justify testing still make three common mistakes.
Mistake: The Endless Narrative
My former boss once explained why his boss was (much to my annoyance) micromanaging me. "You're new," he explained. "But over time you'll develop a track record with him, and he will ask for less detail. He's just trying to figure out if you can really get it done." I was used to Big Bosses getting MEGO syndrome (My Eyes Glaze Over) when I talked about work. I learned that MEGO was actually preferable to being micromanaged in this way.
The Big Boss was really seeking information so he could make his own assessment of whether I could do the job. If managers feel compelled to keep meddling in the details, that's a vote of no confidence in you. They're paying you to know how testing works. They are employing an expert: You. Often the request for details-for a narrative-isn't about the story. It's about the storyteller.
The problem with this isn't the narrative; it's the endless narrative. When they've heard enough to judge whether you know your stuff or not, they'll change from asking "How?" to asking "How much?" and "When?" and so on. When that happens, stop talking details and summarize, in terms of the seven parts of your testing proposal.
Mistake: Selling Features, Not Benefits
The benefit of a car isn't traveling long distances in relative comfort at high speed; it's more time at Grandma's. Going fast is a feature of a car that makes Grandma more available, but it isn't valuable in itself.
As technicians, we tend to focus on features. We presume being at Grandma's is valuable, then talk about how fast the car goes. The connection between a fast car and more Grandma is left out, because it's obvious to us. But speed is a feature.
We don't discuss benefits in justifications, and we don't report benefits in operations, so of course our customers don't understand why they should care about testing. Bugs don't cut it for users, customers, sponsors, or the people who make business-process investment decisions. The value they can expect from a system does make sense. So talk about increased confidence in the system and more or better features that come from running lots of tests and reporting lots of bugs.
The message is: "This well-chosen process is big magic that will get you more time with Grandma."
Mistake: The "One True Thing" Problem
Business is what the military euphemistically calls a "target-rich environment." There are always far more opportunities for process improvement (or other investments) than there are resources available. There's never "enough" money to "do it right" for anybody. Businesses are highly interactive systems, in which each function influences the whole in many ways. Nothing is more useless than a function that has been highly optimized in isolation.
If you advocate testing as "the one true thing," you'll be dismissed out of hand. Competent executives learn not to listen to true believers who think about processes in isolation. Instead, talk about testing as an investment tradeoff, and emphasize how it impacts other functions; that will show that you've thought your position through.
Testing changes will inevitably impact Development, Support, and users. If you can't get their buy-in, maybe you shouldn't get the change you seek; the costs may well outweigh the benefits.
At Mondo-Corp I talked about the risk to the business from system misbehaviors, in terms of the benefits sponsors were looking for-billing, serving, and signing up customers. To get support for testing, I presented several arguments.
I discussed testing as a business process with a unique capability to produce reliable information about a critical system, thus reducing both costs and risk. In doing so I was also proposing that I could produce this critically valuable information in return for a particular investment in a process change. Because I knew that's what I was doing, I structured my presentation correctly.
The justification for information systems is changing. They are no longer providing a marginal gain in efficiency for established internal processes. Increasingly the system is the business-and when these systems fail, there's no fallback position. You simply lose business capability. The business criticality of systems such as ERP, e-commerce, CRM, and Supply-Chain Automation means you're betting the business on that system every day.
As systems become more critical, system quality becomes a strategic-not a tactical-consideration. Testing will increasingly be an executive concern, so if you're going to recommend testing, you have to justify why it's a reasonable investment. Your sponsors may or may not value the information testing can provide; that's their decision. Your job is to offer the investment decision in a way that they can understand, so they can make an informed choice.
As software systems become mission-critical, you get to play in the big game. With a little foresight into how to speak the players' language and understand their expectations, you've got the tools you need to succeed.