I was very pleased with myself. I'd just found a bug that, under certain circumstances, could result in data in a stored file becoming corrupt. I tried not to gloat as I explained the bug to the project manager. His response floored me. "Oh, that. Yeah, we know. No time to fix it. How did the upgrade tests go?"
"You KNEW? And you won't be fixing it?!? Data gets corrupted!" Self-righteous anger bubbled up, blurring my vision.
"Whoa. Calm down. Yes, we got a report about that bug from one of the field engineers last week. We gave it a lower priority because it was easy to tell the file was messed up and there was an easy workaround. The bug has been there since 1.0, and fixing it now would require major changes. We're just about to release and can't delay the schedule to fix an old bug. So now tell me about the upgrade tests."
I fumed in silence, then turned to leave. The project manager stopped me. "What about the upgrade tests?"
I shot back over my shoulder: "I was so busy isolating this bug that I didn't finish them. I'll have the results tomorrow."
The project manager frowned. "I really need the results today. Last week you told me you'd have no problem getting them done. I don't think you understand how important these results are."
"I'll get them done before I leave today," I mumbled. As I left his office, I wondered, What went wrong? Why didn't he care about my news?
I realized that I hadn't clarified up front what kind of information was most important to the project manager. If I had, I would have understood the importance of those upgrade tests before spending half a day chasing down the file corruption bug.
It all comes down to requirements. Tests have requirements. That incident with the project manager was a wake-up call for me. It was the first time I realized that my audience (managers) needs particular kinds of information. Like me, you could argue that the file corruption bug was important. However, whether or not to fix a bug is a business decision. The project manager had the perspective and authority to make that kind of decision; I did not. At the same time, the project manager was relying on me to give him accurate information to support his business decisions.
So tests have requirements, but I wasn't sure how to discover those requirements for my tests. The laundry lists of features, often labeled "Requirements," didn't help. I started by asking, "Who uses the information I produce, and for what purpose?"
In this case, the project manager wanted to know if the upgrade process worked as designed so he could make a release decision. He didn't want more bug reports unless the bugs were new to this release or interfered with the core functionality of the software. If I happened to encounter bugs, he expected me to file them. He just didn't want me to spend all my time digging for bugs at the expense of running the upgrade tests.
Different projects have different test requirements. Further, the nature of the test requirements may evolve as the project progresses.
For example, on one project, we needed to find as many bugs as possible early in the project. Later in the project, we needed to understand the end user's experience during typical use. Our test requirements changed mid-way through the project. As a result, we changed our testing strategy. We did a lot of bug hunting early and spent a great deal of time characterizing the performance from the end user's point of view when the system was closer to release.
On another project, the decision-makers needed to know, "What's the worst thing that can possibly happen if we release this to the field?" In other words, management wanted the testers to find nightmare bugs: the biggest, nastiest, slimiest bugs possible. If the bugs were bad enough, they would hold the release. If the worst bugs we could find were largely cosmetic, we'd ship. Our goal wasn't to find a lot of bugs but to find significant bugs.
In each case, management—whether a single project manager or a committee of stakeholders—needed the testers to learn about particular characteristics of the software and report back what they'd found. The more we focused on gathering the information that management needed, the more effective we were. The more we focused on gathering information we happened to find interesting, the less effective we were. When managing a test effort, I need to know what questions other managers expect us to answer. Do they want to know:
- What is the user experience for typical usage scenarios?
- How well does the software implement the design?
- How well does the software meet requirements?
- What kinds of bugs crop up under less-than-ideal conditions?
- How reliable or accurate is a particular feature?
- How stable is the software under normal use?
- How reliable/stable is the software under load?
When I'm testing, I find that I am most effective when I focus on answering one or two of these kinds of questions at a time. When I try to gather too many kinds of information at once, I get sidetracked—as I did with isolating the file corruption bug. When I'm not sure what questions I'm supposed to be answering, I ask.
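Focusing a test run on one question can even be made explicit in the test code itself. Here is a minimal sketch, in Python, of a run aimed at a single question: "How reliable is a particular feature?" The `save_record` and `load_record` functions are hypothetical stand-ins for the feature under test; the point is that the run measures one thing (round-trip reliability) instead of wandering off to chase every bug it stumbles across.

```python
import json
import os
import tempfile


def save_record(path, record):
    # Hypothetical stand-in for the feature under test:
    # persist a record to disk as JSON.
    with open(path, "w") as f:
        json.dump(record, f)


def load_record(path):
    # Hypothetical stand-in: read the record back.
    with open(path) as f:
        return json.load(f)


def reliability_run(iterations=100):
    """Exercise one feature repeatedly and report the observed pass rate.

    The run answers exactly one question: how reliably does a
    save/load round-trip preserve the data?
    """
    passes = 0
    path = os.path.join(tempfile.mkdtemp(), "record.json")
    for i in range(iterations):
        record = {"id": i, "payload": "x" * i}
        save_record(path, record)
        if load_record(path) == record:
            passes += 1
    return passes / iterations


if __name__ == "__main__":
    print(f"round-trip pass rate: {reliability_run():.0%}")
```

A run built this way produces the kind of answer a project manager can act on (a pass rate for one characteristic) rather than a grab bag of incidental observations.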
These insights led me to another realization: when management seems to be undervaluing testers, it may be because the testers aren't delivering the information management really needs. Perhaps the most powerful question a tester can ask managers is, "If you could know any one thing about this software, what would you want to know?" The answer may surprise you.