For example, on one project, we needed to find as many bugs as possible early on. Later, we needed to understand the end user's experience during typical use. Because our test requirements changed midway through the project, we changed our testing strategy: we did a lot of bug hunting early, then spent a great deal of time characterizing performance from the end user's point of view as the system neared release.
On another project, the decision-makers needed to know, "What's the worst thing that can possibly happen if we release this to the field?" In other words, management wanted the testers to find nightmare bugs: the biggest, nastiest, slimiest bugs possible. If the bugs were bad enough, management would hold the release. If the worst bugs we could find were largely cosmetic, we'd ship. Our goal wasn't to find a lot of bugs but to find significant bugs.
In each case, management—whether a single project manager or a committee of stakeholders—needed the testers to learn about particular characteristics of the software and report back what they'd found. The more we focused on gathering the information that management needed, the more effective we were. The more we focused on gathering information we happened to find interesting, the less effective we were. When managing a test effort, I need to know what questions other managers expect us to answer. Do they want to know:
- What is the user experience for typical usage scenarios?
- How well does the software implement the design?
- How well does the software meet requirements?
- What kinds of bugs crop up under less-than-ideal conditions?
- How reliable or accurate is a particular feature?
- How stable is the software under normal use?
- How reliable/stable is the software under load?
When I'm testing, I find that I am most effective when I focus on answering one or two of these kinds of questions at a time. When I try to gather too many kinds of information at once, I get sidetracked—as I did with isolating the file corruption bug. When I'm not sure what questions I'm supposed to be answering, I ask.
These insights led me to another realization: when management seems to be undervaluing testers, it may be because the testers aren't getting the information management really needs. Perhaps the most powerful question a tester can ask managers is, "If you could know any one thing about this software, what would you want to know?" The answer may surprise you.