How can you test software without knowing what it should do? Here is a step-by-step approach to overcoming undocumented requirements, including how to discover the requirements, how to define "quality" for the project, and how to create a test plan including release criteria.
A project manager strides purposefully into your office. "This disk has the latest and greatest release of our software. Please test it. Today." You say, "Okay, sure...what does it do?" The manager stops in his tracks and says, "Uh, the usual stuff..." Sound familiar? We've run into this situation as employees and as consultants. And we've seen testers take the disk, stick it in the drive, and just start testing away. That's testing in the dark. We think there are approaches that are more productive. When we test or manage testers, we plan the testing tasks to know what value we can get from the testing part of the project.
Let's try to take off the blindfold.
We've used this strategy even on short (two-week) testing projects. Consider this approach:
- Discover the product's requirements, to know what testing needs to be done
- Define what quality means to the project, to know how much time and effort we can apply to testing
- Define a test plan, including release criteria, to check out different people's understanding of what's important about the product, and to know when we're ready to ship
Discover the Requirements
The first part of planning is to play detective. Your product will have a variety of requirements over its lifetime, through several releases. Some requirements will be more important sooner, and others later. You have to discover this release's requirements.
At the beginning, you gather data, cramming your head full of relevant technical detail. In the next step, you transform that data into requirements: you filter the information down until you discover the product's intent.
Requirements are the reasons that drive design choices. The very fact that you were handed a disk means design decisions have already been made; the product was built based on its requirements, or you wouldn't have a product to test. Lots of people made lots of choices in building the software on that disk. The reasons for those choices are the requirements.
Software systems may have hundreds of requirements. You don't have to uncover them all - you get to choose how much you're willing to invest in finding out what the requirements are. Think about how much risk you want to take. The fewer requirements you probe in depth, the more risk you incur. You want the company to meet its release deadlines, but you're taking dangerous chances if you don't learn enough about the product's requirements to decide what to test and at what depth.
Customer requirements are the design decisions about your customer's problem. Use those design decisions to solve that problem with your product. A useful way to categorize customer requirements is to divide them into:
- Users: Users are people in roles, who often appear as the subjects of statements. Who will the product affect? Directly, as end-users? Indirectly, by its very existence, and by others who are using it?
- Attributes: Attributes are characteristics that appear as adjectives and adverbs in statements. What characteristics do the users need? How reliable does your product have to be? How fast? What else?
- Functions: Functions are actions the system performs, which appear as verbs in statements. What does your product do for the users?
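The three categories above can double as a simple working inventory while you play detective. Here is a minimal sketch (all names are hypothetical, not from the article) of recording discovered requirements under users, attributes, and functions:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRequirements:
    """A hypothetical inventory of requirements discovered so far."""
    users: list = field(default_factory=list)       # people in roles (subjects)
    attributes: list = field(default_factory=list)  # characteristics (adjectives/adverbs)
    functions: list = field(default_factory=list)   # actions the system performs (verbs)

    def summary(self):
        # Count how many requirements have been captured per category,
        # a quick gauge of where discovery is thin.
        return {
            "users": len(self.users),
            "attributes": len(self.attributes),
            "functions": len(self.functions),
        }

reqs = CustomerRequirements()
reqs.users.append("account administrator")
reqs.attributes.append("responds within 2 seconds")
reqs.functions.append("send email notifications")
print(reqs.summary())
```

A lopsided summary (say, many functions, no attributes) is a hint that the discussions you've been hearing are focused on what the product does rather than how well it must do it.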
Many people have different pieces of the picture. Programmers have been programming their bits. Marketers have been setting customers' expectations. Architects, managers, designers, and others have been discussing requirements in a variety of forums. Most often, their discussions focus on functions, rather than attributes. "Should we enable email here?"