risks. Peter Neumann, in Computer-Related Risks, catalogs a broad range of system failures that have cost time, money—and even lives. Cem Kaner, Jack Falk, and Hung Quoc Nguyen include an appendix of 400 common software errors in Testing Computer Software. Boris Beizer offers a taxonomy of bugs in Software Testing Techniques. The clever tester can construct a vast catalog of nightmare scenarios for any but the most trivial software, but you probably don't have the time—or the money—to test them all, nor should you.
Tailoring Testing to Quality Risk Priority
To provide maximum return on the testing investment, we have to adjust the amount of time, resources, and attention we devote to each risk based on its priority. The priority of a risk to system quality arises from the extent to which that risk can affect the customers' and users' experience of quality. In other words, the more likely a problem or the more serious its impact, the more testing that problem area deserves.
You can prioritize in a number of ways. One approach I like is to use a descending scale from one (most risky) to five (least risky) along three dimensions.
Severity. How dangerous is a failure of the system in this area?
Priority. How much does a failure of the system in this area compromise the value of the product to customers and users?
Likelihood. What are the odds that a user will encounter a failure in this area, either due to usage profiles or the technical risk of the problem?
Many such scales exist and can be used to quantify levels of quality risk.
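As one concrete illustration of such a scale, the three 1-to-5 ratings can be combined into a single risk priority number. The text does not prescribe an aggregation rule; multiplying the ratings, as sketched below, is just one common convention, and the function name and example values are hypothetical.

```python
def risk_priority_number(severity: int, priority: int, likelihood: int) -> int:
    """Combine three ratings on a 1 (most risky) to 5 (least risky) scale.

    On this scale a LOWER product indicates a HIGHER-priority risk.
    """
    for rating in (severity, priority, likelihood):
        if not 1 <= rating <= 5:
            raise ValueError("each rating must be between 1 and 5")
    return severity * priority * likelihood

# Example: a dangerous failure (severity 1) in a rarely exercised area
# (likelihood 4) that moderately compromises product value (priority 2).
print(risk_priority_number(1, 2, 4))  # → 8
```

Sorting risk areas by this number in ascending order then yields a rough test-effort ranking, with the riskiest areas first.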
Analyzing Quality Risks
I’m familiar with three techniques for quality risk analysis. The first technique is informal and works well regardless of development process maturity. To use it, list the quality risks that might apply to your software in the leftmost column of a table. Add a middle column for associated failure modes, and describe the kinds of problems that can occur within that class of quality risk. Keep the descriptions—and the table—short. Now, add a blank rightmost column for priority.
Properly done, the table becomes the map for an exploratory dialogue, either one-on-one, as a group meeting, or via e-mail, with the key stakeholders in the project. I seek input from people like development managers, product architects, business analysts, technical or customer support managers, project managers, sales and marketing staff, operations personnel, and, if available, customers and users. In this dialogue, I solicit a priority number—possibly using a three-part scale as discussed above—for the risks and failure modes, and I delve into any quality risks that I overlooked. Table 1 below shows a fragment of a quality risk analysis for a hypothetical word processor.
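The informal table described above can be represented in a few lines of code. This is a minimal sketch only; the word-processor rows below are illustrative guesses, not the contents of Table 1, and the helper names are hypothetical.

```python
# Each row holds the three columns of the informal risk table:
# quality risk, associated failure modes, and a priority left blank
# until stakeholders assign it.
risk_table = [
    {"quality_risk": "Functionality: printing",
     "failure_modes": "Wrong fonts, clipped margins, no output",
     "priority": None},
    {"quality_risk": "Reliability: long documents",
     "failure_modes": "Crashes or data loss on large files",
     "priority": None},
]

def assign_priority(table, quality_risk, rating):
    """Record a stakeholder-assigned priority (1 = most risky)."""
    for row in table:
        if row["quality_risk"] == quality_risk:
            row["priority"] = rating

assign_priority(risk_table, "Reliability: long documents", 1)

# Sort so the riskiest (lowest-numbered) areas come first;
# unrated rows sink to the bottom.
ranked = sorted(risk_table,
                key=lambda r: r["priority"] if r["priority"] is not None else 99)
print(ranked[0]["quality_risk"])  # → Reliability: long documents
```

Keeping the priority column blank until the stakeholder dialogue mirrors the process in the text: the table drives the conversation, and the ratings are its output.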
A slightly more formal approach is the one described in the International Organization for Standardization document ISO 9126. This standard proposes that the quality of a software system can be measured along six major characteristics:
Functionality. Does the system provide the required capabilities?
Reliability. Does the system work as needed when needed?
Usability. Is the system intuitive, comprehensible, and handy to the users?
Efficiency. Is the system sparing in its use of resources?
Maintainability. Can operators, programmers, and customers upgrade the system as needed?
Portability. Can the system be moved to new environments as needed?
Within these six characteristics, the ISO 9126 process asks that stakeholders identify the key subcharacteristics for their system. For example, what would it mean for your system to have quality in the area of efficiency? For a web application, it might mean giving an initial response to user input with a