Risky Business: Using Risk Analysis to Prioritize Testing

Summary:

Many of us think about system risks when deciding what to test, but few of us have a standardized way to assess the risks of a given system. In this week's column, Rick Craig shares a simple method you can use to target your test efforts according to risk. This method comes from, and is developed more fully in, Systematic Software Testing, of which Rick is lead author.

Risk-based testing is a hot topic, yet many testers are not exactly sure what the term means or how to do it. A poll of the audience at the recent STAREast software testing conference showed that fewer than half of the attendees conducted any type of deliberate risk analysis.

If resources were unlimited (just imagine), perhaps using a risk analysis to prioritize the tests would not be necessary. But back in real life, a risk analysis can help us determine the priority of tests. In all fairness, many who do not conduct a formal risk analysis no doubt still prioritize testing based on what is important to the customer, what has failed repeatedly in the past, what is complex, and what has changed the most since the last release. All of these criteria amount to a sort of risk analysis conducted in the heads of the testers. But by formalizing this process just a little, we can get more consistent results and gain credibility with our managers, developers, and colleagues.

Here's an outline of a simple ten-step process for conducting a usable (and reusable) risk analysis to help you prioritize tests. Use it as a starting point, and add your own experiences to tailor it to your organization.

Step 1: Organize a brainstorming team.
This team should be made up of three to six stakeholders who possess some level of expertise about the nature of the system or how it will be used in the business. Developers, business analysts, users, marketers, customer support staff, and testers are likely candidates.

Step 2: Compile a system-wide list of features and attributes.
For example, the brainstorming team for an ATM application would probably compile some of the following features: withdraw cash, deposit cash, check account balance, purchase postage stamps, transfer funds, etc. It may also be useful to include global "attributes" such as security and usability in the list.
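As a minimal sketch, the inventory for the ATM example might be captured in a simple list that distinguishes features from global attributes. The data structure below is purely illustrative; the method itself does not prescribe any particular format.

```python
# Illustrative feature/attribute inventory for the ATM example.
# "feature" = a concrete function; "attribute" = a global quality such as security.
inventory = [
    ("Withdraw cash", "feature"),
    ("Deposit cash", "feature"),
    ("Check account balance", "feature"),
    ("Purchase postage stamps", "feature"),
    ("Transfer funds", "feature"),
    ("Security", "attribute"),
    ("Usability", "attribute"),
]

for name, kind in inventory:
    print(f"{kind:9} {name}")
```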

Step 3: Determine the likelihood of failure of each feature or attribute.
The brainstorming team should assign a value for the relative likelihood of failure of each feature and attribute. For simplicity, you can use the scale of high, medium, and low. Our ATM risk analysis team might determine that the likelihood of failure of the withdraw cash function is relatively high, since it requires the system to access and update the account, physically dispense cash, etc. Transfer funds may be assigned a relative value of "medium" since, even though it requires interfacing between accounts, it lacks the complexity of physically dispensing currency. Developers and systems architects are particularly valuable in this portion of the risk analysis, since the likelihood is largely based upon the systemic characteristics of the software under test (SUT).
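A sketch of how those ratings might be recorded follows. Only the "withdraw cash" and "transfer funds" values come from the discussion above; the remaining ratings are assumptions included just to complete the example.

```python
# Relative likelihood of failure on a high/medium/low scale (ATM example).
likelihood = {
    "Withdraw cash":           "high",    # updates the account and physically dispenses cash
    "Transfer funds":          "medium",  # interfaces between accounts, no physical dispensing
    "Deposit cash":            "medium",  # illustrative assumption
    "Check account balance":   "low",     # illustrative assumption
    "Purchase postage stamps": "low",     # illustrative assumption
    "Security":                "medium",  # illustrative assumption
    "Usability":               "low",     # illustrative assumption
}
```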

Step 4: Determine the impact of failure for each feature or attribute.
Use the same scale as in step 3 to assign a value for the impact of failure. A failure occurs when the system does not function, or does not function correctly. Risk analysis team members who have a strong understanding of the business aspects of the SUT are particularly valuable here, since the impact is based upon the effect a failure will have on the users. Our team might assign a high value to the "withdraw cash" feature since they would rightfully conclude that this is the single most important feature to most users. One hint: Many users will insist that every feature should be assigned a "high" value for impact of failure. Obviously, if we are trying to use the risk analysis to prioritize tests, it doesn't help us if everything is rated "high."
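The remaining steps of the ten-step process fall outside this excerpt, but one common way to use the two ratings together is to convert them to numbers and rank each item by the product of likelihood and impact. The sketch below illustrates only that mechanic; the impact values and the multiplication scheme are assumptions, not the specific method from Systematic Software Testing.

```python
# Combine likelihood and impact into a rough test priority (illustrative).
SCALE = {"low": 1, "medium": 2, "high": 3}

likelihood = {"Withdraw cash": "high", "Transfer funds": "medium",
              "Check account balance": "low", "Purchase postage stamps": "low"}
impact     = {"Withdraw cash": "high", "Transfer funds": "high",       # impact values are assumptions
              "Check account balance": "medium", "Purchase postage stamps": "low"}

def risk_score(feature):
    """Higher score = test the feature earlier and more thoroughly."""
    return SCALE[likelihood[feature]] * SCALE[impact[feature]]

for feature in sorted(likelihood, key=risk_score, reverse=True):
    print(f"{risk_score(feature)}  {feature}")
```

Features with the highest scores would then get the earliest and most thorough test coverage, which is exactly the prioritization we are after.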

About the author

Rick Craig

Rick Craig is a consultant, lecturer, author, and test manager, who has led numerous teams of testers on both large and small projects. In his twenty-five years of consulting worldwide, Rick has advised and supported a diverse group of organizations on many testing and test management issues. From large insurance providers and telecommunications companies to smaller software services companies, he has mentored senior software managers and helped test teams improve their effectiveness. Rick is co-author of Systematic Software Testing and a frequent speaker at testing conferences, including every STAR conference since its inception. 
