# Measuring the Risk Factor


into smaller, less-complex sub-modules." Other measures of complexity can also serve as inputs to risk analysis: the Halstead complexity measures, the Henry and Kafura metrics, and the Bowles metrics. Assign a value of H (high), M (medium), or L (low) to each requirement based on its complexity.
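As a minimal sketch of turning a raw complexity number into the H/M/L value the article calls for, the following function maps a cyclomatic-complexity count to a rating. The thresholds (10 and 20) are illustrative assumptions, not values prescribed by the article; a team would calibrate them to its own code base.

```python
def complexity_rating(cyclomatic: int) -> str:
    """Map a cyclomatic-complexity count to an H/M/L risk value.

    The cut-offs below are assumed for illustration only: McCabe's
    commonly cited danger threshold is 10, and anything over 20 is
    treated here as high risk.
    """
    if cyclomatic > 20:
        return "H"
    if cyclomatic > 10:
        return "M"
    return "L"


print(complexity_rating(25))  # H
print(complexity_rating(15))  # M
print(complexity_rating(7))   # L
```

The same pattern works for any of the other complexity measures mentioned above; only the thresholds change.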

Severity Indicator
My approach differs from Craig and Jaskiel's in another way: I treat the severity of failure as a separate indicator. Severity is defined as the "harshness" of the failure. What do we mean by harshness in relation to software defects? Harshness indicates how much damage a failure will do to the user community, and it implies that users will suffer if the failure is realized. That suffering can take the form of lost money, emotional stress, poor health, or even death.

Consider the following case of a software failure that resulted in deaths. Alan Joch and Oliver Sharp write in *How Software Doesn't Work* that in 1986, two cancer patients at the East Texas Cancer Center in Tyler received fatal radiation overdoses from the Therac-25, a computer-controlled radiation-therapy machine. There were several errors, among them the programmer's failure to detect a race condition (i.e., miscoordination between concurrent tasks).

Or consider the case of a New Jersey inmate who escaped from computer-monitored house arrest in the spring of 1992. He removed the rivets holding his electronic anklet together and went off to commit a murder. A computer detected the tampering; however, when it called a second computer to report the incident, it received a busy signal and never called back. These examples illustrate that software failures can be fatal and can cause suffering to those whose lives are affected by the deaths of loved ones.

Thus, severity differs from expected impact: expected impact considers only the effect of the failure, not the suffering it imposes on the user. Therefore, I argue that the greater the severity, the higher the risk. Assign a value of H (high), M (medium), or L (low) to each requirement based on its severity.

 Table 2—Expected Impact, Likelihood of Failure, Complexity, and Severity for the Login Functionality.

The Method of Risk Analysis
At this point, the software team should assign a number to each high, medium, or low value for the likelihood, expected-impact, complexity, and severity indicators. You can use a range of 1-3 (with 3 the highest) or 1-5 (with 5 the highest); the 1-5 range offers finer granularity. To keep the technique simple, let's use 1-3, with 3 for high, 2 for medium, and 1 for low. As Craig and Jaskiel state, "Once a scale has been selected, you must use that same scale throughout the entire risk analysis." They also caution, "If your system is safety-critical, it's important that those features that can cause death or loss of limb are always assigned a high priority for test even if the overall risk was low due to an exceptionally low likelihood of failure."

Next, add together the values assigned to likelihood of failure, expected impact, complexity, and severity. With 3 for high, 2 for medium, and 1 for low, nine risk priority levels are possible (the totals 4 through 12).
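The scoring step above can be sketched in a few lines. This is a minimal illustration, not the article's tooling: the `safety_critical` flag is an assumption added to reflect Craig and Jaskiel's advice that features that can cause death or injury always receive the highest test priority, and the sample values passed in the usage line are hypothetical, not taken from Table 2.

```python
# Numeric scale chosen in the text: 3 = high, 2 = medium, 1 = low.
SCORE = {"H": 3, "M": 2, "L": 1}


def risk_priority(likelihood: str, impact: str, complexity: str,
                  severity: str, safety_critical: bool = False) -> int:
    """Sum the four H/M/L indicators on the 1-3 scale (totals 4-12).

    If safety_critical is True, the total is forced to the maximum,
    per Craig and Jaskiel's rule for features that can cause death
    or loss of limb (an assumption about how to encode that rule).
    """
    total = sum(SCORE[v] for v in (likelihood, impact, complexity, severity))
    if safety_critical:
        total = max(total, 12)  # always test at the highest priority
    return total


# Hypothetical requirement: high likelihood, high impact,
# medium complexity, low severity -> 3 + 3 + 2 + 1 = 9.
print(risk_priority("H", "H", "M", "L"))  # 9
```

Sorting requirements by this total, highest first, yields the risk-based test order the method aims at.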

 Table 3—Risk Priority Cube

Notice that the requirement "system shall validate each UserId and Password for uniqueness" has a relatively high likelihood of failure, a high degree