high, medium, or low for each requirement as a measure of the expected impact. Concentrate only on those features and attributes that directly impact the user, not necessarily on the testing effort. If every feature or requirement ends up ranked the same, limit the number of highs, mediums, and lows each user can assign. Let's look at the expected impact and likelihood of failure for a hypothetical Login system.
Table 1—Expected Impact and Likelihood of Failure for the Login Functionality
The requirement that the "UserId shall be 4 characters" has a low expected impact of failure because there is little impact to the user if the UserId is longer or shorter than 4 characters. The same reasoning applies to the requirement that the "Password shall be 5 characters." However, the requirement that the "System shall validate each UserId and Password for uniqueness" has a high impact of failure, because a failure would allow multiple users to share the same UserId and Password. If the developer does not code for this, security is at risk.
Likelihood of Failure Indicator
As part of the risk analysis process, the software team should assign an indicator for the relative likelihood of failure of each requirement or feature. Assign H for a relatively high likelihood of failure, M for medium, and L for low. According to Craig and Jaskiel, when the software team assigns a value of H, M, or L for each feature, they should be answering the question, "Based on our current knowledge of the system, what is the likelihood that this feature or attribute will fail, or fail to operate correctly?"
At this point, Craig and I differ: he argues that complexity is a systemic characteristic and should be folded into the likelihood indicator, while I argue that complexity should be an indicator in its own right. Furthermore, severity should also be considered. Four indicators provide more granularity and detail than the two typical indicators alone. In Table 2, I have shown that if the prioritization is the same between two different requirements, it is not possible to discern which requirement is riskier. With three or more indicators, we are in a better position to evaluate risk.
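The tie-breaking argument can be sketched in code. The following is a minimal illustration, not a reproduction of Table 2: the indicator names, the H/M/L-to-number mapping, and the sample requirements are my own assumptions for demonstration.

```python
# Map the H/M/L indicator values to numbers so requirements can be compared.
RANK = {"H": 3, "M": 2, "L": 1}

def risk_tuple(indicators):
    """Convert a dict of H/M/L indicators into a comparable tuple.

    With only impact and likelihood, two requirements can tie; adding
    complexity and severity supplies extra axes that can break the tie.
    """
    order = ("impact", "likelihood", "complexity", "severity")
    return tuple(RANK[indicators[k]] for k in order if k in indicators)

# Two requirements that tie on the two typical indicators...
req_a = {"impact": "H", "likelihood": "M"}
req_b = {"impact": "H", "likelihood": "M"}
assert risk_tuple(req_a) == risk_tuple(req_b)  # cannot tell which is riskier

# ...become distinguishable once complexity and severity are added.
req_a.update(complexity="H", severity="M")
req_b.update(complexity="L", severity="M")
assert risk_tuple(req_a) > risk_tuple(req_b)  # req_a now ranks as riskier
```

Comparing tuples rather than a single summed score preserves the ordering of the indicators, so the more important indicators dominate the comparison.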
Something complex is intricate and complicated, and the argument here is that the greater a feature's complexity, the greater its risk. More interfaces mean more risk, both at each individual interface and in the overall system.
According to Craig and Jaskiel, Tom McCabe devised a metric known as cyclomatic complexity that is based on the number of decisions in a program. His studies have shown a correlation between a program's cyclomatic complexity and its error frequency: "A low cyclomatic complexity contributes to a program's understandability and indicates it is amenable to modification at lower risk than a more complex program." He, along with others, has shown that those parts of the system with high cyclomatic complexity are more prone to defects than those with a lower value. According to Edmond VanDoren, in an article titled "Cyclomatic Complexity," cyclomatic complexity can be used in the test planning phase because "mathematical analysis has shown that cyclomatic complexity gives the exact number of tests needed to test every decision point in a program for each outcome. Thus, the analysis can be used for test planning. An excessively complex module will require a prohibitive number of test steps; that number can be reduced to a practical size by breaking the module
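McCabe's metric can be computed mechanically from source code. Below is a rough sketch in Python using the standard `ast` module; the count of decision points plus one is a common simplification of the full cyclomatic-complexity definition, and the set of node types treated as decision points here is my own abbreviated choice.

```python
import ast

# Node types counted as decision points (a simplified selection).
DECISION_NODES = (ast.If, ast.While, ast.For, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Simplified McCabe metric: number of decision points + 1."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes two decision points.
            decisions += len(node.values) - 1
        elif isinstance(node, DECISION_NODES):
            decisions += 1
    return decisions + 1

code = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "other"
"""
print(cyclomatic_complexity(code))  # 4: one 'if', one 'for', one nested 'if'
```

A function with no branches scores 1; as the quoted passage notes, higher scores signal both greater defect-proneness and a larger number of tests needed to cover every decision outcome.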