Risk-Based Test Reporting

[article]
Member Submitted
Summary:

Suppose we have done a risk analysis and all tests for all test stages are related to a risk. We can obviously say at the start of system test execution that none of the test objectives have been met. Because every test objective relates to a risk, we can therefore say: at the start of test execution, we presume that all the risks to be addressed by this phase of testing still exist. Keep reading to see why it is important to view a system as "guilty until proven innocent."

Suppose we have done a risk analysis and all tests for all test stages are related to a risk. We can obviously say, at the start of system test execution, that none of the test objectives have been met. Because every test objective relates to a risk, we can therefore say:

At the start of test execution, we presume that all the risks to be addressed by this phase of testing still exist.
That is, all known product risks are outstanding. With this assumption, we are saying that the system is "guilty until proven innocent" or, put another way, that the system is entirely unacceptable. This is perhaps obvious, but why is it important?

On the first day of testing, we can say, "we have run zero tests, here are the outstanding risks of release." As we progress through the test plan, one by one, risks are cleared as all the tests that address each risk are passed. Halfway through the test plan, the tester can say, "we have run some tests, these risks have been addressed (we have evidence), here are the outstanding risks of release." Suppose testing continues, but the testers run out of time before the test plan is completed. The go-live date approaches, and management want to judge whether the system is acceptable. Although the testing has not finished, the tester can say exactly the same thing: "we have run some tests, these risks have been addressed (we have evidence), here are the outstanding risks of release." The tester can present the same message throughout the test phase; only the proportion of risks addressed to those outstanding increases over time.
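To make this concrete, here is a minimal sketch in Python of how such a report could be derived. The risk names, test names, and the outstanding_risks function are purely illustrative assumptions, not something taken from the article: a risk counts as cleared only when every planned test that addresses it has passed, so tests that failed or have not yet run leave the risk outstanding.

def outstanding_risks(tests_per_risk, passed_tests):
    """Return the risks whose planned tests have not all passed yet.
    A risk is cleared only when every test that addresses it has passed;
    tests that failed or have not yet run leave the risk outstanding."""
    return {risk for risk, tests in tests_per_risk.items()
            if not tests <= set(passed_tests)}

# Each risk maps to the set of planned tests that address it (illustrative data).
plan = {
    "R1 - payments rejected": {"T1", "T2"},
    "R2 - data lost on restart": {"T3"},
    "R3 - slow response under load": {"T4", "T5"},
}

# Day one: no tests have run, so all three risks are outstanding.
print(outstanding_risks(plan, set()))

# Part-way through: T1, T2 and T3 have passed, so R1 and R2 are cleared
# and only R3 remains outstanding.
print(outstanding_risks(plan, {"T1", "T2", "T3"}))

The same calculation can be rerun after every test run, which is what lets the tester give the same style of report on any day of the test phase.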

How does this help? Throughout the test execution phase, management always have enough information to make the release decision. Either management decide to release with the known outstanding risks, or they choose not to release until the outstanding risks that are unacceptable have been addressed.

A risk-based test approach means that cutting testing short does not preclude a rational decision from being made. It just makes the decision to release less likely.
How might you report risk-based test execution progress?

In Figure 1, you can see a diagram representing progress through the test plan, but in the form of risks addressed over time. Along the vertical axis, we have the known risks of release to be addressed by the testing. On the first day of testing, all risks are outstanding. As the test progresses along the horizontal axis, you can see risks being eliminated as more and more tests complete successfully. At any time during the test, you can see the current risk of release. "Today", there are six risks remaining. If the risks represented by the solid line are 'critical' risks, then it is clear that the system is not yet acceptable. The tests not yet executed or not yet passed that block acceptance are clearly the high-priority tests.
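As a rough sketch of the data behind a chart like Figure 1 (again with made-up test names, risk names, and dates, purely for illustration), the outstanding-risk count can be recalculated for each reporting day from the dates on which tests passed:

from datetime import date

# Each risk maps to the planned tests that address it (illustrative data).
plan = {
    "R1": {"T1", "T2"},
    "R2": {"T3"},
    "R3": {"T4", "T5"},
}

# The date on which each test passed; tests not yet passed are simply absent.
pass_dates = {
    "T1": date(2024, 6, 3),
    "T2": date(2024, 6, 5),
    "T3": date(2024, 6, 4),
}

def risks_outstanding_on(day):
    """Count the risks not yet fully covered by passed tests as of 'day'."""
    passed = {test for test, when in pass_dates.items() if when <= day}
    return sum(1 for tests in plan.values() if not tests <= passed)

for day in (date(2024, 6, 3), date(2024, 6, 5), date(2024, 6, 7)):
    print(day, "-", risks_outstanding_on(day), "risks outstanding")
# 2024-06-03: only T1 has passed, so all 3 risks are still outstanding.
# 2024-06-05: R1 and R2 are cleared; R3 still blocks release (1 outstanding).
# 2024-06-07: nothing further has passed, so R3 remains outstanding.

Plotting those daily counts against the reporting dates gives the downward-stepping line of risks remaining that Figure 1 describes.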

If we identified many risks in our project, and in a large, complex project you might identify between 60 and 80 risks, these risks are likely to be addressed across all the development and test stages. So, the horizontal scale might include all stages of testing, not just system or acceptance testing. As the project proceeds, the risks that are down to the developers to address through unit and integration testing are as visible as those in acceptance. The value of reporting against all risks in this way is that the developers, system testers, and acceptance testers all see clearly the risks for which they are responsible. Management, too, has visibility of the risks and can see them being addressed as the project moves through each stage, as sketched below.
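One way to give each group that view, sketched here with hypothetical stage assignments (none of these names come from the article), is simply to group the outstanding risks by the test stage expected to address them:

from collections import defaultdict

# Hypothetical mapping of each risk to the test stage responsible for it.
risk_stage = {
    "R1": "unit",
    "R2": "integration",
    "R3": "system",
    "R4": "acceptance",
}

# Risks still outstanding, e.g. as produced by a calculation like the one above.
outstanding = {"R2", "R3", "R4"}

by_stage = defaultdict(set)
for risk in outstanding:
    by_stage[risk_stage[risk]].add(risk)

for stage in ("unit", "integration", "system", "acceptance"):
    print(f"{stage:<12}{sorted(by_stage[stage])}")

Each team then sees only the outstanding risks it owns, while management sees the whole list.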

About the author

Paul Gerrard

An internationally renowned, award-winning software engineering consultant, author, and coach, Paul Gerrard is host of the UK Test Management Forum and Programme Chair of the 2014 EuroSTAR Testing conference. He is a consultant, coach and mentor, author, webmaster, programmer, tester, conference speaker, rowing coach, and publisher. Paul has conducted consulting assignments in all aspects of software testing and QA, specialising in test strategy and assurance.
