- Continuity—will problems affect the continuity of use of the module? Users are often more interested in ensuring the software does everything it did before than they are in new bells and whistles.
- Relative complexity—is this module complex by its very nature?
- Prior testing initiatives—has this module been exhaustively tested in previous releases or test phases? Conversely, has this module been only cursorily tested in previous releases or test phases?
In your own project(s) you may well have others.
Methods and strategies of risk assessment may differ from project to project; however, the points below should provide a good starting point:
- Old test logs and bug reports—check to see where the problem areas have traditionally been (if it's not a new application).
- Exploratory observations—perform "smoke" or exploratory tests first to determine where the problems are likely to arise.
- Confidence level—listen to where the developers, testers, users, etc. suggest the software is relatively clean and, conversely, where they suggest the problems may lie.
- Level of test coverage—do the test requirements and cases cover all the significant features, functions, modules, areas, etc.?
- Positive/negative conditions—do the test cases cover both positive and negative test conditions?
- Miscellaneous—for example, where new or unproven technologies have been introduced, areas that rely upon interfaces, where quality practices have been poor, where specifications are loose or ambiguous, where a variety of people have been doing the coding, where a particular developer renowned for short-cutting has been working, areas that have had high staff turnover, etc.
Using these methods, you should be able to bring the correct focus to your risk assessment.
Assessing each requirement and case against these criteria will help you build a priority profile. You may even wish to add a weighting to reflect the importance of expected issues. This can be done for each requirement/case by assessing how likely problems are and what the impact would be if they do arise. For example:
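One simple way to turn likelihood and impact into a ranking is to score each on a small scale and take their product as the priority. The sketch below is illustrative only: the module names, the 1–3 scale, and the product formula are assumptions, not a prescribed method.

```python
# Hypothetical risk scores: likelihood and impact on a 1-3 scale (3 = high).
# Priority is their product, so 9 is the riskiest possible score.
requirements = {
    "login module":      {"likelihood": 3, "impact": 3},
    "report formatting": {"likelihood": 2, "impact": 1},
    "payment interface": {"likelihood": 3, "impact": 2},
}

def risk_priority(scores):
    """Combine likelihood and impact into a single priority number."""
    return scores["likelihood"] * scores["impact"]

# Rank requirements from highest to lowest risk.
ranked = sorted(requirements,
                key=lambda r: risk_priority(requirements[r]),
                reverse=True)
print(ranked)
```

The product is only one possible weighting; a sum, or a matrix that maps likelihood/impact pairs onto high/medium/low bands, works just as well as long as it is applied consistently across all requirements.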
Once the risk assessment is complete, test requirements/cases can be assigned the appropriate priority. By starting with the high priorities first and estimating the time for test case development (where necessary) and execution, you can quickly get a rough picture of whether you can meet your deadline. Test estimating is a whole separate field which I won't go into here; however, be sure you include adequate time for test data set-up, defect fixes and retests, defect logging and reporting, etc., as these are often omitted from estimates.
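A back-of-the-envelope version of that deadline check might look like the sketch below. All the figures are invented for illustration, including the 30% allowance for data set-up, defect fixes, retests, and logging, which stands in for the overheads that are so often left out of estimates.

```python
# Illustrative estimates: (priority band, hours to develop + execute).
test_cases = [
    ("high", 16), ("high", 24),
    ("medium", 8), ("medium", 12),
    ("low", 6),
]

OVERHEAD = 0.30        # assumed allowance for set-up, retests, logging
available_hours = 60   # assumed time left before the deadline

def hours_needed(cases, priorities):
    """Total estimated hours for the given priority bands, plus overhead."""
    base = sum(hours for band, hours in cases if band in priorities)
    return base * (1 + OVERHEAD)

high_only = hours_needed(test_cases, {"high"})
high_and_medium = hours_needed(test_cases, {"high", "medium"})

print(f"High only: {high_only:.0f}h, fits deadline: {high_only <= available_hours}")
print(f"High + medium: {high_and_medium:.0f}h, fits deadline: {high_and_medium <= available_hours}")
```

In this made-up scenario the high-priority work fits but adding the medium band does not, which is exactly the conversation you then need to have with your sponsors.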
Now that you have your prioritised set of test requirements and, with any luck, test cases, plus your estimates of how long it's going to take to test (allowing, of course, for a certain failure rate or level of defects), you can set about setting the appropriate expectations with your project managers and sponsors. You might need to do a bit of selling here, as their expectations are nearly always that the product will be delivered fully tested with no bugs. Use your test plan as the basis for setting expectations and selling what can be achieved in that time with the available resources. Make sure that, in one form or another, you obtain their agreement (in writing is preferable) as to what your testing initiatives will and will not cover.
There may also be occasions where you have to play the role of the obnoxious son-of-a-camel and challenge the business (or whomever) over their views on potential risks and priorities. You may even have to resort to asking: if they had only one requirement that they needed covered first, what would it be?