control, and document the testing activities.
Unstructured or exploratory testing that is so disorganized that it confuses and delays rather than illuminating the situation.
Unhelpful problem reports.
Ineffectual bug advocacy.
Development Process Causes
Low priority given to debugging and fixing, which leads to long defect aging (bug fix turnaround time); a rough way to measure this is sketched just after this group of causes.
High rate of insertion of new defects with fixes.
Fixes that do not resolve the problems, requiring cycles of refixing.
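Defect aging and refix churn are easy to quantify once defect records carry a few dates. The sketch below is purely illustrative; the record fields (reported, fixed, reopened) are assumptions for the example, not the schema of any particular defect-tracking tool.

```python
from datetime import date
from statistics import mean

# Illustrative defect records: report date, fix date, times reopened.
# Field names are assumptions for this sketch, not any tracker's actual schema.
defects = [
    {"id": "D-101", "reported": date(2024, 3, 1), "fixed": date(2024, 3, 20), "reopened": 0},
    {"id": "D-102", "reported": date(2024, 3, 5), "fixed": date(2024, 4, 2),  "reopened": 2},
    {"id": "D-103", "reported": date(2024, 3, 9), "fixed": date(2024, 3, 12), "reopened": 1},
]

# Defect aging: days from report to fix (bug fix turnaround time).
ages = [(d["fixed"] - d["reported"]).days for d in defects]
print(f"Average defect aging: {mean(ages):.1f} days")

# Refix churn: share of fixes that did not resolve the problem the first time.
reopened = sum(1 for d in defects if d["reopened"] > 0)
print(f"Fixes needing rework: {reopened}/{len(defects)}")
```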
Test Environment Causes
Inadequate tools or equipment available for the testing.
Delays in obtaining the needed testware, e.g., test tools and facilities.
Gremlins in the testware.
Difficulty in using the testing tools, whether because of limitations in the tools themselves or because of the skill levels available to use them.
Underestimation of the effort needed to climb the learning curve with new testing tools, facilities, and procedures.
Difficulty in test automation.
Corrupted, unrepresentative, or untrustworthy test databases.
Project Management Causes
Lower priority routinely given to testing activities, versus development or other project activities.
Lack of early tester involvement in the system development project, so there is less time to prepare.
Lack of clear, agreed-on system acceptance criteria (and thus test completion criteria).
Taking testers away from critical tasks, such as running test cases, for noncritical tasks, such as required attendance at meetings on unrelated topics.
Vague, general test plans.
Out-of-date test plans.
Lower priority routinely given to the testers in the competition for scarce resources.
Beginning the testing prematurely, before the test entry criteria have been met, leading to test rework.
Unwillingness of the senior managers to make timely decisions on which the test team is waiting.
Lack of a code freeze. Undisciplined, last-minute additions or changes of features, which may invalidate the performance measurements.
Lack of reliable test project status information for monitoring the testing project and tracking progress versus plan.
Significant underestimation of the number of cycles of performance measurement, evaluation, and tuning needed before the system is ready to go live.
Lack of contingency plans for events that do not happen as expected.
Unplanned waits for other groups to do things on which the testers are dependent.
Risks are situation-specific, and this certainly is not a complete list. What other nontrivial risks would you add to this list?
7. Actively and Aggressively Manage the Process
Be decisive, and ready to react quickly as conditions change. Conditions always change as projects progress, and the responses to these changes need to be nimble, not ponderous. "Fast-smart" decision making is needed, rather than mindless adherence to a partly obsolete test plan, or endless meetings to decide what to do. Slow responses, even if correct, may be too late.
Develop a workable schedule with frequent milestones to use in tracking the testing project. We want early warning that things are going off track. The best way to spot delays and bottlenecks is a detailed and realistic test project plan with frequent interim milestones, so that actual progress can easily be compared with the plan and deviations identified. The testing project needs "inchstones" (pebbles) as well as milestones.
Doing this requires strong project management skills and a lot of savvy about what it really takes to get a testing project done.
This project plan needs to be updated as conditions change, of course, so that it remains accurate and usable. That means the plan should be easy to maintain, preferably with a project management software package.
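As an illustration of the kind of lightweight tracking this implies, the following sketch compares planned and actual dates for fine-grained milestones and flags slippage. The milestone names and dates are invented for the example; in practice the plan would come from the project management package and be refreshed as conditions change.

```python
from datetime import date

# Invented "inchstones" for illustration: (name, planned date, actual date or None if not yet done).
plan = [
    ("Test environment ready",   date(2024, 5, 6),  date(2024, 5, 8)),
    ("Cycle 1 test cases run",   date(2024, 5, 13), date(2024, 5, 13)),
    ("Cycle 1 results reported", date(2024, 5, 15), None),
]

today = date(2024, 5, 17)

for name, planned, actual in plan:
    if actual is not None:
        slip = (actual - planned).days
        status = "on plan" if slip <= 0 else f"slipped {slip} day(s)"
    elif today > planned:
        status = f"OVERDUE by {(today - planned).days} day(s)"
    else:
        status = "pending"
    print(f"{name:28s} planned {planned} -> {status}")
```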
Aggressively fight slippage. Sometimes people are complacent when their project slips a little, especially early in the project. They figure that they have lots of time. Only after an