Manage the Risks and the Process

Speeding the Software Delivery Process, Part 3
Summary:

Including a testing/QA component early in a software project necessarily prolongs the schedule, right? Not so, according to Ross Collard. In this, the third of a three-part series, Collard explains how to anticipate risks and to aggressively manage the process to prevent disaster.

In the first two articles in this series, I argued that speed doesn't necessarily sacrifice quality. Software can be developed faster and better, with the help of efficient testing and quality assurance activities. I listed the following ways of reducing the test cycle time and speeding the overall delivery process:

  1. managing testing like a "real" project
  2. strengthening the test resources
  3. improving system testability
  4. getting off to a quick start
  5. streamlining the testing process
  6. anticipating and managing the risks
  7. actively and aggressively managing the process 

The first two articles of this series discussed points 1-5. This article will finish the list with points 6 and 7.

6. Anticipate and Manage the Risks

Organize the risk management process. In my observation, the risk management skills of many software engineers, test and QA professionals, and even project leaders are seriously underdeveloped.

When you say to people, "Manage the risks," their answers are "We are already doing that," "That's obvious," or "The risks can't be managed any better." Sometimes you feel like you're explaining the risks of climbing Mt. Everest to a Sunday jogger.
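Organizing the process does not require heavy machinery. A lightweight risk register, reviewed at each status meeting, is often enough to keep the risks visible and assigned. The sketch below is a minimal, hypothetical illustration in Python; the fields and the likelihood-times-impact scoring are common risk management conventions, not a prescription from this article.

```python
# A minimal, hypothetical risk register. The Risk fields and the
# likelihood * impact "exposure" score are common conventions; the
# sample entries are illustrative only.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    mitigation: str
    owner: str

    @property
    def exposure(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Tester turnover", 3, 4, "Cross-train; document procedures", "QA lead"),
    Risk("Unstable build delivered to test", 4, 5,
         "Enforce smoke-test entry criteria", "Dev lead"),
]

# Review the highest-exposure risks first at each status meeting.
for risk in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(f"{risk.exposure:>2}  {risk.description} -> {risk.mitigation}")
```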

The following checklist can be useful as a series of reminders of the risks to watch for. If some of these points are likely to apply to your project, it is a good idea to identify them early and see what can be done to minimize them. (For completeness, some of the points on this list repeat points I have made elsewhere.) And a friendly warning: reading the list may be depressing when you realize how many points apply to your situation.

The common causes of test project slippage, or of inadequate testing within the time allocated, are:

People Causes
Under-staffing the test team. (There are many reasons for this, and they may be difficult to overcome.)

Contention for scarce people resources.

Adding people to the test team too late to help, usually after the first third of the project (Brooks's law).

Lack of sufficient experience in the test team, whether with the functionality being tested, the test methodology, or the tools.

Lack of expertise in specialized aspects of testing, such as security controls testing, reliability testing, usability testing, etc.

Lack of sufficient user involvement and cooperation in the testing.

Lack of sufficient developer involvement and cooperation in the testing.

Test team learning curves that are longer than anticipated.

Tester turnover.

Fragmentation: assignment of people to too many projects in parallel, leading to time juggling.

Lack of access to important information, or miscommunication.

Failure to coordinate with other groups, or outright conflict with them: specifically the system developers, system maintainers, users, or marketers.

Lack of sufficient allowance for the overhead of organizing the work and of monitoring and reporting the test project's status.

Unreasonable deadline pressures, which lead to demoralized testers and burnout.

System or Product Causes
The system version delivered to testing is too raw, buggy, and unstable to test effectively.

Scope creep in the product.

Volatility: frequent changes to the system functionality.

Test Process Causes
Unfamiliarity of the testers with the test process to be used for the system they are testing.

Revising the testing objectives during the test project.

Scope creep in the testing project, e.g., expansion of the types of testing to be undertaken.

False test results, such as spurious passes or failures.

Test cases that provide untrustworthy information.

Lack of reusable test plans, cases, and procedures. (A minimal sketch of a reusable, data-driven test case appears after this list.)

Slow, cumbersome, and inefficient test procedures (e.g., spending a lot of time looking for prior test cases for reuse).

Ineffective or poor-quality bug fixes, which may introduce new bugs or fail to fully resolve a problem, leading to extra debugging followed by extra testing.

Failure to adequately monitor...
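On the reusable test cases point above: one common tactic is to separate the test procedure from the test data, so a single procedure can be rerun against many inputs and carried forward to later releases. The following is a minimal, hypothetical sketch using Python's pytest; the discount_price function and the data rows are illustrative assumptions, not an example from the article.

```python
# A minimal sketch of a reusable, data-driven test case.
# discount_price and the data rows below are hypothetical; the point
# is that one test procedure serves many inputs and future releases.
import pytest

def discount_price(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - rate), 2)

@pytest.mark.parametrize(
    "price, rate, expected",
    [
        (100.00, 0.10, 90.00),   # typical case
        (100.00, 0.00, 100.00),  # boundary: no discount
        (0.00, 0.50, 0.00),      # boundary: zero price
    ],
)
def test_discount_price(price, rate, expected):
    assert discount_price(price, rate) == expected
```

Adding a row to the table extends the coverage without writing a new procedure, which also addresses the time otherwise spent hunting for prior test cases to reuse.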


About the author


Ross Collard is a consultant who currently is working on software testing and quality assurance projects for AT&T, Cisco, GE, Lucent, and the State of California. He teaches software testing for UC Berkeley. Ross has an MS in computer science from the California Institute of Technology and an MBA from Stanford.

