A Project View of Risk: Will Your Project Deliver?


chance to have people involved from the various project roles, such as developers, testers, and project managers. Finally, the team helps to carry the workload, although this is not a heavy process.

Tip: Try to keep the same assessment team intact for the entire project, since the assessment will need to be performed, or at least updated, throughout the project. Continuity is a good thing!

Ask the Right Questions

The key to performing this assessment is found in the questions asked for each CSF. Your goal is to discover whether the right things are being done to achieve each success factor. Conversely, you are also trying to discover whether the wrong things are being avoided.

Instead of thinking up these questions on the spot, I prefer using a predeveloped questionnaire. Besides the reusability and consistency of the questionnaire, another advantage of this approach is that it allows me to conduct either an interview or a survey, depending on which is most appropriate for the situation at hand.

As an example, let's take the most common CSF, correctness. Your goal is to assess the ability of the project to deliver a correct system or application. Therefore, the questions you ask people should directly pertain to this factor. Examples include:

  • Are user requirements defined in writing for all features?
  • Are reviews of requirements performed consistently?
  • Is unit testing performed consistently? 

You will notice that these are all yes-or-no questions. That's because I like to keep the assessment simple. You may find that a one-to-six scale works better for you, especially if the answers aren't so well defined in your situation.

You'll also notice that some of these questions aren't answerable early in the project. That's why you will need to revise your questions, depending on when in the project you ask them.

Quantifying the Responses
Although you could get some value from just asking the questions, I like to make this assessment a dashboard item. To do this, you need to quantify the questionnaire results.

First, assign a weighting factor to each question. I use a range from one to five, where one is the least important question and five is the most important.

Then, when the questionnaire has been completed, take the score for each question and multiply it by the weighting factor. A "yes" response gets a score of one; a "no" response gets a score of zero.

Finally, sum the multiplied scores and divide by the total possible score to get a percentage score for the critical success factor. For example:

Question                                                        Score   Weight   Weighted Score
1. Are user requirements defined in writing for all features?     1       5            5
2. Are reviews of requirements performed consistently?            0       4            0
3. Is unit testing performed consistently?                        1       3            3
Total                                                             2      12            8

Eight divided by twelve yields a score of 67 percent.
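To make the arithmetic concrete, here is a minimal Python sketch of this calculation. The questions and weights come from the example table above; the variable names are purely illustrative.

```python
# A minimal sketch of the weighted scoring described above.
# Questions and weights come from the example table; names are illustrative.
questions = [
    ("Are user requirements defined in writing for all features?", 5),
    ("Are reviews of requirements performed consistently?", 4),
    ("Is unit testing performed consistently?", 3),
]
answers = [1, 0, 1]  # 1 = "yes", 0 = "no"

weighted = sum(a * w for a, (_, w) in zip(answers, questions))
possible = sum(w for _, w in questions)
print(f"CSF score: {weighted / possible:.0%}")  # -> CSF score: 67%
```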

Now, continue this process for all the CSFs you wish to assess. If you use a survey approach where several people are completing the questionnaire for the same CSF, you can average the final percentage score.
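Averaging the survey responses is just as simple. The per-respondent percentages below are hypothetical:

```python
from statistics import mean

respondent_scores = [67, 58, 75, 60]  # hypothetical percentages for one CSF
print(f"Average CSF score: {mean(respondent_scores):.0f}%")  # -> 65%
```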

Getting Visual

To present the assessment results in an easy-to-understand fashion, I use Kiviat charts. Some people call this chart a radar chart.

Each CSF is shown on a separate spoke, with the percentage scores shown on concentric circles, as shown in figure 1.

Figure 1: A Sample Kiviat Chart

The beauty of the Kiviat chart is that it immediately reveals two things: 1) the degree of coverage, and 2) strengths and weaknesses. In this example, the coverage isn't all that great, and we see two major weak points: security and compatibility.
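If you'd rather generate the chart than draw it by hand, here is one way to do it, assuming Python with matplotlib. The CSF names and percentage scores are hypothetical, chosen to echo the weak points mentioned above.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical CSF percentage scores; security and compatibility are weak.
csf_scores = {
    "Correctness": 67,
    "Usability": 80,
    "Performance": 72,
    "Security": 35,
    "Compatibility": 40,
}

labels = list(csf_scores)
values = list(csf_scores.values())
# One spoke per CSF, evenly spaced around the circle; repeat the first
# point at the end so the polygon closes.
angles = [i / len(labels) * 2 * math.pi for i in range(len(labels))]
values += values[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 100)  # percentage scale on the concentric rings
ax.set_title("Sample Kiviat (radar) chart")
plt.show()
```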

User Comments

Kevin Robertson

Nice job Randy; this is succinct and usable. Risk management made simple!

May 17, 2007 - 1:59pm

About the author

Randall W. Rice, CTFL

Randall Rice is a practitioner, trainer, and consultant in the field of software testing and software quality assurance. He is co-author of Surviving the Top Ten Challenges of Software Testing and the forthcoming book Testing Dirty Systems. Randy is a regular speaker at STAREAST, STARWEST, and EuroStar conferences. You can reach him at rrice@riceconsulting.com or by visiting www.riceconsulting.com.
