be either combined so that they are at least a week, or split out into sub-features which fit the sizing guidelines. Then, when you hear that 5 of 50 features are incomplete, you know roughly where you stand. Another example: a design problem can manifest itself in dozens of ways. Don't count the dozens of ways as problems; count just the root problem. Otherwise it will look like you're fixing a whole host of problems with one simple change. Good testers should know how to recognize problems stemming from the same root. If not, they should be consulting directly with design, or else the problems they raise should be screened before entering the "design" problem domain.
My Favorite CM Metrics
I like to look at trends in my CM world. Good tools help. Every so often I'll spend a few minutes issuing a bunch of queries just to get a feel for the landscape, to get my finger on the pulse. It's usually quite interesting. Some of the queries are useful to look at regularly (i.e., over time) or across subsystems. These are the metrics: they show a trend over time or across some other dimension, such as subsystem or build. Make sure you have tools that allow you to do this. If you don't, you won't do much learning and your processes won't improve adequately.
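To make that concrete, here's a minimal sketch, in Python, of the kind of query I mean: counting problem reports per month per subsystem. The record fields and sample data are hypothetical; in practice the rows would come from your CM tool's repository, not an in-memory list.

    # Sketch: count problem reports per month per subsystem to expose trends.
    # The record layout ("opened", "subsystem") is an assumption; a real CM
    # tool would supply these rows from its own repository.
    from collections import Counter

    records = [
        {"id": 101, "opened": "2004-01-14", "subsystem": "kernel"},
        {"id": 102, "opened": "2004-01-20", "subsystem": "gui"},
        {"id": 103, "opened": "2004-02-03", "subsystem": "kernel"},
    ]

    # Group by (month, subsystem); the trend shows up along the month axis.
    counts = Counter((r["opened"][:7], r["subsystem"]) for r in records)
    for (month, subsystem), n in sorted(counts.items()):
        print(f"{month}  {subsystem:<8} {n}")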
I've grouped some of my favorite CM metrics below by functional area.
Problem Arrival and Fix Rates - These are good for seeing whether we're staying ahead of the game. Usually they are reported separately for each development stream. Far more significant to me is the stream-to-stream comparison. For example, we see the peaks and valleys in problem arrival rates in one stream and map them to events such as verification test runs, field trials, etc. When the next stream comes along, we see a similar pattern, but this time we're more confident in predicting when each peak will settle back down.
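As a sketch of how these rates might be tabulated per stream and per week (the problem-record fields here are assumptions, not any particular tool's schema):

    # Sketch: weekly problem arrival and fix counts per development stream.
    # The record fields ("stream", "opened", "fixed") are hypothetical.
    from collections import defaultdict
    from datetime import date

    problems = [
        {"stream": "R2", "opened": "2004-03-01", "fixed": "2004-03-09"},
        {"stream": "R2", "opened": "2004-03-03", "fixed": None},
        {"stream": "R3", "opened": "2004-03-04", "fixed": "2004-03-05"},
    ]

    def week(iso_day):
        y, w, _ = date.fromisoformat(iso_day).isocalendar()
        return f"{y}-W{w:02}"

    arrived = defaultdict(int)
    fixed = defaultdict(int)
    for p in problems:
        arrived[(p["stream"], week(p["opened"]))] += 1
        if p["fixed"]:
            fixed[(p["stream"], week(p["fixed"]))] += 1

    # Print the two rates side by side; peaks in "arrived" map to events
    # such as verification runs or field trials.
    for key in sorted(set(arrived) | set(fixed)):
        stream, wk = key
        print(f"{stream} {wk}: arrived={arrived[key]} fixed={fixed[key]}")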
# Problems Fixed per Build - Typically we see a fairly constant failure rate for problem fixes (until we change our process to improve that rate). This number tells us how bumpy the build is expected to be until we get the failed fixes turned around. It is also a good indicator of the overall stability of a release stream.
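Here's a minimal sketch of that failure-rate calculation, assuming each fix record carries the build it went into and a flag saying whether it later had to be re-fixed; both fields are hypothetical:

    # Sketch: per-build fix failure rate. "failed" marks fixes that later
    # had to be re-fixed; the field names are assumptions.
    from collections import defaultdict

    fix_records = [
        {"build": "b41", "problem": 1203, "failed": False},
        {"build": "b41", "problem": 1207, "failed": True},
        {"build": "b42", "problem": 1210, "failed": False},
    ]

    totals = defaultdict(int)
    failures = defaultdict(int)
    for f in fix_records:
        totals[f["build"]] += 1
        if f["failed"]:
            failures[f["build"]] += 1

    for build in sorted(totals):
        rate = failures[build] / totals[build]
        print(f"{build}: {totals[build]} fixes, {failures[build]} failed ({rate:.0%})")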
# Problems Fixed and # Features Implemented per Release - This one, again, needs to be compared on a stream-to-stream basis. It tells us how complex a release is likely to be: how much training, documentation and other resources will be required.
Outstanding Problem Duration by Priority - This is a process monitoring metric. Are problems being solved within the periods specified by their priority? (At least, that's how we use internal priorities.)
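To illustrate, a sketch that flags open problems exceeding their priority's target period. The target periods and field names are assumptions for the example, not a standard:

    # Sketch: flag outstanding problems older than their priority's target.
    # The target periods (in days) and record fields are assumptions.
    from datetime import date

    targets = {1: 2, 2: 7, 3: 30}   # priority -> max days open
    today = date(2004, 6, 1)

    open_problems = [
        {"id": 88, "priority": 1, "opened": "2004-05-28"},
        {"id": 91, "priority": 3, "opened": "2004-05-20"},
    ]

    for p in open_problems:
        age = (today - date.fromisoformat(p["opened"])).days
        status = "OVERDUE" if age > targets[p["priority"]] else "ok"
        print(f"problem {p['id']} (pri {p['priority']}): open {age} days - {status}")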
Problem Identification by Phase Found - This one lets us know if too many bugs are getting out the door. It also lets us know how effective things like verification and beta testing are.
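A sketch of the tabulation, with made-up phase names and counts:

    # Sketch: distribution of problems by the phase in which they were found.
    # Phase names and counts are made up for illustration.
    from collections import Counter

    found = Counter({"design review": 12, "unit test": 40,
                     "verification": 25, "beta": 6, "field": 3})
    total = sum(found.values())

    # A high "field" share means too many bugs are getting out the door;
    # a high "verification" share shows that phase is pulling its weight.
    for phase, n in found.most_common():
        print(f"{phase:<14} {n:>3}  ({n / total:.0%})")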
Duplicate Problem Frequency - If this gets too high, it means the reporting teams are finding it too hard to identify an existing problem before raising a new one. Perhaps they just need training.
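One way this might be computed is as a monthly duplicate rate over all problem reports; the "duplicate" flag below is an assumed field:

    # Sketch: monthly duplicate rate. The "duplicate" flag is an assumed
    # field, set when a report is closed as a duplicate of an earlier one.
    from collections import defaultdict

    reports = [
        {"month": "2004-04", "duplicate": False},
        {"month": "2004-04", "duplicate": True},
        {"month": "2004-05", "duplicate": False},
    ]

    total = defaultdict(int)
    dups = defaultdict(int)
    for r in reports:
        total[r["month"]] += 1
        if r["duplicate"]:
            dups[r["month"]] += 1

    for month in sorted(total):
        print(f"{month}: {dups[month] / total[month]:.0%} duplicates")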
CM Tool Usage
Multi-site Bandwidth Requirements - Multi-site bandwidth requirements are measured by looking at the transactions to be sent across sites, or at the transactions actually sent over a recent period. There are two measures of bandwidth: average (MB/day) and peak (MB/hour). By looking at what the behaviour has been, we're prepared to predict what will happen when someone decides to load in