In keeping with the season, I'll try to keep this month's article on the Light side (both Chanukah and Christmas are Festivals of Light). Not easy to do when talking about metrics. If you're serious about attaining SEI CMM Level 5 certification, or about improving your processes effectively, metrics are critical. Changing processes based on gut feel, or even based on some other organization's best practices, can lead you backwards. Metrics not only let you detect such regressions, but give you the basic data you need to improve your processes.
Why do I Like Metrics?
I enjoy looking at metrics. Even more, I enjoy devising new metrics to deal with specific issues. So, why? Why would I want to spend so much time on non-core activities?
First of all, they're interesting. They teach me things I didn't know. They confirm or reject my suspicions. Sure, Dept. A is delivering more lines of code than Dept. B. But Dept. B is delivering more features. Less code to maintain, too.
Secondly, I agree with the philosophy: What you measure (publicly) improves. Testers not productive enough? Put up a chart of the weekly number of problems uncovered by each tester or test group. People spending too much time surfing the net at work? Put up a chart each week of how much surf time occurs in each dept. during nine-to-five (if you can measure it!).
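The weekly chart above starts with a simple tally. Here's a minimal sketch of that aggregation, assuming a hypothetical problem-report log stored as (tester, week) pairs; the names and data are invented for illustration:

```python
from collections import Counter

# Hypothetical problem-report log: one (tester, ISO week) pair
# per problem uncovered.
reports = [
    ("alice", 48), ("alice", 48), ("bob", 48),
    ("alice", 49), ("bob", 49), ("bob", 49), ("bob", 49),
]

# Tally problems uncovered per tester per week -- the raw data
# behind the posted chart.
weekly = Counter(reports)
for (tester, week), count in sorted(weekly.items()):
    print(f"week {week}: {tester} found {count} problem(s)")
```

In practice the pairs would come from your problem-tracking system rather than a hand-written list, but the tally-then-chart shape stays the same.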
Thirdly, the need to tune processes. The biggest part of tuning a process, for me, is dealing with the most frequent cases and ensuring the process handles these well. Metrics point me in the right direction. The low frequency cases may have as big an effect, but I can deal with low frequency without automation - not so for high frequency.
Why else? Forecasting. Mostly, I need to be able to predict, accurately, when resources will be required and when product will be ready.
How about identifying change - my metrics tell me that something different is happening - that makes me want to isolate the cause. On the flip side, when I change my processes, I can identify the impact of the changes.
It's about process, it's about productivity, it's about accuracy. Metrics are important.
Making Metrics Work
What can you do to make metrics work well? First of all, compare apples to apples. Don't compare a Java line-count metric to a Perl line-count metric (unless you're studying the virtues of different languages).
Secondly, prime the pump. You're starting to pump out metrics regularly - you'll likely make some adjustments to the metrics and then the data will come in. Don't read too much into the first set of measurements. There will be bumps and anomalies along the way. Get samples across a significant part of your axis (usually time). Identify some of the bumps so that you know what to look out for in the future. For example, if 50 new problems were raised in the first week, don't push the panic button and assume the same number will occur in the second and third. If it does, then you have a reasonable measure, and perhaps cause for concern. But if the numbers settle down to 20, identify what caused the 50 and watch out for it in the future.
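The 50-problems-then-20 scenario above amounts to separating one-off bumps from the steady state. A minimal sketch of one crude way to do that, using invented weekly counts and an assumed rule of "flag any week above twice the median":

```python
from statistics import median

# Hypothetical weekly counts of newly raised problems: a
# 50-problem bump in week 1, then the numbers settle near 20.
weekly_new_problems = [50, 22, 19, 21, 18, 20]

# Use the median of all weeks as the baseline, and flag any week
# that exceeds twice that baseline as a bump worth investigating.
baseline = median(weekly_new_problems)
spikes = [(week + 1, count)
          for week, count in enumerate(weekly_new_problems)
          if count > 2 * baseline]
print(f"baseline ~{baseline}, spikes: {spikes}")
```

The median (rather than the mean) keeps the week-1 bump itself from inflating the baseline, which is exactly the priming problem: early samples shouldn't define "normal".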
Thirdly, post your metrics. Make them visible. If there are anomalies, you may find others speaking up to tell you about them before you have the chance to see them yourself. But be careful what you post - don't post a metric you don't want people spending time improving. And use political sense - posting a metric that will create competition is good, but one that will create division is not so good.
Finally, be careful about what you're measuring. For example, don't measure activity/feature checkpoints that vary wildly in size. If 50 of your 53 features are under 2 weeks of effort and 3 are over 2 years of effort, your metrics may give some false indications. Set some guidelines for feature sizes. For example, state that they must be of a size between 1 week and 2 months, and if not, they should
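The 50-small, 3-huge example shows why wildly varying feature sizes mislead. A quick sketch with invented effort figures (1.5 weeks for the small features, roughly 104 weeks for each 2-year one) makes the distortion concrete:

```python
from statistics import mean, median

# Hypothetical feature-effort data, in weeks: 50 small features
# and 3 two-year outliers (2 years is about 104 weeks).
efforts = [1.5] * 50 + [104] * 3

# The three outliers drag the mean far above the typical
# feature, while the median stays with the bulk of the data.
print(f"mean effort:   {mean(efforts):.1f} weeks")
print(f"median effort: {median(efforts):.1f} weeks")
```

A schedule forecast built on the mean would assume every feature takes several weeks, when all but three take under two; size guidelines narrow the spread so averages become trustworthy again.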