Metrics and Process Maturity


Finally, you need to be able to specify what you want to look at. Perhaps your queries are grouped by related areas: for example, metrics shown across development streams, or metrics dealing with the administration of your CM tool (or your environment). Here's what I would recommend (a short sketch follows the list).

·         Specify the domain of your metric: what set of objects are you looking at?

·         Specify the attribute you want to measure

·         Specify the granularity of your measurement

·         Group your metrics: tag them if possible so that metrics may appear in more than one grouping
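To make this concrete, here is a minimal sketch of what such a specification might look like in Python. The names here (MetricSpec, its fields, and the sample metrics) are illustrative assumptions, not the interface of any particular CM tool.

```python
from dataclasses import dataclass, field

@dataclass
class MetricSpec:
    """A hypothetical one-entry-per-metric specification."""
    name: str
    domain: str        # what set of objects: e.g. "open problem reports"
    attribute: str     # what to measure: e.g. "age in days", "lines changed"
    granularity: str   # e.g. "per week", "per stream", "per developer"
    tags: set = field(default_factory=set)  # groupings; a metric may carry several

METRICS = [
    MetricSpec("open-problem-age", "open problem reports", "age in days",
               "per stream", {"quality", "streams"}),
    MetricSpec("checkin-volume", "file revisions", "count",
               "per week", {"activity", "cm-administration"}),
]

def metrics_in_group(tag):
    """Return every metric tagged with the given grouping."""
    return [m for m in METRICS if tag in m.tags]
```

The point is that each metric is a single declarative entry, and tags let the same metric show up in more than one grouping.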

The good thing about a CM repository is that it has all of the history in it (I hope!). You shouldn't have to worry about saving your metrics, because you can reproduce them at any time. OK, some metrics may take longer to produce than others: if you have tens of thousands of files and you're computing function point metrics, the results are not going to pop up in point-and-click response time, at least not until we go through another ten years of technology advances. So you may want to save some of your metrics.
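For the metrics worth saving, a simple cache keyed by metric name and the context it was computed against is enough. The sketch below assumes a hypothetical compute function and a local cache directory, purely for illustration.

```python
import json, os

CACHE_DIR = "metric-cache"  # hypothetical location for saved results

def cached_metric(name, context_id, compute):
    """Recompute cheap metrics on demand, but save the expensive ones.

    'compute' is whatever function derives the value from the repository.
    The cache is keyed by metric name and context (baseline, stream, ...),
    so as long as history is immutable a saved value never goes stale.
    """
    path = os.path.join(CACHE_DIR, f"{name}-{context_id}.json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    value = compute()
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(value, f)
    return value
```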

Ideally, you want to be able to set an arbitrary context view and compute many of your metrics based on that. You want to be able to point to a specific stream and compute metrics for it, to a specific build and compute metrics for it, to a part of the organization and compute metrics for it. This is data mining, but in a revision-rich repository containing, hopefully, everything you've ever wanted to know about metrics but were afraid to ask.
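As a rough sketch of that idea, the snippet below computes the same metric against different contexts. The selector functions and record fields are assumptions standing in for whatever queries your repository actually provides.

```python
def average(records, attribute):
    """Compute a simple metric (the mean of an attribute) over any set of records."""
    values = [r[attribute] for r in records]
    return sum(values) / len(values) if values else 0.0

# Selector functions stand in for whatever context queries your repository provides.
def in_stream(records, stream):
    return [r for r in records if r.get("stream") == stream]

def in_build(records, build):
    return [r for r in records if r.get("build") == build]

def in_department(records, dept):
    return [r for r in records if r.get("department") == dept]

changes = [
    {"stream": "5.2", "build": "B104", "department": "xyz", "age_days": 12},
    {"stream": "5.2", "build": "B105", "department": "abc", "age_days": 3},
    {"stream": "6.0", "build": "B201", "department": "xyz", "age_days": 7},
]

print(average(in_stream(changes, "5.2"), "age_days"))      # metric for a stream
print(average(in_build(changes, "B201"), "age_days"))      # same metric for a build
print(average(in_department(changes, "xyz"), "age_days"))  # same metric for a department
```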

Too much data? The next level of optimization is the ability to look through your metrics and point out the concerns. This doesn't have to be a 22nd-century artificial intelligence capability. It can be a simple set of checks, hard-coded for each metric, looking for large deviations from the norm and giving metric-specific interpretations of those deviations. For example: department xyz has an actual-to-projected effort ratio that is three times that of the rest of the project.

If your metrics are easy to specify, you won't mind spending a bit of effort to automatically interpret and highlight the results. If your metrics are hard to specify and implement, you'll likely never get past the specification step. If most of your metrics are easy and a few are really difficult, focus on the easy ones first. Again, the value of a good tool cannot be overstated. I generally have to specify a single line to add a new metric to my list - granted, not something like computing function points, but more like the metrics cited above. If I had to write a program (or, even worse, steal somebody's time to do it), I'd think twice before extending my metrics.
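A hard-coded check like the effort-ratio example above might look something like the sketch below. The data layout and threshold are assumptions, chosen only to mirror the example.

```python
def flag_effort_outliers(effort, threshold=3.0):
    """Flag departments whose actual/projected effort ratio is far from the norm.

    'effort' maps department -> (actual, projected).  The check mirrors the
    example above: highlight any department whose ratio is at least
    'threshold' times the ratio of everyone else combined.
    """
    concerns = []
    for dept, (actual, projected) in effort.items():
        others = [(a, p) for d, (a, p) in effort.items() if d != dept]
        rest_actual = sum(a for a, _ in others)
        rest_projected = sum(p for _, p in others)
        if projected == 0 or rest_projected == 0:
            continue
        ratio = actual / projected
        rest_ratio = rest_actual / rest_projected
        if rest_ratio > 0 and ratio >= threshold * rest_ratio:
            concerns.append(f"{dept}: effort ratio {ratio:.1f} is "
                            f"{ratio / rest_ratio:.1f}x the rest of the project")
    return concerns

print(flag_effort_outliers({"xyz": (900, 300), "abc": (400, 410), "def": (350, 340)}))
```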

About the author

Joe Farah

Joe Farah is the President and CEO of Neuma Technology and is a regular contributor to the CM Journal. Prior to co-founding Neuma in 1990 and directing the development of CM+, Joe was Director of Software Architecture and Technology at Mitel, and in the 1970s a Development Manager at Nortel (Bell-Northern Research), where he developed the Program Library System (PLS), still heavily in use by Nortel's largest projects. A software developer since the late 1960s, Joe holds a B.A.Sc. degree in Engineering Science from the University of Toronto. You can contact Joe at farah@neuma.com.
