In his CM: the Next Generation series, Joe Farah gives us a glimpse into the trends that CM experts will need to tackle and master based upon industry trends and future technology challenges.
I find it rare to see a project with too many metrics. Quite the opposite is usually the case. The problem is that establishing metrics is a lot more work than identifying what you want to measure. It's hard to collect the data, it's hard to mine the data, and presenting the comparisons and trends that constitute the metrics is no simple feat.
However, as CM tools move more and more into the realm of ALM and beyond, we'll see some changes. As integration across the lifecycle becomes more seamless, the focus will move to dashboards that not only provide standard metrics, but that are easily customized to mine and present the necessary data.
The goal of software metrics is to have a rich collection of data and an easy way of mining that data to establish metrics for the measures deemed important to process, team and product improvement. When you measure something and publish the measurement regularly, improvement happens, because attention is focused on the published results.
However, the interpretation of the measure must be clear. For example, are glaciers falling into the ocean a sign of global warming (ice weakening and breaking off) or of global cooling (expanding ice pushing the edges into the ocean)? A wrong interpretation can be very costly. So a good rule is to understand how to interpret a measure before publishing it.
Looking at the software world, is the number of lines of code generated per month a reasonable measure? I want my team to create a given set of functionality using the minimal number of lines of code, so larger numbers are not necessarily a good thing and may be a bad thing. The amount of code generated is not, on its own, a good measure of productivity. However, averaged over a project, and compared from project to project (using similar programming technology), it is a fair measure of project size and is also useful for overall "bugs per KLOC" metrics.
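To make the "bugs per KLOC" idea concrete, here is a minimal sketch of how such a metric normalizes defect counts by project size so projects of different sizes can be compared on the same scale. The project names and numbers are purely hypothetical, for illustration only:

```python
def bugs_per_kloc(bug_count: int, lines_of_code: int) -> float:
    """Defect density: reported bugs per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return bug_count / (lines_of_code / 1000)

# Two hypothetical projects with the same raw bug count but different sizes.
# The raw count looks identical; the normalized metric tells a different story.
project_a = bugs_per_kloc(bug_count=45, lines_of_code=30_000)   # 1.5 bugs/KLOC
project_b = bugs_per_kloc(bug_count=45, lines_of_code=120_000)  # 0.375 bugs/KLOC
```

Note that the comparison is only meaningful between projects built with similar programming technology, as the article cautions, since different languages and generators produce very different line counts for the same functionality.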
The point here is that you must select metrics that can be clearly interpreted. When you publish these regularly, the team will tend toward improving the metric, because they understand the interpretation and how it ties back to what they're doing.