Building a Meaningful Metric Mousetrap

Summary:

Metrics provide data points that can both benefit and endanger an organization. Metrics can be used positively to build a better organization, or negatively to punish organizations and the people in them. Many times, those who use metrics negatively do so purposefully; other times, they are not aware of how they are using them. This is why it is important to have a metrics culture that applies metrics in a positive manner, provides an understanding of each metric, and then actually utilizes metrics to manage the organization.

 

Oftentimes an organization collects numerous measures and generates numerous metrics, and in many cases it is unclear why certain metrics are being collected. In the long run, most of these metrics are not actually used to manage the organization; they become ignored and are deemed worthless. It is the combination of ignored metrics and the negative use of metrics that erodes the perceived value of metrics and, therefore, leads to a very negative metrics culture.

Constructing a Value-Added Metric
There are many challenges to establishing a positive metrics culture. One of the primary objectives in building such a culture is to ensure that metrics solve a real problem and are designed to be value-added.

It is important to gather input prior to establishing metrics. For input, consider the following: uncover problem areas in the organization, the application team, or the project, and identify the value-added metrics in use by other organizations. The possibility of effectively using metrics to alleviate the pain felt by groups in certain areas can be a big motivator for implementing metrics.

The key building blocks of value-added metrics are: 1) ensuring the organization understands the benefit of the metric and how it can be used to improve the organization; 2) understanding the level of effort to collect the metric (both to establish and to maintain it); and 3) providing clarity on who really benefits from and will use the metric. By assessing the benefit versus the effort, a value-rating can be assigned to each metric. Those with high value-ratings can be implemented. Once a metric is in place, monitoring should occur to verify that the metric is actually being used to manage the problem area or organizational change. If not, discard the metric.

For the sake of this discussion, let us say the application team has identified that build times are a major problem and that they impact delivery times to test and, worse, to production. The team is unclear about how long a build should take, and build times are perceived to vary drastically in duration. Let's explore the key building blocks in more detail by proposing a metric called "time to build the product" within the software configuration management (SCM) field.

Understand the Benefit of the Metric
When considering the benefit of the metric, first take a moment to describe it. In the case of "time to build the product," the description is: a metric that determines the average duration to build (i.e., initiate, compile, package, and smoke test) a product from start to finish.

Next, consider what areas the metric can be used for (i.e., the benefits). This can be a consensus-driven discussion where the problem it solves or the opportunities it opens up are considered. Essentially, this drives the "why" (why would we want to establish this metric?). Several examples of how a "time to build the product" metric can benefit and improve an application team's effort are:

·         Identifying potential problems when there are large deviations from average build times or when build times get longer and longer.

·         Setting customer expectations for scheduling and planning. If it always takes two hours to build an application and management asks for a build in 30 minutes, it's much better to have hard numbers ready to explain why that's not possible.

·         Measuring expected build time gains (or losses) caused by implementing:

1.       New functionality in applications

2.       System change/improvements

3.       Build process changes/improvements

Determine the Effort to Collect the Metric
Now that folks are aware of the benefits of the metric and how it can be used for improvement, it is time to consider the level of effort needed to collect it. This is critical, as there are many cases where the effort to collect the metric outweighs the perceived benefit. If this is the case, then it is not worth proceeding with the metric. Total effort should include the setup effort and the ongoing maintenance effort. Here is an example of considerations in setting up a "time to build the product" metric:

·         Define the process

o    When the build is initiated, record a time stamp. At the start of each step in the build process (e.g., initiate, compile, package, and smoke test), record a time stamp, and at the end of the build process record a final time stamp. Write the data to a build log.

o    Determine average times for each step in the build process.

·         Automate the process (see the sketch below)

o    Script the process so it can run automatically.

o    Send output to a log (average times for each step).

·         Test the automated process

o    Adjust as needed.
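To make the automation step concrete, here is a minimal sketch of what such a build-timing script could look like. The step names, placeholder commands, and log file name are assumptions for illustration only; a real script would invoke the project's actual build, package, and smoke-test commands.

#!/usr/bin/env python3
"""Minimal sketch of a build-timing script (illustrative assumptions only)."""
import subprocess
import time
from datetime import datetime

BUILD_LOG = "build_times.log"  # hypothetical log file name

# Each build step paired with a placeholder command; substitute real commands.
STEPS = [
    ("initiate",   ["echo", "initiating build"]),
    ("compile",    ["echo", "compiling sources"]),
    ("package",    ["echo", "packaging artifacts"]),
    ("smoke_test", ["echo", "running smoke tests"]),
]

def run_build():
    """Run each step, time-stamp it, and append the durations to the build log."""
    build_start = time.time()
    durations = {}
    for name, command in STEPS:
        step_start = time.time()
        subprocess.run(command, check=True)  # replace with the real build command
        durations[name] = time.time() - step_start
    total = time.time() - build_start

    # One log line per build, so per-step averages can be computed later.
    with open(BUILD_LOG, "a") as log:
        stamp = datetime.now().isoformat(timespec="seconds")
        steps_text = ", ".join(f"{name}={secs:.1f}s" for name, secs in durations.items())
        log.write(f"{stamp} total={total:.1f}s ({steps_text})\n")
    return total

if __name__ == "__main__":
    print(f"Build completed in {run_build():.1f} seconds")

Averaging the total values over a number of builds gives the "time to build the product" figure, while the per-step values support the step-level averages described above.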

Identify Who Benefits Most from the Metric
It is important to identify who benefits most from the metric. Engaging with these people will help you understand if and how they may use it. Here is an example of those who may benefit most from the "time to build the product" metric:

·         SCM engineers for build time monitoring, scheduling, and for having a baseline of data to improve build times

·         Development for scheduling and awareness of build time

·         Project Management for scheduling and awareness of build time

Assess Value of the Metric
Is the metric of value to the application team or organization? This is a critical step. Once the benefits of the metric are known, the effort to construct and maintain it is identified, and it is clear who will benefit from it, a comparison of the benefits versus the effort should occur. It should include input from those who will use the metric.

Typically, the effort to produce a metric is measured in hours or days. The tricky part is establishing meaningful values for the benefit of a metric, which can be subjective from person to person. A tip is to consider how many hours in a given year it is worth spending to set up and maintain a metric. Is 50 hours per year reasonable, or 100? This value provides a basis to measure effort against. Then ask the people who may benefit from the metric whether the benefit is worth the effort. Using the "time to build the product" metric as an example, let's compare the perceived benefit versus the effort.

Perceived Benefit
Working with the group that would benefit most from the metric (from the "Identify Who Benefits Most from the Metric" section), it was determined that 100 hours was a reasonable amount of time to set up and maintain a metric of this nature. Using 0-100 as the range (with 100 being the highest), ask each person to rate the perceived benefit of the "time to build the product" metric:

 


Role




Perceived Benefit




SCM Engineer #1




80




SCM Engineer #2




90




Lead Developer
(representing Development)




60




Project Manager




70




Average




75



 

Effort

Determine the effort to establish the "time to build the product" metric. This approach looks at total effort and includes the set-up time and the on-going effort to maintain the metric. Calculate the hours:

 


Task                                         Hours
Set-up Effort (define and automate process)  25
On-going Effort (monthly)                    12 (1 hour per month)
Effort per Year                              37

Value
Determining value is a critical factor. If the perceived benefit exceeds the total effort hours in a year (i.e., the value-rating is greater than 1 when perceived benefit is divided by effort hours), then the metric may be considered a value-add to the organization or application. In this case, dividing the perceived benefit of 75 by the total effort of 37 hours per year gives a value-rating of 2.03 (rounded). When using this approach, you will often find that the effort is greater than the perceived benefit (i.e., the value-rating is less than 1, and the metric should therefore not be pursued).
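As a simple illustration of this arithmetic, the following sketch averages the stakeholders' perceived-benefit scores and divides by the yearly effort hours. The function name and the numbers simply restate the example above; they are not part of any standard tool.

def value_rating(perceived_benefits, setup_hours, monthly_hours):
    """Average the stakeholder benefit scores (0-100) and divide by yearly effort hours."""
    average_benefit = sum(perceived_benefits) / len(perceived_benefits)
    yearly_effort = setup_hours + 12 * monthly_hours  # set-up plus on-going effort
    return average_benefit / yearly_effort

# "Time to build the product": scores from the two SCM engineers, the lead
# developer, and the project manager; 25 set-up hours; 1 maintenance hour per month.
rating = value_rating([80, 90, 60, 70], setup_hours=25, monthly_hours=1)
print(f"Value-rating: {rating:.2f}")  # prints: Value-rating: 2.03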

Comparing a Potential Metric
When embarking on a metrics program, it is important to look at a number of potential metrics in order to determine their value-ratings in relation to one another. As stated in the earlier section, gather input by a) uncovering problem areas in the organization, the application team, or the project and b) identifying what value-added metrics are used in other organizations. Discuss each metric with those benefiting from it and ensure they would, in fact, use it for improvements.

For those potential metrics that remain, go through the steps described above (i.e., understand the benefit of the metric, determine the effort to collect it, identify who benefits most from it, and assess its value). Then compare the potential metrics against one another. Here is an example of a metrics comparison table that uses the value-rating as its gauge.

Note: it includes the "time to build the product" metric discussed above:

 


Potential Metric Name        Perceived Benefit   Effort   Value-Rating
Time to Build the Product           75             37         2.03
Build Errors                        85             45         1.89
Code Volatility                     45             60         0.75
Change Control Volatility           55             45         1.22

In this case, we see four potential metrics. Of the four, the SCM metrics program may choose to pursue only the "time to build the product" and "build errors" metrics, since they have the highest value-ratings.
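If it helps to automate the comparison, a small sketch along the following lines can rank candidate metrics by value-rating and flag those falling below 1; the names and numbers simply restate the table above.

# Candidate metrics as (name, perceived benefit, yearly effort hours),
# restating the comparison table above.
candidates = [
    ("Time to Build the Product", 75, 37),
    ("Build Errors", 85, 45),
    ("Code Volatility", 45, 60),
    ("Change Control Volatility", 55, 45),
]

# Rank by value-rating (perceived benefit divided by effort), highest first.
for name, benefit, effort in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    rating = benefit / effort
    flag = "" if rating >= 1 else "  <- below 1: communicate the risk"
    print(f"{name:27s} {rating:5.2f}{flag}")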


Note: there will be times when the value-rating is below 1 but the organization still wants to proceed with a metric. In these cases, it is important to communicate the risk of the metric being perceived as not adding value.

Monitoring the Metric
As time goes on, any metric that has been established and is produced on an on-going basis should be revisited periodically. The reasons are twofold. First, it is important to check whether the metric is actually being used; if it is not, it should be discontinued. Second, the value-rating of the metric should be re-evaluated. Is the benefit still perceived to be high as time moves forward? Is the effort more or less than what was initially calculated? In some cases, once a metric has been used and has driven a positive change in the organization, it has done its job and may no longer need to be generated (or may be generated less frequently than monthly).

It is also important to allow the data to accumulate for a few cycles prior to prescribing a target for a metric. Identify the range of values the metric produces over a reasonable time period to understand what "average" is and what a normal variation looks like. Once this is done, a target for improvement can be considered.
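For instance, a baseline for the "time to build the product" metric could be established with a short sketch like this one; the build durations below are hypothetical values standing in for a few cycles of collected data.

import statistics

# Hypothetical build durations (in minutes) collected over a few cycles.
build_minutes = [118, 125, 131, 122, 140, 119, 127]

average = statistics.mean(build_minutes)
spread = statistics.stdev(build_minutes)  # sample standard deviation

print(f"Average build time: {average:.0f} minutes")
print(f"Typical variation:  +/- {spread:.0f} minutes")
# Only once this baseline is understood should an improvement target
# (e.g., reduce the average build time by 20 percent) be considered.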

Summary
Constructing value-added metrics is a must. A metric must be perceived to solve problems or improve the lives of those in the organization or application team. It is important to understand its benefit, determine the effort to collect it, identify who benefits most, and assess its value in the form of a value-rating. Once an individual metric (or set of metrics) is established, it is necessary to monitor it to ensure the value-rating holds and that it is actually used to manage the business. This also keeps the metrics program healthy and dynamic. As old metrics lose their value-rating, either because they have done their job in improving the organization or because they are not being used, new metrics can be considered to solve problems or improve the organization in new ways.

 
