Two Measures of Development Effectiveness: Predictability and Optimization

Summary:

Nearly every CIO or VP of R&D is struggling to improve time to market while increasing the number of features delivered within stagnant or shrinking budgets. Two objectives for software development teams that address this need are to improve predictability and to optimize productivity. By combining views of the predictability and productivity of the development activity, the team and its stakeholders can quickly and easily tell if development is on track, if predictability is improving, and if team members are self-aware enough to improve their overall output.

The global recession has strongly impacted the software development industry, including companies that develop software to support their traditional services. Anyone speaking to a development executive can feel these effects. The outcry is universal: “How can I do more with the same resources?” The need to be innovative, competitive, and cost effective has never been stronger than it is today. If necessity is the mother of invention, then the current world economy is the mother of necessity. Nearly every CIO or VP of R&D that I speak with is struggling to improve time to market while increasing the number of features delivered within stagnant or shrinking budgets. Two common objectives of software development teams address this need:

  1. Improve predictability
  2. Optimize productivity

Perhaps a third common objective should be to increase innovation. While I will not address innovation directly in this article, there is an indirect relationship between predictability, productivity, and innovation. The more productive and predictable the team, the more capacity and latitude it has to put into innovative development.

Being able to accurately estimate software deliverables in terms of schedule, scope, and quality is a prized objective for software development teams and management. Any company that relies on software to help drive revenue, either directly or indirectly, needs to be able to trust the estimation capability of its software development group. Business leaders directly correlate revenue projections to software features, so delivering on time with committed scope and quality will provide better budget projections to the company and its stakeholders.

I’ve been involved with large software development companies whose business departments did not trust the development organization, and it was not pretty. Organizations like this suffer from a lot of contention, blame, and general dysfunction.

Once your business stakeholders can count on your commitments, they will start to think that you are not doing enough (how could that be, if you are delivering on time?), and they will quickly focus on getting even more functionality to the end users. Not only is this a significant challenge for most organizations, but methods for improving predictability and productivity over time are not readily available to most technology leaders. I like to use the word "effectiveness" to describe both predictability and productivity: the more productive and predictable a team is, the more effective it is. Effective development organizations can accurately predict their delivery in time, scope, and total quality while continuously finding ways to improve their productivity.

This article is designed to provide specific steps for understanding your development effectiveness. Getting this right will help move your software development group toward being a true business partner if it is not already.

Define Your Business Objectives
The first step to being effective is to define what it means to be effective from a business perspective. Sitting with your business leaders and gaining a deep knowledge of their objectives is critical to meeting their expectations. It has been my experience that the better a development organization understands “why” it is developing a product, the more likely it can effectively deliver the “how.” Being specific with your business leaders is important to effectively communicate the product vision.  

The Pressure is On
In today’s economic climate, many companies focus on reducing cost—or at least understanding the cost of delivering features to clients—so that a general ROI analysis can be done. The best measure of ROI is, of course, a bottom-line dollar return for every dollar invested; however, this is not always practical in the software world. Other firms may have improving current customer satisfaction as their objective.

If this is the case, then their ROI may be measured in other ways, such as customer retention, increased margins within existing accounts, or business process efficiency for internal applications. I recommend providing a mechanism for your business leadership to assign value to their requirements, which allows the business to evaluate its requirements by real or perceived value in addition to priority. The objective is to keep the business stakeholders engaged in the development process rather than throwing requirements over the wall. Something as simple as a relative one-through-ten scale on high-level requirements will typically enhance the business’s interest and engagement throughout the development process.

Taking this step allows the development team to create a business value burn-up chart and communicate the accumulated value points delivered to the stakeholders. A business value burn-up chart can be a reasonable demonstration of delivering business value, which is challenging for many organizations to quantify. Whatever your business goals may be, defining effective delivery with your stakeholders (and the associated value of the delivery) is the first step to improving your software development effectiveness. I am often surprised when development leadership has not taken this step. When we understand and adopt the philosophy of the business, we become more effective business partners. 
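
As a minimal sketch of that idea, the snippet below accumulates value points delivered per sprint and plots them against a committed total as a burn-up chart; the sprint labels, value points, and committed total are invented for illustration.

```python
# Hypothetical example: accumulate business value points delivered per sprint
# and plot them as a burn-up chart against the total committed value.
import matplotlib.pyplot as plt

sprints = ["Sprint 1", "Sprint 2", "Sprint 3", "Sprint 4"]  # illustrative sprint labels
value_delivered = [13, 21, 8, 18]                           # value points accepted in each sprint
total_committed_value = 80                                  # total value points in the release backlog

# Running total of value delivered after each sprint
cumulative_value = []
running = 0
for points in value_delivered:
    running += points
    cumulative_value.append(running)

plt.plot(sprints, cumulative_value, marker="o", label="Value delivered")
plt.axhline(total_committed_value, linestyle="--", label="Committed value")
plt.ylabel("Business value points")
plt.title("Business value burn-up")
plt.legend()
plt.show()
```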

Standardize Output Measures
The second step to improving effectiveness is to standardize your measure of output so that you have a consistent and objective measure of improvement from a baseline. Objectively understanding if the team is improving or regressing is a key objective for mature development teams. There are several different approaches to this. My favorite approach is to use story points of a very limited Fibonacci sequence.

I typically use a scale that tops out at five, with the top number representing the most that a single developer can accomplish in half a sprint (typically one week); I find that one-week stories are the most that a seasoned developer can accurately estimate. So a typical sequence contains the numbers one, two, three, and five. There are many other approaches, such as tee-shirt sizing (S, M, L, XL) or larger Fibonacci sequences, that may work just fine.

The key to making these numbers meaningful is to ensure consistent use of your scale. Creating a clear guideline or worksheet for assigning stories to these point values is often helpful. The guideline should be lightweight and able to identify stories that are more complex than your highest value and that need to be further elaborated upon. Your guidelines could have examples of stories for each category.
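
Such a guideline can be as lightweight as a lookup of example criteria per point value. The sketch below is purely illustrative; the criteria and the helper function are my own assumptions, not a prescribed format.

```python
# Hypothetical story-point guideline: example criteria per point value on a
# limited Fibonacci scale (1, 2, 3, 5). Anything beyond the top value is
# flagged for further elaboration before it can be estimated.
POINT_GUIDELINE = {
    1: "Trivial change, one known code path, no new tests beyond unit level",
    2: "Small feature touching one component, existing patterns apply",
    3: "Feature spanning two components or requiring a new integration test",
    5: "Largest story one developer can finish in half a sprint (one week)",
}

def check_estimate(points: int) -> str:
    """Return the guideline text for an estimate, or flag it for elaboration."""
    if points in POINT_GUIDELINE:
        return POINT_GUIDELINE[points]
    return "Too large or unfamiliar: split the story and elaborate it further"

print(check_estimate(3))
print(check_estimate(8))   # larger than the scale, so it needs elaboration
```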

When encountering a story that has no frame of reference, the only option you have is to do your best to estimate the complexity based on similar work and adjust as you learn. A story point guideline can typically be maintained for only one application or product; despite the common desire to reuse a guideline across several applications, differences in technology complexity and team makeup make that impractical. Additionally, I highly suggest revisiting your estimation guideline as part of each sprint review or, at the very least, as part of your release review, depending on how often you release.

Taking this approach should form the basis of consistent story estimation. As the team matures, these guides become second nature and can be disposed of or kept only for new members. Following this approach will provide a consistent and trusted measure of “gross output.” In other words, we can determine only the “amount” of work delivered. There are other aspects of productivity that need to be taken into account before we can truly measure the team’s “improvement.” For instance, a team may have cut important corners to deliver the stories, or it may not have tested at all; both of these common situations result in significant rework downstream.

The Two Key Metrics
OK, the reality is that my two most critical metrics are really indexes composed of several metrics. 

1. Predictability
As I discussed earlier in this article, predictability is absolutely key to becoming a more trusted business partner. Predictability is also a great way for team members to judge themselves against their commitments and objectives while making meaningful adjustments. The highest performing teams are those that want to measure their performance and use these objective metrics for discussions during regular internal reviews (retrospectives in the agile world). Having a predictability index allows for a simplified view of the team’s progress that is useful for management, as it gains a high degree of transparency into the progress without a lot of complicated metrics.

There are many factors that go into creating more predictable delivery. One of the most useful measures of predictability is scope variance. While velocity will indicate the number of story points delivered, the number of story points missed is rarely captured but directly impacts the team’s ability to deliver on its commitments. The other important group of predictability metrics is quality related. The more time the team spends addressing functional and technical defects, the less time it has for developing features. Capturing the demand from defect identification and remediation will help keep the team on a track of predictable deliveries.

I like to view predictability as a series of variances from our committed deliverables, in conjunction with escaped defects as an indicator of technical (or functional) debt. For an explanation of functional debt, please see this article from the “Agile Insider.” I know what you are saying: “How can we be predictable if we are doing agile?” Well, I strongly feel that it is completely reasonable to say that you can most effectively predict delivery with agile, since empirical evidence throughout the process enables constant adjustment based on progress. All software delivery must have a targeted release date and committed functionality; otherwise, there is no way for the consumers of the application to plan accurately for adopting it. The agile reality is that stakeholders will be able to have their highest priority requirements developed in the given time frame, assuming that there is a strong product owner who is involved in the process.

In software product companies and companies that rely on their software for revenue, there is often a fixed release date. Unless significant trouble is discovered in the last sprint, or during release to production, the schedule is typically not the issue. The more common situation is “rapid de-scoping” prior to release. In order to capture this activity, we track both schedule and scope variance in the number of story points delivered. In order to tell if we are on track during an agile development process, we can track sprint velocity variance from our average velocity. Being consistently below average velocity indicates that we will need to adjust the development or delivery in the coming sprints. In a similar way, tracking sprint burn-down variance from target will give an indicator of issues before the sprint is complete. 
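
To make that concrete, here is a small sketch that compares remaining story points mid-sprint against a straight-line burn-down target and flags a sprint that is falling behind; the sprint length and daily numbers are invented for illustration.

```python
# Hypothetical mid-sprint check: compare remaining story points against a
# straight-line burn-down target to flag trouble before the sprint ends.
committed_points = 30
sprint_days = 10
remaining_by_day = [30, 28, 27, 27, 25, 24]   # actuals through day 5 (illustrative)

for day, remaining in enumerate(remaining_by_day):
    target_remaining = committed_points * (1 - day / sprint_days)  # ideal burn-down line
    variance = remaining - target_remaining                        # > 0 means behind target
    flag = "BEHIND" if variance > 0 else "on track"
    print(f"Day {day}: remaining={remaining:>3}, target={target_remaining:5.1f}, "
          f"variance={variance:+5.1f} ({flag})")
```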

You may choose to create your own measures of predictability, but some of the ones that I like to use are shown in figure 1.

Figure 1: Ness Predictability Index
  • Scope Variance: Story points delivered divided by story points committed. Velocity charts are very helpful, but while velocity gives a measure of development capacity and delivery trends, this variance shows the number of story points carried over from sprint to sprint or missed altogether. Essentially, this is functional debt.
  • Release Velocity Variance: Current velocity divided by average velocity. Similar to acceleration, this indicator gives a sense of the overall pace of the team. If we are slowing down (a variance below 1.0), then there may be trouble ahead.
  • Escaped Defects / Story Point: Also called defect density, this indicator shows whether the team is sacrificing quality for speed or quantity of output.
  • Business Value Variance: If you are capturing business value as defined by your product owner or other business stakeholder, this metric indicates story selection tradeoffs, such as whether the group had to include more low-value stories than expected for technical or other valid reasons.

By averaging the variance metrics, we can create an overall index and then plot the predictability of each release as a trend, as shown in figure 2. Ideally, the trend will be positive and the teams will improve their predictability over time. Downward trends are opportunities for exploration. 

Figure 2: Predictability Index Trend
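
The sketch below shows one way such an index could be computed from the four measures in figure 1. The release numbers are invented, and folding defect density into a zero-to-one quality score is my own normalization choice, not a prescribed formula.

```python
# Hypothetical predictability index for one release, built from the four
# measures in figure 1. Normalization choices here are assumptions: each
# component is scaled so that 1.0 means "exactly as committed/usual".
def predictability_index(points_delivered, points_committed,
                         current_velocity, average_velocity,
                         escaped_defects, value_delivered, value_committed):
    scope_variance = points_delivered / points_committed      # 1.0 = all committed scope delivered
    velocity_variance = current_velocity / average_velocity   # below 1.0 = slowing down
    defect_density = escaped_defects / points_delivered       # escaped defects per story point
    quality_score = 1 / (1 + defect_density)                  # assumption: map density into (0, 1]
    value_variance = value_delivered / value_committed        # 1.0 = committed business value delivered
    components = [scope_variance, velocity_variance, quality_score, value_variance]
    return sum(components) / len(components)                  # simple unweighted average

# Invented release data for illustration
index = predictability_index(points_delivered=92, points_committed=100,
                             current_velocity=31, average_velocity=33,
                             escaped_defects=7, value_delivered=70, value_committed=80)
print(f"Predictability index: {index:.2f}")   # plot one value per release to see the trend
```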

2. Productivity
Predictability alone is a useful metric, but it’s not ideal to be predictably slow. Marrying predictability with productivity measures gives the team and stakeholders a good indication of whether they are improving their delivery or slowing down to be more predictable. The Cumulative Flow Diagram shown in figure 3 is a good chart for measuring productivity.

Figure 3: Cumulative Flow Diagram

The advantage of the Cumulative Flow Diagram is that we can easily see, in one chart, the relative work in progress of each of the major functions of an agile development team, as well as its total throughput. Story points that have been elaborated, developed, tested, etc., are all shown in different colors in the area chart. The slope of the curve can be a good measure of productivity or potential waste if one of the teams is “outpacing” the others.

As the lines diverge, there may be too large a backlog in one area (potential waste). Lines coming together may indicate a potential bottleneck. Flat lines or low slopes on the chart indicate a stalling of productivity in that part of the development process. The slope of the bottom line is a measure of story points delivered, or throughput: the steeper the slope, the more effective the team is at delivering stories. Be sure to limit the number of story point possibilities (we typically use 1, 2, 3, 5) for each story in order to limit the possibility of gaming the system.
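
As a rough sketch of how such a chart can be produced, the snippet below draws a stacked area chart from cumulative story point counts per state; the state names and numbers are illustrative only, not taken from any real project.

```python
# Hypothetical cumulative flow diagram: cumulative story points that have at
# least reached each state by a given sprint, drawn as a stacked area chart.
import matplotlib.pyplot as plt

sprints = [1, 2, 3, 4, 5, 6]
elaborated = [20, 35, 48, 60, 72, 80]
developed  = [10, 22, 34, 47, 58, 70]
tested     = [5, 15, 26, 38, 50, 62]
delivered  = [0, 10, 20, 31, 43, 55]   # bottom line: throughput

# Each band is the points sitting in exactly that state (differences between
# the cumulative lines), stacked from "Delivered" at the bottom upward.
plt.stackplot(sprints,
              delivered,
              [t - d for t, d in zip(tested, delivered)],
              [dv - t for dv, t in zip(developed, tested)],
              [e - dv for e, dv in zip(elaborated, developed)],
              labels=["Delivered", "Tested", "Developed", "Elaborated"])
plt.xlabel("Sprint")
plt.ylabel("Cumulative story points")
plt.title("Cumulative flow")
plt.legend(loc="upper left")
plt.show()
```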

Acceleration is another effective metric for productivity. Essentially, acceleration measures the current velocity against the mean velocity: is the team doing better or worse than usual? Plotting acceleration on a curve will demonstrate the team’s trend. Using sprint and release velocity trend lines as a measure of acceleration is another, perhaps simpler, approach that many companies take. Plotting the average and the standard deviation from the average can show clear trends in output.
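
A small sketch of that calculation, using invented sprint velocities, might look like this; treating one standard deviation as the threshold is just one reasonable choice.

```python
# Hypothetical acceleration check: compare each sprint's velocity with the
# overall mean and note how far it sits from the average.
from statistics import mean, stdev

velocities = [28, 30, 27, 33, 31, 35, 34]   # illustrative sprint velocities

avg = mean(velocities)
sd = stdev(velocities)
print(f"Average velocity: {avg:.1f}, standard deviation: {sd:.1f}")

for sprint, velocity in enumerate(velocities, start=1):
    acceleration = velocity / avg           # above 1.0 means faster than usual
    band = "within 1 sd" if abs(velocity - avg) <= sd else "outside 1 sd"
    print(f"Sprint {sprint}: velocity={velocity}, acceleration={acceleration:.2f} ({band})")
```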

By combining views of predictability and productivity of the development activity, the team and its stakeholders can quickly and easily tell if the development is on track, if predictability is improving, and if team members are self-aware enough to improve their overall output.
