4 Balanced Metrics for Tracking Agile Teams

Whatever your feelings on metrics, organizations will expect them for your team. You don't want to measure only one aspect to the detriment of other information, but you also don't want to measure too many things and scatter your team's focus. Here are four metrics that balance each other out and help gauge an agile team's productivity, work quality, predictability, and health.

There are as many ways to measure a project as there are to build it. Unfortunately, many of these metrics are useless. Eric Ries calls them "vanity metrics" because they look good and make you feel good but offer little in the way of actionable value.

Whatever your feelings on metrics, at the end of the day, organizations will expect and want them. With the yardstick of "helping the team to self-reflect and improve" and the caveat "your mileage may vary," here are my four go-to metrics for an agile team, along with some experiences on their effectiveness.

Four Interlocking Team Measures

Why four? If you measure only one key metric, it is easy to get tunnel vision. Whether it's the team focusing on just making that metric better (often by gaming the system) or management using the single measure to drive all decisions, you can end up with a product or organization that looks good but is really driving off a cliff.

Likewise, with as many as ten metrics, it is more likely that different parts of the organization will focus on different ones, driving a wedge into efforts to align the organization. Humans handle three to five concepts at a time best, so four main metrics makes for an optimal dashboard.

Cycle Time
Cycle time is your direct connection to productivity. The shorter the cycle time, the more things are getting done in a given timebox.

You measure this from when work starts to when the feature is done. In software terms, I tend to think of this as "hands on keyboard" time. Measuring cycle time is best done automatically via your agile lifecycle tool of choice, though even measuring with a physical task board will give you useful data.
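In practice, that measurement is just the elapsed time between two timestamps your task board already records. A minimal sketch in Python (the ISO timestamp format and the "3 days" example are illustrative, not from any particular tool):

```python
from datetime import datetime

def cycle_time_days(started: str, finished: str) -> float:
    """Elapsed days between when work started and when the feature was done."""
    start = datetime.fromisoformat(started)
    done = datetime.fromisoformat(finished)
    return (done - start).total_seconds() / 86400  # seconds per day

# Hypothetical story: picked up Monday morning, done Thursday afternoon
print(cycle_time_days("2017-05-01T09:00", "2017-05-04T17:00"))  # about 3.33 days
```

Averaging this value across all stories in a sprint gives the team a trend line: falling cycle time means more work flowing through the same timebox.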

Escaped Defects
This measure is the connection between customer satisfaction and the team. The lower the defect rate, the more satisfied the customer is likely to be with the product. With a high escaped defect rate, even the most awesome product is going to have a lot of unsatisfied customers.

You measure this by the number of problems (bugs, defects, etc.) found in the product once it has been delivered to the user. Until a story is done, it is still in process, so focusing on the story's execution is preferable to tracking in-progress defects.
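To make the count comparable across sprints of different sizes, it helps to normalize escaped defects against how much was delivered. A small sketch of one way to do that (the per-story normalization is my assumption, not prescribed by the article):

```python
def escaped_defect_rate(escaped_defects: int, stories_delivered: int) -> float:
    """Defects reported after delivery, per delivered story."""
    if stories_delivered == 0:
        return 0.0  # nothing shipped, nothing could escape
    return escaped_defects / stories_delivered

# Hypothetical sprint: 12 stories shipped, 3 bugs reported by users afterward
print(escaped_defect_rate(3, 12))  # 0.25 defects per story
```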

Planned-to-Done Ratio
This metric is a way to measure predictability. If a team commits to thirty stories and only delivers nine, the product owner has about a 30 percent chance of getting what they want. If, on the other hand, the team commits to ten stories and delivers nine, the PO has roughly a 90 percent chance of getting what they want.

Measuring is a simple exercise of documenting how much work the team commits to doing at the start of the sprint versus how much they have completed at the end of the sprint.
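The percentages from the example above fall straight out of a one-line ratio. A sketch, assuming you count committed and completed stories per sprint:

```python
def planned_to_done(committed: int, completed: int) -> float:
    """Percentage of the sprint commitment the team actually delivered."""
    if committed == 0:
        return 0.0
    return 100 * completed / committed

print(planned_to_done(30, 9))  # 30.0 -- overcommitted, unpredictable
print(planned_to_done(10, 9))  # 90.0 -- the PO can plan around this team
```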

Happiness
This is the team "health" metric. It creates awareness that puts the other three metrics into better context. If all the other metrics look perfect but happiness is low, the team is probably getting burned out, fast.

Build this into your sprint retrospectives. Open every retrospective with the team writing their happiness scores on whatever scale you choose. Track these numbers from sprint to sprint to see the trends.  
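Tracking the trend can be as simple as averaging each retrospective's scores. A sketch, assuming a 1-to-5 scale (the scale and sample scores are illustrative; use whatever scale your team chose):

```python
from statistics import mean

def happiness_trend(scores_by_sprint: list[list[int]]) -> list[float]:
    """Average team happiness per sprint, oldest sprint first."""
    return [round(mean(sprint), 2) for sprint in scores_by_sprint]

# Hypothetical retro scores from a four-person team over three sprints
print(happiness_trend([[4, 5, 3, 4], [3, 3, 4, 3], [2, 3, 2, 3]]))
# [4.0, 3.25, 2.5] -- a falling trend worth raising in the retrospective
```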


User Comments

Todd Scorza

Hey Joel,

Great article; your metrics will be put into use immediately. I have one question regarding the expected trend and prediction information on the team metrics. I see the expected trend averaging the last two completed story points, and I saw the note to average the worst three completed story points for a worst-case prediction. However, the expected prediction and worst-case trend seem to be non-functioning equations. Are these useful, and if so, how can they be put to use?


May 6, 2017 - 10:51am
Joel Bancroft-Connors



My apologies for totally missing this question. 


You've found a place where I'm in the process of trying a new formula. Previously I used a rolling average of the last three sprints. I'm trying to move to a mean average using the best and worst sprints. Right now the formula is not working in the Google Doc. My apologies for the confusion.

July 19, 2017 - 2:49pm
Bill Donaldson

Great article! Hopefully all teams will have a set of metrics, but they must be visible to be useful. See my post on Creating a Culture Change with Visual Management.


I've found the Planned-to-Done metric can be helpful for teams who aren't meeting their commitments. However, this metric can introduce a lot of anxiety and unnecessary introspection, especially when the problem is outside the team's control. I'd recommend an alternative: the SAFe Program Predictability Measure. The benefit of this measure is that it's a ratio of the business value delivered, not team output. To get this, the business/PO is involved at the start to set value and during the demo to assess value. Then the larger team can have discussions about the internal and external reasons for not meeting expectations.

July 19, 2017 - 2:18pm
Joel Bancroft-Connors


Interesting idea; I can definitely see value in this once the organization is able to apply business value to its stories. Do you see it being valuable in the early stages, when the product owner may not even be fully engaged and is still just doing rank-order prioritization?



July 19, 2017 - 2:54pm
