Stop Re-estimating Your Stories for Every Iteration


Many agile practitioners recommend re-estimating stories at the beginning of each iteration to increase accuracy. Adrian Wible, however, argues that re-estimating stories within an iteration planning meeting actually distorts results and decreases predictability. See if you need to rethink your planning procedures.

Many agile practitioners recommend re-estimating stories at the beginning of each iteration to increase accuracy. I disagree with this practice.

It’s worse than a waste of time. This may seem counterintuitive, but I argue that re-estimating stories within an iteration planning meeting actually distorts results and decreases predictability. Stay with me.

First, the team almost always has a greater level of detail and clarity when the story is being slotted into the current iteration, particularly when breaking stories into tasks. This tends, in my experience, to cause estimates to grow.

Next, the team wants to meet commitments, knows velocity is being monitored, and may—perhaps subconsciously—inflate estimates at the point of commitment.

Why is this a big deal?

Let's try an example.

Say team Swiss Watch always delivers 40 points. They’re amazing. Velocity is 40, every iteration. You can count on them.

In each iteration planning meeting, the perfectly prioritized product backlog is trotted out with perfectly “ready” stories at the top, and the team begins peeling off those stories until the iteration is full.

The team re-estimates the stories as they peel them off, of course. When they get to 40 points, they declare the iteration planning done.

You revisit the product backlog and see that the aggregate estimates for those stories were actually 30. You might applaud the team for gaining more accuracy—after all, the original estimates were off by more than 30 percent. In fact, if you go back over the past ten iterations, you see this is a consistent pattern. Thank goodness for re-estimation. Or …

The business is trying to figure out when the rest of the minimum viable product will be done. The business looks at the product backlog and sees 120 points’ worth of stories. How many iterations are left? Simple math, you say. Team Swiss Watch always delivers 40 points, ergo only three more iterations to go. Very nice. Schedule the press conference.

All of a sudden, this historically “predictable” delivery engine stalls. The team takes a whole extra iteration to complete the minimum viable product. Everyone scratches their heads. The press conference is rescheduled.

What happened?

Here’s what happened. Two velocities are at play. The velocity of 40 is against the inflated, iteration-time points. The product backlog, however—the 120 points—is in noninflated product-backlog points. Our simple math was against different scales. It’s as if velocity was measured in inches, but the product backlog was sized in centimeters.

The simple math we should have done was to divide the 120 by 30, not 40. With consistent units, it’s obvious that there were four iterations left. Same simple math, but different results when you use consistent scales.
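The unit mismatch is easy to see in a quick sketch. The numbers below are the article's hypothetical figures for team Swiss Watch; the only step added here is making the conversion between the two scales explicit:

```python
# Two scales are in play, like inches vs. centimeters:
#   - velocity is measured in inflated, iteration-time points
#   - the backlog is sized in original, non-inflated backlog points
velocity_inflated = 40   # points "delivered" per iteration, on the re-estimated scale
velocity_original = 30   # the same stories' worth, in original backlog points
remaining_backlog = 120  # remaining work, sized in original backlog points

# Naive forecast: divides backlog points by inflated velocity (mixed units).
naive_iterations = remaining_backlog / velocity_inflated  # 3.0 -- too optimistic

# Consistent forecast: keep everything in original backlog points.
consistent_iterations = remaining_backlog / velocity_original  # 4.0

print(naive_iterations, consistent_iterations)
```

Same division, different answer: the mixed-unit forecast is short by a full iteration, which is exactly the stall the example describes.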

It’s not just that re-estimation provided no value. It actually eroded value.

If you regularly re-estimate at iteration planning meetings, make a note of the original versus updated estimates. See if they grow. Consider what impact this is having on the accuracy of your release planning.

I can hear you now: "My team's estimates don't inflate. Some go up; some go down." If this is the case, then your aggregate will probably even out, more or less. In the example above, perhaps you go into the meeting, change estimates for every story, and end up with 40 re-estimated points’ worth of work that was originally booked in the backlog at 39. Your velocity is not materially impacted. You are still on track, with roughly three remaining iterations. So, the question is this: What value did that re-estimation provide? None.

Is there a time when re-estimation is helpful?

Yes. I’ve encountered situations where classes of stories with some cross-cutting concern end up being inaccurately estimated. For example, let’s say the team realizes that every time a user story requires a database change, the story ends up taking more effort than expected. In another case, perhaps original estimates were high due to some technical risk. When the risk gets eliminated, those original estimates turn out to be inflated.

When a cross-cutting concern imparts a material difference in estimated effort, it is useful to re-estimate the affected user stories with the new information. Realize that it is still important to embark on this re-estimation against the original estimates, to avoid the distortion illustrated earlier.
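As a sketch of what this aspect-oriented re-estimation might look like in practice: the story data, the `db_change` flag, and the 1.5x correction factor below are all hypothetical, assuming the team has observed that database-change stories take roughly 50 percent more effort than originally estimated. The key point from the article is that the correction is applied to the original estimates, so the backlog stays on one consistent scale:

```python
# Hypothetical backlog: each story keeps its ORIGINAL estimate so the
# correction is applied on the same scale as the rest of the backlog.
backlog = [
    {"name": "story-A", "estimate": 5, "db_change": True},
    {"name": "story-B", "estimate": 3, "db_change": False},
    {"name": "story-C", "estimate": 8, "db_change": True},
]

# Assumed factor, measured from experience: stories touching the
# database have taken ~50% more effort than originally estimated.
DB_CHANGE_FACTOR = 1.5

def reestimate(stories, factor):
    """Apply the cross-cutting correction only to affected stories,
    against their original estimates."""
    return [
        {**s, "estimate": round(s["estimate"] * factor)} if s["db_change"] else s
        for s in stories
    ]

adjusted = reestimate(backlog, DB_CHANGE_FACTOR)
print([s["estimate"] for s in adjusted])  # [8, 3, 12]
```

Only the stories carrying the cross-cutting aspect change; everything else is left alone, so the backlog's units stay consistent with historical velocity.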

Wholesale re-estimation at the beginning of iterations is folly. At best, it is a waste of time; worse, it usually distorts. The only time I recommend re-estimating is when "aspects" of user stories are discovered, in practice, to cause user stories to require more or less effort than expected. We should be using aspect-oriented re-estimation, if at all.

User Comments

Andrew Webster

Good article. What I recommend to teams I'm coaching and training is that they work with the PO to update the product/release backlog at the beginning of every sprint. Re-estimating the sprint backlog is a waste of time (although it can be interesting if done quickly at the *end* of the sprint, to learn something based on reality rather than guesswork).

The learning from the last sprint, the "uncovering," will now help improve the course planning and estimation of the work coming up. If the backlog is well groomed (i.e., high value, low effort near the top), the velocity is still useful to help estimate how much value will be available by when. Having epics estimated in large round numbers provides a "budget" of story points that can be adjusted as more is found out by doing the actual work.

Being "finished" is rarely a case of completing the backlog; it's more often a case of reaching a point where ROI is satisfactory. So re-estimating to improve tracking towards a satisfactory financial outcome is helpful indeed.

November 5, 2015 - 10:57pm
Adrian Wible

Thanks for the feedback. All good points. The work to "update" the product/release backlog is, to me, an act of "grooming" and not the focus of my semi-rant. When it is done is unimportant, as long as any estimation is done with the same level of information as the other work in the backlog. My specific concern is with re-estimation at the beginning of the iteration. Your point about re-estimation at the end of the sprint is, to me, not re-estimation, but simply revisiting the estimates against the effort exerted in order to improve estimation over time. Again, a useful endeavor, and more related to retrospecting than to estimating, I think.

"Finished" is a squirrely term. We use "ready" to assert the notion that a user story is ready to be picked up in an iteration. We use "done" for an iteration; some use "done, done, done." We use deployable (not to be confused with deplorable ;) ). We use deployed. We use released. I find it helpful on each team with which I work to ensure a common understanding of what it means to be "complete"—yet another ambiguous term—across each of these dimensions.

November 6, 2015 - 11:38am
Leyton Collins

Kudos to you, Adrian! I wholeheartedly agree.

November 6, 2015 - 9:46am
