Many agile practitioners recommend re-estimating stories at the beginning of each iteration to increase accuracy. Adrian Wible, however, argues that re-estimating stories within an iteration planning meeting actually distorts results and decreases predictability. See if you need to rethink your planning procedures.
Many agile practitioners recommend re-estimating stories at the beginning of each iteration to increase accuracy. I disagree with this practice.
It’s worse than a waste of time. This may seem counterintuitive, but I argue that re-estimating stories within an iteration planning meeting actually distorts results and decreases predictability. Stay with me.
First, the team almost always has a greater level of detail and clarity when the story is being slotted into the current iteration, particularly when breaking stories into tasks. This tends, in my experience, to cause estimates to grow.
Next, the team wants to meet commitments, knows velocity is being monitored, and may—perhaps subconsciously—inflate estimates at the point of commitment.
Why is this a big deal?
Let's try an example.
Say team Swiss Watch always delivers 40 points. They’re amazing. Velocity is 40, every iteration. You can count on them.
In each iteration planning meeting, the perfectly prioritized product backlog is trotted out with perfectly “ready” stories at the top, and the team begins peeling off those stories until the iteration is full.
The team re-estimates the stories as they peel them off, of course. When they get to 40 points, they declare the iteration planning done.
You revisit the product backlog and see that the aggregate estimates for those stories were actually 30. You might applaud the team for gaining more accuracy—after all, the original estimates were off by more than 30 percent. In fact, if you go back over the past ten iterations, you see this is a consistent pattern. Thank goodness for re-estimation. Or …
The business is trying to figure out when the rest of the minimum viable product will be done. The business looks at the product backlog and sees 120 points’ worth of stories. How many iterations are left? Simple math, you say. Team Swiss Watch always delivers 40 points, ergo only three more iterations to go. Very nice. Schedule the press conference.
All of a sudden, this historically “predictable” delivery engine stalls. The team takes a whole extra iteration to complete the minimum viable product. Everyone scratches their heads. The press conference is rescheduled.
Here’s what happened. Two velocities are at play. The velocity of 40 is against the inflated, iteration-time points. The product backlog, however—the 120 points—is in noninflated product-backlog points. Our simple math was against different scales. It’s as if velocity was measured in inches, but the product backlog was sized in centimeters.
The simple math we should have done was to divide the 120 by 30, not 40. With consistent units, it’s obvious that there were four iterations left. Same simple math, but different results when you use consistent scales.
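The unit mismatch is easy to see in code. Here is a minimal sketch using the article's hypothetical Team Swiss Watch numbers; the variable names and the inflation factor are mine, derived from the 30-versus-40 figures above.

```python
import math

backlog_points = 120        # remaining backlog, sized in original backlog points
velocity_inflated = 40      # velocity, measured in re-estimated iteration points
inflation = 40 / 30         # re-estimates consistently run 4/3 of the originals

# Naive forecast: mixes the two scales (backlog points / iteration points).
naive_iterations = math.ceil(backlog_points / velocity_inflated)

# Consistent forecast: convert velocity back into backlog points first.
velocity_in_backlog_points = velocity_inflated / inflation  # 30
consistent_iterations = math.ceil(backlog_points / velocity_in_backlog_points)

print(naive_iterations)       # 3 -- the forecast that missed
print(consistent_iterations)  # 4 -- what actually happened
```

The arithmetic is trivial; the point is that the naive division silently compares inches to centimeters, and nothing in the numbers warns you.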
It’s not just that re-estimation provided no value. It actually eroded value.
If you regularly re-estimate at iteration planning meetings, make a note of the original versus updated estimates. See if they grow. Consider what impact this is having on the accuracy of your release planning.
I can hear you now: "My team's estimates don't inflate. Some go up; some go down." If that's the case, then your aggregate will probably even out, more or less. In the example above, perhaps you go into the meeting, change estimates for every story, and end up with 40 re-estimated points’ worth of work that was originally booked in the backlog at 39. Your velocity is not materially impacted. You are still on track, with roughly three remaining iterations. So, the question is this: What value did that re-estimation provide? None.
Is there a time when re-estimation is helpful?
Yes. I’ve encountered situations where classes of stories with some cross-cutting concern end up being inaccurately estimated. For example, let’s say the team realizes that every time a user story requires a database change, the story ends up taking more effort than expected. In another case, perhaps original estimates were high due to some technical risk. When the risk gets eliminated, those original estimates turn out to be inflated.
When a cross-cutting concern imparts a material difference in estimated effort, it is useful to re-estimate the affected user stories with the new information. Realize that it is still important to embark on this re-estimation against the original estimates, to avoid the distortion illustrated earlier.
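One way to picture this targeted approach: adjust only the stories touched by the cross-cutting concern, and apply the adjustment to the original backlog estimates so the scale stays consistent. This is a hypothetical sketch—the stories, the "db-change" tag, and the 1.5× factor are invented for illustration, not data from the article.

```python
# Hypothetical backlog: each story keeps its ORIGINAL estimate,
# plus tags for any cross-cutting concerns discovered later.
backlog = [
    {"name": "story-a", "points": 5, "tags": {"db-change"}},
    {"name": "story-b", "points": 3, "tags": set()},
    {"name": "story-c", "points": 8, "tags": {"db-change"}},
]

# Observed in practice (hypothetically): db-change stories take ~1.5x the effort.
DB_CHANGE_FACTOR = 1.5

def reestimate(stories):
    """Re-estimate only the affected stories, against their original points."""
    return [
        {**s, "points": s["points"] * DB_CHANGE_FACTOR}
        if "db-change" in s["tags"] else s
        for s in stories
    ]

adjusted = reestimate(backlog)
print(sum(s["points"] for s in adjusted))  # 22.5, up from the original 16
```

Because untouched stories keep their original estimates, the backlog and the historical velocity remain in the same units, and the release forecast stays honest.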
Wholesale re-estimation at the beginning of iterations is folly. At best, it is a waste of time; worse, it usually distorts. The only time I recommend re-estimating is when "aspects" of user stories are discovered, in practice, to cause user stories to require more or less effort than expected. We should be using aspect-oriented re-estimation, if at all.