Estimation Roller Coaster


At another point in the past year, we were surprised in a completely different way. Our product owner (PO) noted that, for several sprints, our velocity had been much higher than normal. The PO wanted to know if the business could count on velocity remaining at that super-high level or if something else was at play. Had we achieved some kind of bizarre breakthrough after our years of agile development? Were we purposely overestimating stories to look good? Or, was it just a series of weird coincidences?

We concluded it was just a series of coincidences. Production support requests were at an all-time low, so more time could be devoted to new development, and some stories had simply turned out to be much easier than anticipated. We resolved to go back to basics: using a small, “known” story as a basis for relative sizing of the others, and breaking any story that would take longer than a few days into smaller stories. Whenever a story turned out much larger or much smaller than anticipated, we held a quick retrospective to understand what had happened.
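The kind of velocity spike described above can also be spotted mechanically. Here is a minimal sketch of one such heuristic — the threshold, function name, and sprint data are illustrative assumptions, not the team's actual practice or numbers:

```python
from statistics import mean, stdev

def flag_unusual_sprints(velocities, threshold=1.5):
    """Return indices of sprints whose velocity deviates from the
    historical mean by more than `threshold` standard deviations.
    A simple illustrative heuristic, not a substitute for a retrospective."""
    avg = mean(velocities)
    sd = stdev(velocities)
    return [i for i, v in enumerate(velocities)
            if sd and abs(v - avg) > threshold * sd]

# Hypothetical sprint velocities in story points; sprint index 4 stands out.
history = [21, 23, 20, 22, 38, 21]
print(flag_unusual_sprints(history))  # → [4]
```

A flagged sprint is only a prompt for conversation — as in the story above, the team still has to work out whether the cause was easier-than-expected stories, fewer support interruptions, or estimation drift.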

A More Agile Approach
Not long after this period of high velocity, our PO presented us with a highly difficult new theme that would eventually lead to rewriting a core part of our application. The existing code for this functionality was very old, poorly understood, and supported by very few tests. In fact, we had never successfully run the whole process in our test environment. We couldn’t even write the stories, much less estimate them. 

We decided to take an extreme agile approach. There wasn’t a hard deadline, as there is with many of our themes, but it was desirable to finish by the end of the year. Our business experts trusted us to figure out the right design and take the time to implement it in a sustainable and maintainable way, while accommodating business priorities. After several sessions where we talked through the desired system behavior and business problems to solve with the PO and other stakeholders, we did a spike to try out a proposed design. During the spike, which took two iterations to finish, the team did performance testing to make sure the design scaled. We felt good about the approach, but there were still a lot of unknowns. 

At this point, we worked with the PO to write and size user stories, and we started on them. We did extensive exploratory testing on the code as it was written, learning more about special cases and issues with the technical implementation; this required further changes to the code and to the database design. We also found performance issues in the process and in reports, which required more tuning in both the code and the database. Complex new test fixtures were needed to automate adequate regression tests. Our original story estimates were far too low, and it took three iterations to complete the stories. We decided that the next time we have a theme of this complexity, with so many unknowns, we will revisit the stories more often and keep breaking them into smaller increments as we learn more, rather than carrying the same story cards from iteration to iteration.

Smoothing the Ride
Our business managers want story and theme (or epic) estimates to help them plan just enough ahead. Though we no longer spend time estimating stories that aren’t planned for the immediate future, I don’t see us getting away from estimating entirely. We continually get better at sizing user stories, but they will always just be estimates! 

We’ve been successful by applying agile values and principles. If a project has a hard deadline, we know we have to start as early as possible and work on the riskiest stories first. We accept that unexpected stuff will happen. We learn from it and keep improving. We spend time to manage our technical debt so it won’t drag down our ability to deliver in a timely manner. We’re careful to avoid overcommitting, so that we can work at a sustainable pace and continue to meet business expectations. 

We don’t ever get bored in supporting our business with valuable software, but we don’t need any adrenaline rushes either. The estimation roller coaster can be scary, but, over time, we can apply incremental and iterative agile development to smooth out the bumps and curves.

About the author

Lisa Crispin

Lisa Crispin is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009), co-author with Tip House of Extreme Testing (Addison-Wesley, 2002), and a contributor to Beautiful Testing (O’Reilly, 2009) and Experiences of Test Automation by Dorothy Graham and Mark Fewster (Addison-Wesley, 2011). She has worked as a tester on agile teams since 2000 and enjoys sharing her experiences via writing, presenting, teaching, and participating in agile testing communities around the world. Lisa was named one of the 13 Women of Influence in testing by Software Test & Performance magazine in 2009.

AgileConnection is one of the growing communities of the TechWell network.
