with the end-to-end happy path. We prioritized stories based on risk, dependencies, and importance. Fewer stories “blow up” and take way more time than we anticipated, so we’re able to maintain a steady pace.
But software development is never certain. It seems that just as we are lulled into complacency, thinking we rock at estimating, our roller-coaster track drops out from under us. Last summer, a five-point story that we thought we could finish in a week stretched into three two-week iterations. We hadn’t anticipated that its requirements would necessitate a change to a basic model of our application, requiring not only code changes in other areas but also major database changes. The five-point size should have been a warning; when we get above three points on the Fibonacci-number scale, we enter risky territory.
At another point in the past year, we were surprised in a completely different way. Our product owner (PO) noted that, for several sprints, our velocity had been much higher than normal. The PO wanted to know if the business could count on velocity remaining at that super-high level or if something else was at play. Had we achieved some kind of bizarre breakthrough after our years of agile development? Were we purposely overestimating stories to look good? Or, was it just a series of weird coincidences?
We decided it was just a series of unusual events. Production support requests were at an all-time low, so more time could be devoted to new development. Some stories had simply turned out to be much easier than anticipated. We resolved to go back to basics, using a small, “known” story as a basis for relative sizing of others and breaking stories that would take longer than a few days to finish into smaller stories. If a story turned out much larger or much smaller than anticipated, we had a quick retrospective about what happened.
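The kind of check our PO’s question implies — spotting sprints whose velocity departs sharply from the recent norm, so the team knows to hold one of those quick retrospectives — can be sketched in a few lines. This is a hypothetical illustration, not anything our team actually ran; the trailing-window size and the 30 percent threshold are assumptions chosen for the example.

```python
def flag_unusual_sprints(velocities, window=6, threshold=0.3):
    """Return the indices of sprints whose velocity differs from the
    trailing average of up to `window` prior sprints by more than
    `threshold` (expressed as a fraction of that average)."""
    flagged = []
    for i, velocity in enumerate(velocities):
        history = velocities[max(0, i - window):i]
        if not history:
            continue  # no baseline yet for the first sprint
        average = sum(history) / len(history)
        if abs(velocity - average) / average > threshold:
            flagged.append(i)
    return flagged

# A run of steady sprints followed by two unusually fast ones:
print(flag_unusual_sprints([20, 21, 19, 22, 20, 31, 30]))  # → [5, 6]
```

The point of such a check is not precision but prompting a conversation: an outlier sprint, high or low, is a cue to ask what changed before the business starts counting on the new number.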
A More Agile Approach
Not long after this period of high velocity, our PO presented us with a highly difficult new theme that would eventually lead to rewriting a core part of our application. The existing code for this functionality was very old, poorly understood, and supported by very few tests. In fact, we had never successfully run the whole process in our test environment. We couldn’t even write the stories, much less estimate them.
We decided to take an extreme agile approach. There wasn’t a hard deadline, as there is with many of our themes, but it was desirable to finish by the end of the year. Our business experts trusted us to figure out the right design and take the time to implement it in a sustainable and maintainable way, while accommodating business priorities. After several sessions where we talked through the desired system behavior and business problems to solve with the PO and other stakeholders, we did a spike to try out a proposed design. During the spike, which took two iterations to finish, the team did performance testing to make sure the design scaled. We felt good about the approach, but there were still a lot of unknowns.
At this point, we worked with the PO to write and size user stories, and we started on them. We did extensive exploratory testing on the code as it was written, learning more about special cases and about issues with the technical implementation, which required further changes to the code and to the database design. We found performance issues in the process and in reports, which required more tuning of the code and the database. Complex new test fixtures were needed in