If you asked anyone on my team what agile practice is most responsible for our success over the past eight years, I bet they'd say retrospectives. At the start of every two-week sprint, we spend time talking about the previous sprint, identifying areas that need improvement, and thinking of ways to overcome obstacles. But I wonder if it's not so much the retrospectives themselves as the "small experiments" (to borrow Linda Rising's term) we perform to address our problem areas.
Here's a recent example. Our product owner is awesome, but like many POs, he has many responsibilities and not enough time. Years ago, he came up with the idea of story checklists. Before each iteration, he prepared a checklist for each user story, following a template that included information such as mock-ups for new UI pages or reports, whether a new story affected existing reports or documentation, whether third parties needed to be involved, and high-level test cases. This helped us get off to a running start with each story.
As our PO was burdened with more responsibilities, he started to run late on preparing the story checklists. The downward slide started slowly. At our sprint planning, he'd say, "Oh, I am still working on the checklist for this one story, but I'll have it ready soon." Or, "I'm waiting to hear from the head of sales to get the final requirements for this, I'll let you know as soon as I know." We're agile, we're flexible, we have a lot of domain knowledge, so we felt we could cope.
But the one missing story checklist soon turned into two, then three; after a while, we weren't getting any story checklists at all. We discussed each story with our product owner at our sprint planning meetings and wrote requirements and high-level tests on the whiteboard, but that whiteboard also had outstanding questions for each story. We'd start working on a story with the best information we had, but then there would be changes. We spent a lot of time going back and forth to find the PO, ask questions, and update the requirements as they changed or were finalized. We still got our stories done, but it was costing the company more and slowing us down.
The PO had no incentive to reverse this slide. It wasn't even his fault; he was usually waiting on other people. And we were still finishing the stories. But we could have done more work if we hadn't lost so much time to the back-and-forth over requirements.
Our frustration mounted. Finally, at a retrospective, we decided we had to do something about this problem. The company was spending extra money to finish each story, simply because the business people were not getting their ducks in a row before each iteration began. We decided to try an experiment.
We had recently begun using a product called MercuryApp to record our feelings about the progress of the sprint every day. (That is another experiment, a way to keep better track of how things go so our retrospectives can be more productive, but it's the subject for a future blog post.) This product lets you rate your feelings on a five-point scale from a very sad face to a very happy face. That gave us an idea. At the end of our sprint planning meeting, we put a "rating face" next to each story on the whiteboard. If we didn't have any requirements, we put a very sad face. If we had all the requirements we needed to complete the story, we put a very happy face. Most were somewhere in between: a somewhat sad or happy face, or a "meh" face.
The second (and possibly more powerful) part of our experiment involved pushing back on the business. We told the product owner that any stories that didn't have requirements by the second day of the sprint would be taken off our task board and not done until the following sprint.
We neglected to specify the time on the second day of the sprint by which we needed all requirements, and our PO delivered some at 11 p.m. But he got them to us for all the stories! This was a great result.
At our next retrospective, we went through each story on the whiteboard, and talked about how we felt about each one now. Interestingly, some stories that had a sad face ended up going well, and some with a happy face turned out to be trickier than we had thought. This gave us a better understanding of what we really need to know about each story before we start working on it.
We couldn't do anything directly about our PO being overworked, or about the business people failing to provide information. But we could try this experiment to make our lack of requirements visible, and push back on the business by saying, "We won't waste our time and yours on stories that don't have requirements at the start of the sprint." If this experiment hadn't helped, we'd have tried another one.
Have your retrospectives—but do take Linda Rising's advice, and try small experiments. I am betting you will find unexpected ways to improve how you work.