If you asked anyone on my team what agile practice is most responsible for our success over the past eight years, I bet they'd answer "retrospectives". At the start of every two-week sprint, we spend time talking about the previous sprint, identifying areas that need improvement, and thinking of ways to overcome obstacles. But I wonder if it's not so much the retrospectives themselves as the small experiments (to borrow Linda Rising's term) we try in order to address our problem areas.
Here's a recent example. Our Product Owner is awesome, but like many POs, he has many responsibilities and not enough time. Years ago, he came up with the idea of "story checklists". Before each iteration, he prepared a checklist for each user story, following a template that included information such as mock-ups for new UI pages or reports, whether a new story affected existing reports or documentation, whether third parties needed to be involved, and high-level test cases. This helped us get off to a running start with each story.
As our PO was burdened with more responsibilities, he started to run late on preparing the story checklists. The downward slide started slowly. At our sprint planning, he'd say, "Oh, I am still working on the checklist for this one story, but I'll have it ready soon." Or, "I'm waiting to hear from the head of sales to get the final requirements for this, I'll let you know as soon as I know." We're agile, we're flexible, we have a lot of domain knowledge, so we felt we could cope.
But the one missing story checklist soon turned into two, then three—after a while, we weren't getting any story checklists, ever. We discussed each story with our product owner at our sprint planning meetings and wrote requirements and high-level tests on the whiteboard, but that whiteboard also had outstanding questions for each story. We'd start working on the story with the best information we had, but then there would be changes. We spent a lot of time going back and forth to look for the PO, ask questions, and update the requirements as they changed or were finalized. We still got our stories done, but it was costing the company more and slowing us down.
The PO had no motivation to reverse this change. It wasn't even his fault; he was usually waiting on other people. We were still finishing the stories. But we could have done more work if we'd saved the time spent on all that back-and-forth over requirements.
Our frustration mounted. Finally, at a retrospective, we decided we had to do something about this problem. The company was spending extra money to finish each story, simply because the business people were not getting their ducks in a row before each iteration began. We decided to try an experiment.
We had recently begun to use a product called MercuryApp to record our feelings about the progress of the sprint every day. (That is another experiment, a way to keep better track of how things go so our retrospectives can be more productive, but that's the subject for a future blog post.) This product lets you rate your feelings on a five-point scale from a very sad face to a very happy face. This gave us an idea. At the end of our sprint planning meeting, we put a "rating face" next to each story on the whiteboard. If we didn't have any requirements, we put a very sad face. If we had all the requirements we needed to complete the story, we put a very happy face. Most were somewhere in between—a slightly sad or slightly happy face, or a "meh" face.