Our organization had fully committed to using Scrum. But after “Scrumming” for more than a year and a half, we were still having issues delivering features in a predictable manner.
We wanted to keep each increment directionally aligned with the overarching “go to market” plan, which defined the minimal set of features that had to be in place before a formal customer release would be considered. We groomed the product backlog and held sprint planning sessions in which the teams selected stories they believed they could deliver completely by the end of the sprint. But as soon as we kicked off the sprint, something always happened. We missed our commitments. Work we thought would take a week took months. No one knew when we would be done.
Initial feedback from retrospectives suggested that the teams lacked an accurate understanding of their true capacity. Our project-tracking tool compounded the issue: the reports it generated made it hard to tell whether we were on time or late. The teams adapted after each iteration as best they could, adjusting their capacity for the upcoming sprint based on the data they had. In an attempt to improve predictability, the teams also began holding a buffer of hours in reserve to absorb any unplanned work that might arise.
Unfortunately, this did not improve predictability either, so each team started using sprint and release burndown charts to track progress against the plan. The charts quickly flagged a gap between the committed work and the completed work; moreover, the teams realized they could not account for where the allocated hours had gone.
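The burndown comparison the teams relied on can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not the teams' actual tooling: it spreads the committed hours evenly across the sprint to form an ideal line, then pairs each day's ideal remaining hours with the actual remaining hours to surface the gap.

```python
def ideal_burndown(committed_hours, sprint_days):
    """Ideal line: committed hours burned down evenly across the sprint."""
    step = committed_hours / sprint_days
    return [committed_hours - step * day for day in range(sprint_days + 1)]

def burndown_gap(committed_hours, actual_remaining):
    """Pair each day's ideal remaining hours with the actual figure.

    Returns (day, ideal, actual, gap) tuples; a positive gap means the
    team is behind the ideal line.
    """
    ideal = ideal_burndown(committed_hours, len(actual_remaining) - 1)
    return [(day, round(i, 1), a, round(a - i, 1))
            for day, (i, a) in enumerate(zip(ideal, actual_remaining))]

# Hypothetical 5-day sprint with 100 committed hours of work.
for day, ideal, actual, gap in burndown_gap(100, [100, 95, 88, 80, 70, 55]):
    print(f"day {day}: ideal {ideal:>5}, actual {actual:>3}, gap {gap:+}")
```

A growing positive gap at the end of the sprint is exactly the signal the teams saw: work remained that the ideal line said should already have been completed.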
Ultimately, getting a formal release out the door came down to significant tradeoffs, heroics, and extra effort from everyone involved. This lack of predictability was affecting team dynamics in a very negative way.
It was clear the teams were working hard, but we were still not achieving the results we planned for. We needed to address the mismatch between what was being committed and what was accomplished.
The Transparency Experiment
Our organization was focused on building components for a large, cloud-based software-as-a-service (SaaS) infrastructure. Two of the teams happened to be collocated, and the individuals on these teams had similar skill sets and work styles and a history of working well together. These teams did their best to put their work out there, gather feedback, and adapt each sprint, but like the other teams, they just weren’t making significant progress toward predictability.
It was time to do some root cause analysis. What was contributing to the unpredictability?
We decided to conduct an experiment. Taking a page out of Dale Carnegie’s book How to Win Friends and Influence People, we threw down a challenge: which of the two local teams (and which individuals) could be the most transparent about the work they performed? The experiment was presented to the teams as a fun and friendly competition, with the goal of collecting more data that could be used to provide better estimates and, hopefully, improve our ability to deliver on commitments.
The plan was for team members to capture more data points documenting where their time was being spent so we could see how their actual time differed from their estimated time. The experiment would encourage individuals to take ownership and be more accountable for logging their time in the project management tool. They would then be able to use this increased knowledge to make realistic commitments.