Our organization had fully committed to using Scrum. But after “Scrumming” for more than a year and a half, we were still having issues delivering features in a predictable manner.
We wanted to keep each increment directionally aligned to support the overarching “go to market” plan, which defined a minimal set of features that needed to be in place before a formal customer release would be considered. We groomed the product backlog. Sprint planning sessions were held where the teams selected stories they believed they could completely deliver by the end of the sprint. But as soon as we kicked off the sprint, something always happened. We missed our commitments. Things we thought we could do in a week took months. No one knew when we would be done.
Initial feedback from retrospectives suggested that the teams lacked an accurate understanding of their true capacity. Our project-tracking tool compounded the issue. The reports it generated made it hard to understand if we were on time or late. The teams adapted after each iteration as best as they could, and they adjusted their capacity for the upcoming sprint based on the data they had. In an attempt to improve predictability, the teams started to leave an additional buffer of hours in reserve to address any unplanned work that might arise.
Unfortunately, this did not improve predictability either, so each team started using sprint and release burndown charts to track their progress against the plan. The charts quickly flagged a gap between the committed work and the completed work; moreover, the teams realized that they could not account for where the allocated hours had gone.
Ultimately, it always came down to significant tradeoffs, heroics, and extra effort from everyone to get a formal release out the door. This lack of predictability was taking a serious toll on team dynamics.
It was clear the teams were working hard, but we were still not achieving the results we planned for. We needed to address the mismatch between what was being committed and what was accomplished.
The Transparency Experiment
Our organization was focused on building components for a large cloud-based, software-as-a-service (SaaS) infrastructure. Two teams happened to be collocated, and individuals on these teams had similar skill sets and work styles, and a history of working well together. These teams did their best to put their work out there, gather feedback, and adapt each sprint, but like the other teams, they just weren’t making any significant progress in being predictable.
It was time to do some root cause analysis. What was contributing to the unpredictability?
We decided to conduct an experiment. Pulling a page out of Dale Carnegie’s book How to Win Friends and Influence People, we threw down a challenge. We wanted to see, between the two local teams, which team (and individual) could be the most transparent in the work they performed. The experiment was presented to the teams as a fun and friendly competition with the goal of collecting more data, which could then be used to provide better estimates and, hopefully, improve our ability to deliver to commitments.
The plan was for team members to capture more data points documenting where their time was being spent so we could see how their actual time differed from their estimated time. The experiment would encourage individuals to take ownership and be more accountable for logging their time in the project management tool. They would then be able to use this increased knowledge to make realistic commitments.
The teams decided to ask themselves on a daily basis:
- Were they working the committed hours?
- Were there certain tasks that were taking extra, unplanned time?
- Were there tasks added that were not planned?
- Were stories too big, too complex, or simply not ready for implementation?
- Was more research needed before planning?
- Were skill sets or needed experience missing from the team?
- Were they holding themselves accountable for honoring their commitments?
We defined the rules: over the next fifteen workdays, whichever team logged more of their committed work hours (normalized to the number of team members) per day would win a point. At the end of the three weeks, the team with the most points would win a pool of reward points, redeemable for merchandise from the company store. There would also be an individual award for the person who logged the greatest share of their committed time.
To keep it simple, the expected committed hours were extracted from the team’s sprint planning sessions. A linear slope was used for identifying the expected hours per day. A scoreboard was posted in the main walkway for all to see, updated with the daily results.
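The scoring mechanics described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions, not the tool we actually used; the function names, team sizes, and committed-hour figures are hypothetical.

```python
# Hypothetical sketch of the experiment's daily scoring, assuming:
# - each team's committed hours come from its sprint planning session
# - expected hours per day follow a linear slope across the fifteen workdays
# - logged hours are normalized by team size before comparing teams

WORKDAYS = 15

def expected_hours_per_day(committed_hours: float, workdays: int = WORKDAYS) -> float:
    """Linear slope: the committed total spread evenly across the sprint."""
    return committed_hours / workdays

def daily_point(team_logs: dict[str, float], team_sizes: dict[str, int]) -> str:
    """Award the day's point to the team with the higher normalized logged hours."""
    return max(team_logs, key=lambda team: team_logs[team] / team_sizes[team])

# Example: Team A logs 42 h with 6 people (7 h/person);
# Team B logs 40 h with 5 people (8 h/person), so Team B wins the day.
sizes = {"Team A": 6, "Team B": 5}
print(expected_hours_per_day(300))                           # 20.0
print(daily_point({"Team A": 42.0, "Team B": 40.0}, sizes))  # Team B
```

The normalization step matters: without dividing by team size, the larger team would win nearly every day simply by having more hours available to log.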
During the experiment, team members were encouraged to add tasks if they spent time on something that was not originally anticipated during sprint planning. These tasks were tagged as “unplanned” so they could be quickly identified and considered for future estimation.
People became diligent about recording time spent on a defined task. They could now note how their actual hours deviated from their original estimate. The teams competed fiercely (but still amiably) against each other. Each morning, everyone eagerly reviewed the prior day’s results.
Senior management even took notice. They liked what they were seeing and sweetened the points pool to add even more excitement. The lead switched back and forth several times, and as the experiment played out, everyone was paying attention. It came down to the final day to determine the winners in both categories.
The key takeaway for everyone was realizing all the effort that was spent on unplanned tasks. For many different reasons, people were working on tasks that had not been expected. During the experiment, these tasks were identified, addressed, and mitigated as needed. These newly recognized tasks either had been overlooked in the sprint planning process or were misestimated in duration. The experiment improved task transparency and provided historic data for teams to better estimate future work.
As estimation became more accurate, the teams realized that some stories were too big to be addressed in one sprint. Where estimates deviated the most, the teams decided to spend more time on upfront investigation before committing to implementation and delivery. As teams spent more time grooming stories, stories were sized so they could be delivered within a single sprint.
The challenge between teams and individuals heightened people’s sense of awareness and ownership. The scoreboard was front and center in our work area. People felt compelled to make sure their work logs were accurate and timely, and it was clear they were stepping up their contributions so they didn’t let the team, or themselves, down. We also observed a rekindled sense of pride in delivering features to commitments. Teammates took an interest in one another’s progress and readily offered to adapt and help keep things moving. Everyone became a cheerleader, in contrast to the past, when developers focused only on their assigned tasks, sometimes losing sight of the larger story.
Team members became more vocal and more realistic about the work they could deliver, and they owned their commitments. We learned that it feels better to deliver to commitment instead of over-committing and then coming up short. All this awareness was highlighted through transparency.