Writing independent user stories seems simple, but it is actually difficult to do well. Parts of one story often depend on functionality delivered by other stories, so it's not easy to keep them separated. Kris Hatcher relates how his team wrote and scored stories to keep them independent while still meeting acceptance criteria.
When it comes to requirements, some teams have difficulty writing user stories that fit the specific parameters their process demands.
The last team I was on, we had to fit our stories into a two-week sprint and make sure they each delivered value to our product owner, among a variety of other specifics. We kept struggling until our ScrumMaster introduced a mnemonic to help us remember a framework for writing stories. All we had to do was "INVEST" and make our stories:

- Independent
- Negotiable
- Valuable
- Estimable
- Small
- Testable
On the surface, this seemed easy; as we dug into the acronym and started applying each bit, however, we discovered that it was much more difficult than it sounds. We found the “Independent” portion especially challenging, so we decided to experiment with how we applied that to our story-writing exercises.
Writing independent stories seems like a simple task, but it is difficult to do well. The application we were working on had several reports, and we often implemented the same functionality on all of them, such as adding the ability to export the reports to Excel files.
Before we learned the INVEST trick, we would have written a story to implement the export to Excel functionality on one of the reports, then written separate stories for each of the other reports, each of the successive stories having a dependency on the first one being completed. If we were writing stories to be independent, that could not happen. So, our first attempt was to write and score each story so that it contained everything necessary to be completed on its own.
This worked well for the first story in the group, which was prioritized by our product owner. However, the remaining stories then took much less effort to complete than we had initially estimated, because the first story laid the groundwork for the rest of them.
As we talked about this issue and looked around for ideas and inspiration, our next attempt was to write two stories. The first one would implement the feature in question on one report, and the second one would implement the same feature on all the remaining reports.
The story card was left with a blank area for which report would be the first one, which our product owner would fill in when she selected that story. We would score those two stories, typically with very similar scores, keeping in mind that the first instance would be much harder because it would influence the other implementations coming after it.
While in theory we thought this would work well, in practice we found that our product owner rarely wanted to implement the functionality on all the remaining reports at the same time. We decided to see if there were any other ways to keep our stories independent and score them accurately.
That discussion resulted in the idea of "double scoring" our stories. We experimented with giving each story two scores: one for the case where it is played first in the series, and another for when other stories in the series are played before it. In this scenario, we would write a user story for each instance of the new feature—say, one for each report—and score each story both ways.
We typically spend a little more time discussing these stories during grooming so that we have a better idea of what it will take to complete them. Then we score them once as if it were the first time we were doing that story, then again, this time imagining we have already completed one of the other stories in this particular sequence. On the score section of our story card template, we write the score as a fraction, showing the first story score on top and the subsequent story score on the bottom.
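The double-scoring scheme can be sketched as a small data model. This is purely illustrative (the team used a physical card template, not software); the names and the example 8/3 scores are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Story:
    """A user story carrying two estimates, written as a 'fraction' on the card:
    first_score over subsequent_score."""
    title: str
    series: str                # feature series the story belongs to, e.g. "export-to-excel"
    first_score: int           # estimate if this story is played first in its series
    subsequent_score: int      # estimate if another story in the series is already done

def effective_score(story: Story, completed_series: set[str]) -> int:
    """Pick the score that applies, given which series already have a completed story."""
    if story.series in completed_series:
        return story.subsequent_score
    return story.first_score

# Hypothetical example: three export-to-Excel stories, each scored 8/3.
stories = [Story(f"Export report {r} to Excel", "export-to-excel", 8, 3)
           for r in ("A", "B", "C")]

done: set[str] = set()
print(effective_score(stories[0], done))   # 8: the first story pays for the groundwork
done.add(stories[0].series)
print(effective_score(stories[1], done))   # 3: the groundwork is already laid
```

Whichever story the product owner picks first, its "top" score applies; every later story in the series uses its "bottom" score, with no re-estimation needed.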
This way our product owner has the ability to select whichever story she wants based on where she feels she will see the most business value, and we do not have to re-evaluate our scores after the first story is played. We also do not have to adjust our acceptance criteria, because the functionality they lay out will need to be in place regardless of when the code is written.
Initially, we were concerned that the "subsequent story" score would be incorrect due to the lack of knowledge about the final solution, but we found that these estimates were actually pretty close to the work that it took to complete the story. So far, the experiment seems to be working for the team. There are still some bugs that need to be worked out, but we have decided to keep this practice going for the foreseeable future.
I hope you will be able to use these ideas to help your team develop better stories that can be played more independently!