product. Teams over-invest in less important parts of a system and under-invest in more important parts. To prevent this, we have to first clearly define what "appropriate quality" means and communicate that to everyone.
Although quality is often perceived as intangible, it isn't that hard to define. Gerald Weinberg defined quality as value delivered to some person [Weinberg91]. To specify quality, we have to identify the following two concepts:
- Who is that person? Or alternatively, who are the people affected by our work?
- What kind of value are they looking for from the system?
User Stories [Cohn04] apply a similar technique to ensure that each story delivers business value, by asking the writer to identify who the stakeholder for a story is and why they want it. To ensure successful delivery of milestones or entire projects, we need to define these aspects of our system not just at the low level of scope (individual user stories), but also holistically, for the entire project or a milestone of it. In fact, I find that high-level definition of quality much more important.
Effect Mapping facilitates this process because it requires us to clearly define the two aspects of quality (who the person is and what they expect) while drawing the map. These are effectively the second and third levels of the map: the stakeholders and the stakeholder needs.
Prioritising based on business value
The hierarchical nature of the map clearly shows who benefits from a feature, why, and how that contributes to the end goal. This clear visualisation allows us to decide which activities best contribute to the end goal and where the risks lie, which immensely helps with prioritisation. Once we have identified a clear goal, the stakeholders and the stakeholder needs, we can estimate how much supporting each of them is expected to contribute to the end goal. In the gaming system example, supporting invitations was clearly more important than supporting posting.
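To make this concrete, the map's hierarchy can be sketched as a simple data structure: a goal, stakeholders under it, and needs under each stakeholder, each need annotated with an estimated relative contribution to the goal. The structure, names and numbers below are purely illustrative assumptions, not taken from any real map:

```python
# A minimal sketch of an effect map: goal -> stakeholders -> needs,
# where each need carries an estimated contribution to the end goal.
# All names and weights here are hypothetical, for illustration only.

effect_map = {
    "goal": "1 million players in six months",
    "stakeholders": {
        "players": {
            "inviting friends": 0.5,   # estimated contribution to the goal
            "posting scores": 0.1,
        },
        "advertisers": {
            "targeted banners": 0.3,
        },
    },
}

def rank_needs(effect_map):
    """Flatten the stakeholder needs and sort them by estimated contribution,
    highest first, to produce a simple priority order."""
    flattened = [
        (contribution, stakeholder, need)
        for stakeholder, need_map in effect_map["stakeholders"].items()
        for need, contribution in need_map.items()
    ]
    return sorted(flattened, reverse=True)

for contribution, stakeholder, need in rank_needs(effect_map):
    print(f"{contribution:.1f}  {stakeholder}: {need}")
```

Ranking the needs this way mirrors the discussion the map prompts: invitations come out ahead of posting because of their larger estimated contribution, and the numbers themselves become something the team can challenge and refine.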
Effect Maps help us prioritise and invest appropriately in supporting activities depending on their expected value. In addition, this discussion provides a way to start thinking about how to measure whether a software feature has really delivered what we expect. Just as discussing how to measure deliverables against a business goal does, discussing how well a deliverable addresses a stakeholder need helps the team nail down what quality means and share that understanding.
Iterative product release planning
User stories are the de facto standard today for managing long-term release planning. This often includes an "iteration zero", a scoping exercise or a user story writing workshop at the start of a milestone. During the "iteration zero", the key project sponsors and the delivery team together come up with an initial list of user stories to be delivered. A major problem with the "iteration zero" approach is the long stack of stories that has to be managed as a result. Navigating through hundreds of stories isn't easy. When priorities change, it is hard to understand which of the hundreds of items on the backlog are affected. Jim Shore called this situation "user story hell" during his talk at Oredev 2010, citing a case of a client with 300 stories in an Excel spreadsheet. I've seen horror stories like that, perhaps far too often.
From my experience, project sponsors think about mid- and long-term prioritisation in terms of the order of stakeholder needs they want to satisfy, not necessarily the order of system features. User stories try to address that, but having too many stories upfront clutters the visibility, but having