Agile Adoption Patterns

Summary:
The Done State practice is a definition that a team agrees upon to unambiguously describe what must take place for a requirement to be considered complete. The done state is the goal of every requirement in an iteration; it is as close to deployable software as the team can come.

Note: This material will appear in the forthcoming book, Agile Adoption Patterns by Amr Elssamadisy (ISBN 0321514521, Copyright: Pearson Education). The material is being provided by Pearson Education at this early stage to create awareness for this upcoming book (due to be published in July 2008). It has not been fully copyedited or proofread yet; we trust that you will judge the content on technical merit, not on grammatical and punctuation errors that will be fixed at a later stage.

The Done State practice is a definition that a team agrees upon to unambiguously describe what must take place for a requirement to be considered complete. The done state is the goal of every requirement in an iteration; it is as close to deployable software as the team can come.

Business Value

Defining and adhering to a done state directly affects time to market and visibility. The closer you come to deployable software, the more confidence you have in your progress and the less you have left to do to get ready to release. Cost is reduced because you pay for defect fixes early. Conversely, the further your team's definition of done state is from deployable software, the riskier your estimates are, because you are less confident of your progress and you pay more in time and effort to correct defects.

Sketch

Initially, the team agreed on a done state that included all automated developer tests passing and acceptance tests manually run by Aparna and Cathy and verified by the testing team. The first iteration had many uncompleted stories because development was completed only a short time before the end of the time box, and Aparna and Cathy found several defects. The team wanted to count the stories 80 percent done. Caleb strongly discouraged this and was able to convince the team not to do it, even though they had only 20 percent completion. The completion percentage was discouraging, and Caleb played cheerleader to keep spirits high. Over the next two iterations, developers completed the stories in sequence instead of trying to do all of them at once. This resulted in stories being completed earlier, which left enough time for the feedback cycle with testing. The team averaged about 85 percent completion over the next few iterations.

Then, when the team picked up functional testing and started writing executable acceptance tests at the beginning of each iteration (test-driven requirements), the completion rate shot up very close to 100 percent because developers were able to fully test the requirements at their desks. The done state was changed from just passing the automated developer tests to passing both the automated developer tests and the functional tests.
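
What such an executable acceptance test looks like will vary with the team's toolset; the sketch below is in JUnit-style Java and uses a hypothetical discount rule and a hypothetical DiscountCalculator class (neither comes from the book) purely to illustrate a requirement that developers can verify at their desks before handing it to the testers.

    // A minimal sketch of an executable acceptance test for an assumed
    // requirement: "orders over $100 get 10 percent off." Both the rule and
    // the DiscountCalculator class are illustrative, not from the book.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountAcceptanceTest {

        // Hypothetical class under test, inlined so the sketch stands alone.
        static class DiscountCalculator {
            double priceAfterDiscount(double orderTotal) {
                return orderTotal > 100.0 ? orderTotal * 0.90 : orderTotal;
            }
        }

        @Test
        public void ordersOverOneHundredDollarsGetTenPercentOff() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(108.0, calculator.priceAfterDiscount(120.0), 0.001);
        }

        @Test
        public void smallerOrdersPayFullPrice() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(80.0, calculator.priceAfterDiscount(80.0), 0.001);
        }
    }

Because such a test is automated, it can become part of the done-state checklist and run on every build instead of waiting for a manual verification pass at the end of the iteration.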

Context

You are on a development team performing iterations; this implies that you need specific, measurable goals for the requirements to be met by the end of each iteration.

Forces

  • Reporting on partial work done is error prone; at worst, we are 90 percent done 90 percent of the time.
  • The closer a requirement is delivered to deployable, the less uncertainty your team has about the true state of the system.
  • Only functionality that is delivered to the customer has real value.
  • The closer a requirement is delivered to a deployable state, the more defects you have found and eliminated.
  • Depending on your environment, it may be difficult to get close to deploying.
  • Integration in traditional software teams is error prone and difficult.

Your team should try to eliminate as much partially done work as possible every iteration. A requirement should be built all the way through, including integration and acceptance testing, and taken as close as possible to deployment. This will weed out most of the errors and increase your confidence in the team's true progress. Your team should be consistent, so agree on what it means to really be done; this is the done state. Your goal for each iteration will be to take each requirement to completion as defined by the done state. Finally, a done state is binary. Either it is met or it is not; there is no partial credit. If a requirement doesn't meet its done state at the end of an iteration, it goes back onto the backlog.
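
To make the binary nature of the done state concrete, here is a minimal sketch in Java. The three criteria shown (developer tests, acceptance tests, integration into the main build) are assumptions for illustration; each team agrees on its own list.

    // Sketch of a done state as an all-or-nothing check. The criteria are
    // illustrative assumptions; a real team defines and agrees on its own.
    import java.util.List;

    public class DoneState {

        // A hypothetical view of one requirement's verification results.
        record RequirementStatus(boolean developerTestsPass,
                                 boolean acceptanceTestsPass,
                                 boolean integratedIntoMainBuild) { }

        // No partial credit: every criterion must hold, or the requirement
        // goes back onto the backlog.
        static boolean isDone(RequirementStatus status) {
            return status.developerTestsPass()
                && status.acceptanceTestsPass()
                && status.integratedIntoMainBuild();
        }

        public static void main(String[] args) {
            List<RequirementStatus> iteration = List.of(
                new RequirementStatus(true, true, true),   // done
                new RequirementStatus(true, false, true)); // back to the backlog

            iteration.forEach(status ->
                System.out.println(isDone(status) ? "done" : "not done"));
        }
    }

The point of modeling the check this way is that there is no score to argue about: it either passes or the requirement returns to the backlog.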

Adoption

Be aggressive in setting your done state as close as possible to deployment, but also be realistic. Your done state has to be something you can meet in the coming iteration.

  • Agree on your done state before the start of your first iteration. Work to get commitment from team members; the whole team will have to work to get a requirement all the way through to your definition of done. Remember, a done state is a goal; therefore, it needs to be S.M.A.R.T. (specific, measurable, achievable, relevant, and timely).
  • The done state should push your team members without breaking them.
  • Expect getting to the done state to be painful initially. If your team consistently fails to meet the goals of the iteration, use your retrospective to address the issue. Try fixing other things before scaling back the done state.
  • Revisit your done state in your retrospectives once your team is comfortable meeting it. Try to push your done state closer to deployment.

The done state is an important part of having successful iterations. If the done state is set too low, your team won't see the benefits; if it's set too high, your team will feel the pain and get discouraged.

  • A common occurrence is that teams fail to get through acceptance testing because developers complete their work near the end of the iteration and don't have enough time to do full acceptance testing and fix the errors that arise. It will be tempting to scale back the done state, but consider working on requirements in series instead of in parallel. That is, don't start working on all the requirements at once; take a few at a time, work to complete them (using Pair Programming to help), and then go on to the next requirements once those you are working on have reached their done state.
  • If your team is not a cross-functional team, it will be difficult to get close to deployment.
    • If at all possible, go to cross-functional teams.
    • If it is not possible, your done state can go only as far as building, testing, and integrating the particular piece your team is working on. At the same time, plan for teams to be done with their interlocking parts as soon as possible for integration feedback.
  • If your done state is too lax (that is, it does not include integration and acceptance testing), your iterations will be dysfunctional: your team will miss many defects that will have to be fixed later at a higher cost, and important feedback will be missed.
  • If you have a customer who places a high value on look and feel and more subjective qualities like usability, it is important to leave time in the iteration for the back-and-forth with the customer needed to fine-tune the work and reach the done state.
  • Some teams decide to have two done states: one for developers and one for the QA team. A task is then coded in one iteration and tested in the next. This usually creates a planning problem because the true done state is the one after testing, and it is hard to plan an iteration when an unknown number of requirements will fail the QA done state and fall back to development.

Variations

The done state originally evolved out of the general idea of “working software” from the Agile Manifesto. But what is working software? In XP, it was originally software that passed unit and acceptance tests. As the community gained more experience with continuous integration on larger projects, the constraints were different, and thus evolved the idea of “as close to deployable as possible.”

References

Beck, K., and Andres, C., Extreme Programming Explained: Embrace Change, 2nd ed., Boston: Addison-Wesley, 2005.

Elshamy, A., and Elssamadisy, A., Applying Agile to Large Projects: New Agile Software Development Practices for Large Projects. In Agile Processes in Software Engineering and Extreme Programming, Proceedings of the 8th International Conference, XP 2007, Como, Italy, June 18–22, 2007, Springer, 46–53.

Larman, C., Agile and Iterative Development: A Manager's Guide, Boston: Addison-Wesley, 2004.

Schwaber, K., and Beedle, M., Agile Software Development with Scrum, Upper Saddle River, New Jersey: Prentice Hall, 2001.


About the Author

Amr Elssamadisy is a software development practitioner at Gemba Systems, helping both small and large development teams learn new technologies, adopt and adapt appropriate Agile development practices, and focus their efforts to maximize the value they bring to their organizations. Gemba focuses on issues such as personal agility, team building, communication, feedback, and all of the other soft skills that distinguish excellent teams. Amr's technical background and experience (going back to 1994) in C/C++, Java/J2EE, and .NET allow him to appreciate the problems of and support development teams 'in the trenches.' Amr is also the author of Patterns of Agile Practice Adoption: The Technical Cluster, an editor for the AgileQ at InfoQ, a contributor to AgileConnection, and a frequent presenter at software development conferences.

 
