Agile teams often employ user stories to organize project requirements. When using physical index cards to assemble requirements, teams use the backs of the cards to capture acceptance criteria—also called conditions of satisfaction, or just ACs. Acceptance criteria are the main points or general rules to consider when coding and testing the user story.
ACs are not acceptance tests. Professional testers can elaborate these criteria later to produce test scripts. Test scripts, whether automated or manual, are detailed and specific, and they certainly require more space than an index card!
As with the story on the front of the card, the finite amount of space on the back of the card limits writing. That is deliberate. Stories shouldn’t contain every last detail—far from it. In elaborating the stories and writing the tests, coders and testers should continue having conversations with each other and the business representatives.
Teams using electronic systems may also record acceptance criteria there, but without the limits imposed by physical cards, these teams have to resist the temptation to add more and more detail. Remember: A story is a placeholder for a conversation. Resist the urge to get detailed in ACs.
Acceptance Criteria and Testers
ACs expand on the initial story, so they are usually written by the same person who wrote the original story—probably the business representative or product owner (PO). However, when a PO is short on time, ACs are frequently dropped. That is not always a bad thing, but it may well be the sign of a problem. Testers might need to step in to add acceptance criteria themselves.
Usually, testers begin their work with existing ACs. They may give feedback to the PO on how to improve the criteria, but their main role is to take the ACs and create actual tests from them. Hopefully these tests are automated, but if not, they will be natural language descriptions of how to perform the tests.
The story and ACs form the requirement; the tests form the specification. Requirements describe what the business wants to achieve, while specifications describe the detailed parameters within which the solution needs to perform. Specifications always need to be testable. Techniques such as specification by example and acceptance test-driven development make specifications themselves executable as automated tests.
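To make the requirement/specification distinction concrete, here is a minimal sketch of specification by example in Python, using the postal address story discussed later. The Address class and the two-line label format are assumptions invented for this sketch, not details taken from any real story.

```python
from dataclasses import dataclass

# Requirement (what the business wants): packages must carry the
# customer's full postal address so they can be delivered correctly.

@dataclass
class Address:
    street: str
    city: str
    state: str
    zip_code: str

    def label(self) -> str:
        # Specification detail (assumed for this sketch): labels use
        # "street" on one line, "city, state ZIP" on the next.
        return f"{self.street}\n{self.city}, {self.state} {self.zip_code}"

# Specification by example: a concrete, testable case of the kind a
# PO, tester, and programmer might agree on in conversation.
addr = Address("123 Main St", "Springfield", "IL", "62704-1234")
assert addr.label() == "123 Main St\nSpringfield, IL 62704-1234"
```

The requirement stays in prose; the specification is the executable assertion, which is exactly the form that acceptance test-driven development exploits.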
If it helps product owners to talk to testers or programmers when writing stories and acceptance criteria, then they should. And if it helps testers to write tests by talking to the programmers and POs, they also should. Teams that encourage such collaboration sometimes call these discussions “Three Amigos” or the “Power of Three.” In these conversations, people representing requirements, testing, and coding come together to talk about the story. Not only will they discuss the story and ACs, but they also may change them, add to them, or split the story into multiple smaller stories.
The Level of Detail
Consider this example:
As a delivery company, I want to have the customer’s full postal address on packages so that I can deliver them correctly.
Such a story might have ACs like:
Customers must provide a valid postal address
Shipping labels must carry a correctly formatted address
Notice that a “valid postal address” is not defined. That kind of detail can be left until later, as ACs are not the place to elaborate. (An exception might be when some nonobvious detail is important, say, using all nine digits of the ZIP code.)
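When such a nonobvious detail does matter, a tester might pin it down as a small executable check rather than a paragraph of prose. A hedged sketch in Python; the exact format rule and the function name are assumptions for illustration:

```python
import re

def is_full_zip(zip_code: str) -> bool:
    """Return True for a nine-digit ZIP+4 such as '62704-1234'.

    The rule here is an illustrative assumption; the real definition
    of a "valid postal address" would come out of conversation with
    the product owner.
    """
    return re.fullmatch(r"\d{5}-\d{4}", zip_code) is not None

assert is_full_zip("62704-1234")       # all nine digits present
assert not is_full_zip("62704")        # five-digit ZIP alone fails
assert not is_full_zip("6270-41234")   # malformed grouping fails
```

A check like this captures the one important detail while leaving everything else to later conversation.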
The amount of detail needed depends on the knowledge and experience of the programmers and testers. A tester might just use her existing knowledge to write the test script, or a programmer and tester may need to research what constitutes a “valid address.” In this postal address scenario, a tester who is testing software for a country she has never visited might ask for more details than the product owner expects. If that’s the case, there quickly comes a point where a conversation between the product owner and the tester serves better than more written detail.
The Right Time to Define
I am frequently asked, “When should we write the acceptance criteria?” Sometimes programmers resist giving an effort estimate for a story unless they can see the ACs—sometimes detailed ACs, at that. However, there is little point in POs (and testers) spending time on ACs until stories are about to be scheduled. After all, if ACs take hours to write and the story is not scheduled, their time is wasted.
Also, if ACs are added but then the story doesn’t get scheduled for a year, by that time the story and the ACs may have changed. Because old criteria are in place, it can be easy to overlook these potentially important changes.
I would rather product owners did not write ACs until the last possible moment—even just before the planning meeting. At that point they should be fairly sure what they will request from the team. This would have the beneficial effect of forcing brevity on the PO.
Writing ACs inside the iteration neatly sidesteps the problem of postponed or canceled work. This means the team must be prepared to accept—and even estimate, if needed—stories without ACs. Because writing ACs might well be the first task in an iteration before any code or tests are written, any effort estimates given must be for the work required to “write ACs and deliver the code” rather than just “deliver the code.”
Another solution I have had some success with is writing the ACs within the planning meeting. At this point, teams know the stories to schedule. This will make the planning meeting longer, but on a small team there are unlikely to be many stories. A large team can split into smaller groups and work on stories separately.
(Test scripts based on ACs, however, are best created within the iteration, preferably before coding begins. Creating test scripts is part of the work of delivering a story.)
Acceptance Criteria in Action
Acceptance criteria can be helpful in expanding on and elaborating user stories. However, ACs should not be seen as a substitute for a conversation; they are not a route back to long documents and extremely detailed requirements.
Remember to use ACs sparingly to record key criteria at a high level. Defer details to conversations within the iteration and elicit specifications as needed.