Agile teams often employ user stories to organize project requirements. When using physical index cards to assemble requirements, teams use the backs of the cards to capture acceptance criteria—also called conditions of satisfaction, or just ACs. Acceptance criteria are the main points or general rules to consider when coding and testing the user story.
ACs are not acceptance tests. Professional testers can elaborate these criteria later to produce test scripts. Test scripts, whether automated or manual, are detailed and specific, and they certainly require more space than an index card!
As with the story on the front of the card, the finite amount of space on the back of the card limits writing. That is deliberate. Stories shouldn’t contain every last detail—far from it. In elaborating the stories and writing the tests, coders and testers should continue having conversations with each other and the business representatives.
Teams using electronic systems may also record acceptance criteria there, but without the limits imposed by physical cards, these teams have to resist the temptation to add more and more detail. Remember: A story is a placeholder for a conversation. Resist the urge to get detailed in ACs.
Acceptance Criteria and Testers
ACs expand on the initial story, so they are usually written by the same person who wrote the original story—probably the business representative or product owner (PO). However, when a PO is short on time, ACs are frequently dropped. That is not always a bad thing, but it may well be the sign of a problem. Testers might need to step in to add acceptance criteria themselves.
Usually, testers begin their work with existing ACs. They may give feedback to the PO on how to improve the criteria, but their main role is to take the ACs and create actual tests from them. Hopefully these tests are automated, but if not, they will be natural language descriptions of how to perform the tests.
The story and ACs form the requirement; the tests form the specification. Requirements describe what the business wants to achieve, while specifications describe the detailed parameters within which the solution needs to perform. Specifications always need to be testable. Techniques such as specification by example and acceptance test-driven development make specifications themselves executable as automated tests.
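As a sketch of what an executable specification can mean in practice, consider a minimal pytest-style example. The business rule here (“orders of $50 or more ship free”) and the function `free_shipping` are invented for illustration; they are not from this article:

```python
# Hypothetical acceptance criterion turned into an executable specification.
# The rule "orders of $50 or more ship free" and free_shipping() are
# invented for illustration.

def free_shipping(order_total: float) -> bool:
    """Code under test: does this order qualify for free shipping?"""
    return order_total >= 50.0

# Each concrete example doubles as specification and automated test.
def test_order_under_threshold_pays_for_shipping():
    assert free_shipping(49.99) is False

def test_order_at_threshold_ships_free():
    assert free_shipping(50.00) is True
```

When examples like these are agreed with the product owner before coding, running them (e.g., with pytest) is what makes the specification executable.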
If it helps product owners to talk to testers or programmers when writing stories and acceptance criteria, then they should. And if it helps testers to write tests by talking to the programmers and POs, they also should. Teams that encourage such collaboration sometimes call these discussions “Three Amigos” or the “Power of Three.” In these conversations, people representing requirements, testing, and coding come together to talk about the story. Not only will they discuss the story and ACs, but they also may change them, add to them, or split the story into multiple smaller stories.
The Level of Detail
Consider this example:
As a delivery company, I want to have the customer’s full postal address on packages so that I can deliver them correctly.
Such a story might have ACs like:
Customers must provide a valid postal address
Shipping labels must carry a correctly formatted address
Notice that a “valid postal address” is not defined. That kind of detail can be left until later, as ACs are not the place to elaborate. (An exception might be when some nonobvious detail is important, say, using all nine digits of the ZIP code.)
The amount of detail needed depends on the knowledge and experience of the programmers and testers. A tester might use her existing knowledge to write the test script, or a programmer and tester may need to research what constitutes a “valid address.” In this postal address scenario, a tester testing software for a country she has never visited might ask for more details than the product owner expects. At that point, a direct conversation between the product owner and the tester beats more written detail.
The Right Time to Define
I am frequently asked, “When should we write the acceptance criteria?” Sometimes programmers resist giving an effort estimate for a story unless they can see the ACs—sometimes detailed ACs, at that. However, there is little point in POs (and testers) spending time on ACs until stories are about to be scheduled. After all, if ACs take hours to write and the story is not scheduled, their time is wasted.
Also, if ACs are added but then the story doesn’t get scheduled for a year, by that time the story and the ACs may have changed. Because old criteria are in place, it can be easy to overlook these potentially important changes.
I would rather product owners did not write ACs until the last possible moment—even just before the planning meeting. At that point they should be fairly sure what they will request from the team. This would have the beneficial effect of forcing brevity on the PO.
Writing ACs inside the iteration neatly sidesteps the problem of postponed or canceled work. This means the team must be prepared to accept—and even estimate, if needed—stories without ACs. Because writing ACs might well be the first task in an iteration before any code or tests are written, any effort estimates given must be for the work required to “write ACs and deliver the code” rather than just “deliver the code.”
Another solution I have had some success with is writing the ACs within the planning meeting. At this point, teams know the stories to schedule. This will make the planning meeting longer, but on a small team there are unlikely to be many stories. A large team can split into smaller groups and work on stories separately.
(Test scripts based on ACs, however, are best created within the iteration, preferably before coding begins. Creating test scripts is part of the work of delivering a story.)
Acceptance Criteria in Action
Acceptance criteria can be helpful in expanding on and elaborating user stories. However, ACs should not be seen as a substitute for a conversation; they are not a route back to long documents and extremely detailed requirements.
Remember to use ACs sparingly to record key criteria at a high level. Defer details to conversations within the iteration and elicit specifications as needed.
I agree that writing ACs way ahead is wasted effort. When the story is up for implementation, I always find that three and a half bullet points of ACs are all we have. Taking the postal address example, way too many questions are left unanswered. I assume that the system already has a customer record in place, but I'll get back to that later. That example should also have been split into two stories, because printing the shipping label is an entirely different set of functionality. That AC has no place in that story.
“Valid address” is vague, and it may become less so as the team gets to know the business. If the application is only used for operating within the US, we can apply the USPS addressing rules. If we ship internationally, now or later, the validity of an address depends on the country. For example, in the US the house number has to come before the street name; in Germany it is exactly the opposite (no offense, that also makes more sense).
There are also other questions that need answering: How long are the entries in each field allowed to be? Can we work with a single compound address field, or do we need separate fields? Do we need to split out the street name, type, and pre- and postfixes? That matters a lot if we ever want to use geolocation to find the exact position of an address. All of this matters when a team is asked to estimate how long a story will take to implement. Without upfront details, any estimate will be too inaccurate to be useful for planning.
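To make the field question concrete, here is a minimal sketch of the separate-fields option; the field names and the two-country formatting rule are assumptions for illustration, not decisions from the story:

```python
from dataclasses import dataclass

# Hypothetical "separate fields" model for the address discussion above.
@dataclass
class PostalAddress:
    house_number: str   # text, not int: "221B" is a legitimate house number
    street_name: str
    city: str
    postal_code: str
    country: str        # ISO code; drives which formatting rules apply

def street_line(addr: PostalAddress) -> str:
    # US: house number before the street name; Germany: the opposite.
    if addr.country == "DE":
        return f"{addr.street_name} {addr.house_number}"
    return f"{addr.house_number} {addr.street_name}"
```

With a single compound field, neither the reordering nor later geolocation lookups are possible without parsing; that is the trade-off the estimate depends on.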
I work on an Agile team, and we are building a brand-new application. At the beginning of the project the product owner crafted a list of all the stories considered the core set of features. We were given only the story titles and a few ACs for some stories. We also have varying levels of business and technical experience on the team. Additionally, stories for framework tasks and security were not even included. We started estimating how long it would take us to get the core functionality in place, and we came up with 10 years. Now, two years later, we have more than surpassed core feature implementation.

We also stopped estimating anything, because over the course of two years all estimates were horribly wrong. The reasons: lack of details, and disruption by customer support issues that required the help of the experts on the development team (often not due to bugs, but other issues). What we do now is have the product owner put out a wish list for our three-month iterations (yes, we ditched the unworkable three-week sprints and, with that, Scrum altogether). After six weeks we take a hard look at the list and figure out what we might be able to finish; sometimes we remove stories, sometimes we add stories. In the end, stories are done when they are done. We would rather spend the time coding and testing than estimating. I think this is what Agile is about: eliminating process that yields no value. Knowing details upfront is incredible value; estimating and crafting hollow ACs is not.
As mentioned before, I assumed that a customer record is in place. Often a batch of stories is submitted to the team that covers all the functionality for a new type of record, object, or feature. Estimating only the individual stories without knowing the big picture and future desires is pointless. What if customers want to keep multiple shipping addresses? The ACs in the example suggest there is only ever one. A lot of things have to be done differently, front end and back, when we no longer have a 1:1 relationship. Also, moving from 1:1 to 1:many is often tricky and requires fix-ups for existing data and incremental database schema changes. Getting more detail upfront, with the big picture included, will save the team a lot of work overall and reduce risk.
Lastly, asking for nine digits of a ZIP code is one way to approach this. The problem is that each time the ZIP needs to be displayed, the nine digits must be parsed so that the dash can be inserted before the last four digits. Asking for digits only also implies that we work exclusively with numerical values. The ZIP could be stored as an integer and then converted to a string. That also means the user is not allowed to enter the dash. So what happens when they do it anyway? The UI field needs to ignore non-numerical characters on entry; it cannot strip the dash later, because we also have to limit the field to nine positions. It might be better to allow 10 alphanumeric characters and then validate the input. That needs to happen anyway, because we have to reject entries such as 00000-0000, which is not a valid ZIP. We also need to know whether ZIP is a required field; if it is, the database column has to disallow null values, so that in the worst case we get a database error. Ten alphanumeric characters also work much better if the decision is later made to allow Canadian addresses. The validation for those postal codes is quite different, but we would not have to revamp the UI and the database schema. This is the discussion that is needed before any reasonable estimate can be given, because it matters a lot for implementation and testing. Filling in the details later is, in my opinion, really bad advice!
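The accept-then-normalize approach described above might look like the following sketch. The specific rules (5 or 9 digits, tolerating a typed dash, rejecting the all-zeros entry) are taken from the comment; this is not a definitive validator:

```python
from typing import Optional

# Sketch of "accept a wider field, then validate and normalize".
def normalize_zip(raw: str) -> Optional[str]:
    """Return a canonical ZIP ('12345' or '12345-6789'), or None if invalid."""
    digits = raw.strip().replace("-", "")     # tolerate a user-typed dash
    if not digits.isdigit() or len(digits) not in (5, 9):
        return None
    if set(digits) == {"0"}:                  # reject 00000 and 00000-0000
        return None
    if len(digits) == 9:
        return f"{digits[:5]}-{digits[5:]}"   # reinsert the dash for display
    return digits
```

Storing the canonical string (rather than an integer) keeps leading zeros and avoids re-parsing for display, which is the point the comment makes.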
"Filling in the details later is in my opinion really bad advice!"
Which begs the question, how much in advance do you want them?
The day before?
A week before?
A month before?
The further in advance you do them the more likely they are to change.
The later you do them the less likely they are to change. So we need to consider the "last responsible moment": When is a decision going to be made for which the details are required? I would suggest details can be postponed until that moment.
If that moment is prior to the sprint, then it falls in a previous sprint and takes effort from that sprint; looked at one way, it's a mini-waterfall.
That moment may well be the point at which coding occurs. If you are working without estimates, that is fine: when you come to do the work, you first work out the details, then do the work.
If you are using estimates, then surely the time and effort to flesh out those details should be included. To my mind, estimating work without all the details in place means a) the estimate covers fleshing out the details and then doing the work; and b) the estimate will, by its nature, be more variable.
That an individual estimate is more variable may be a problem for that story, but it is not a problem for a larger forecasting system, because there will be more data points and the work effort should be calibrated against past performance.
Now there is a catch. Even if you devote a lot of time and effort to fleshing out the details in advance, you will rarely get everything. It is only by doing the work that we really understand the work. Therefore a system predicated on knowing all the details upfront will regularly encounter problems, whereas a system that assumes "we don't know everything" will work in every case.
I also have a concern that "tell us what you want" is an old request from programmers and testers. We have had years of requirements and specification documents showing that we can't know this in advance, and such a stance invariably creates tension between the technical team and the business representatives.
Thanks for your response. There are a few items I will need to mull over. In my experience, estimates are taken at face value: when the team estimates that these 10 stories will take a total of three months, many take that as a guarantee that the work will be done in three months. That is the danger of putting estimates out. Besides that, any project manager should double the estimates given, because teams tend to estimate too aggressively.
I agree that the details cannot be worked out way in advance and really do not have to be. But they need to be available to some degree when estimates are requested. Including the time for finding out the details in estimates sounds to me like a chicken-and-egg problem. As pointed out in my first response, I cannot tell how much effort a test will be when I have little to no idea what the scope of work is. Just recently I faced that question; while I did not give an estimate, it would have been grossly wrong if I had. The reason: I expected a UI and user interactivity to be necessary for the requested feature. As it turned out, after coding started we got the important detail that a fully automated (and in this case much dumber) process was wanted. That significantly simplified the feature, allowed me to use existing test data, write fewer tests, and get a much better understanding of the expected outcome. So what good would an estimate have been if I had given one? In this case it would have been fine, because in the end we needed less time, but it could just as easily have gone the other way.
There is talk that we will go back to estimating, and if I do not get the details I need to make a reasonable assessment, I have no choice but to grossly overinflate my estimate. I did that in the past purely to cover myself, but what happened is that I was no longer asked for estimates. Instead, the lead dev was asked for an estimate in a hallway conversation, and that is what turned into the expected delivery date.
One last note about changing requirements: even within an IT delivery, requirements change, often even after coding and testing are done. I'd be fine with that if it weren't for changes that had been known to the product owner for weeks already. That communication issue might be unique to our team and organizational structure. The product owner does not report to the same manager as the rest of the team, which injects office politics and sets expectations differently. Most of the time the product owner does not even attend the standups.
"In my experience estimates are taken at face value"
Unfortunately this is all too true. There are things you can do to come up with more meaningful estimates, but in general, the further out you try to estimate, the more wrong the estimate is going to be. That isn't necessarily because we are bad people—although we humans do have difficulty estimating—but because the further out you look, the more will have changed by the time you get there.
"any project manager should double the estimates given because teams tend to estimate too aggressively."
Unfortunately that makes the problem worse. Once you allow more time we take more time.
We have estimates and deadlines round the wrong way:
The business representatives should estimate the value of a request and tell us engineers when they need it by; they set the deadlines. We can then all engage in a conversation about how value changes over time, and we engineers can work to create a solution within that model.
I'm going to talk about value estimates in a future piece in this series. As to effort estimates, that is quite a minefield and I'm not sure I can do it justice in a series such as this.
I do talk about it some in my Xanpan book, OK, I'm plugging my book but I do talk about it more there: www.leanpub.com/xanpan