Vague or ambiguous requirements can cause loops in development processes. Creating requirements that include acceptance tests cuts down on the looping and increases the flow of working software to the customer.
Many organizations create vague or ambiguous requirements. When this occurs, developers can interpret the requirements one way and testers another way. Sometimes, neither way matches the customer’s original intent. This misinterpretation causes loops in the development processes; however, creating requirements that include acceptance tests cuts down on the looping and increases the delivery of working software to the customer.
Typical Development Flow

The flow in a typical waterfall development process is shown in figure 1. Customers meet with developers only at the beginning of the project to go over requirement details. Testers do not receive the system until the end of development. Each of the dotted lines represents a feedback loop. Defect reports are feedback for the developers; depending on the type of defect, a change may be required in the code, the design, or the requirements. These loops cause delays in producing acceptable working software.
Figure 2 shows an alternative development flow. When work on a requirement starts, a triad—the customer, a developer, and a tester—collaborates. During this collaboration, the requirement and its corresponding, specific acceptance tests are developed together. The tests are recorded in the customer's domain language and are understandable to the customer. The customer (or a subject matter expert) provides the initial data for the tests. To further clarify the requirement, the triad may create additional tests that cover such things as boundary values or behavior in exceptional situations. The developer uses these tests while writing the code. The implementation of the requirement is not complete until all the tests have been executed successfully.
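As a sketch of what the triad might record, suppose the requirement is a hypothetical loyalty-discount rule (the rule, names, and values here are invented for illustration, not taken from any real system). The acceptance tests can be captured as a table of examples in the customer's own terms, including a boundary case:

```python
# Hypothetical acceptance tests for a loyalty-discount requirement,
# recorded as a table of examples in the customer's domain language.
# Each row: (customer type, order total, expected discount)
acceptance_examples = [
    ("regular", 100.00, 0.00),  # regular customers get no discount
    ("loyal",   100.00, 5.00),  # loyal customers get 5 percent off
    ("loyal",     0.00, 0.00),  # boundary value: an empty order
]

def discount(customer_type, order_total):
    """Stand-in for the implementation the developer will write."""
    rate = 0.05 if customer_type == "loyal" else 0.0
    return round(order_total * rate, 2)

def run_acceptance_tests():
    """Execute every example row; the requirement is done when all pass."""
    results = []
    for customer_type, total, expected in acceptance_examples:
        actual = discount(customer_type, total)
        results.append((customer_type, total, expected, actual, actual == expected))
    return results

if __name__ == "__main__":
    for row in run_acceptance_tests():
        print(row)
```

The table itself is what the customer reads and agrees to; the surrounding code is only the machinery that executes it.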
This process is called acceptance test-driven development (ATDD). Acceptance tests are required for all requirements. No coding begins until the tests are defined. The developer is not done with the implementation until it passes all the tests.
Requirements and tests are linked together. You can’t have one without the other. They are like Bonnie and Clyde, The Captain and Tennille, Bullwinkle and Rocky, Starsky and Hutch, and SpongeBob and Patrick. The tests clarify and amplify the requirements. A test that fails shows that the system does not properly implement a requirement. A test that passes is a specification of how the system works. Any test created after the code is written is a new requirement.
ATDD does not require that tests be automated. However, it does require that the results of the tests be visible to the customer. A developer entering test values with a debugger usually would not be considered visible to the customer. An xUnit test that is understandable to the customer may serve as an acceptance test. A document-based testing framework, such as the Framework for Integrated Tests (Fit), can be much easier for a customer to read.
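An xUnit test can stay customer-readable if its names and assertions echo the requirement's own language. A minimal sketch, reusing the hypothetical discount rule from above (the rule and values are invented for illustration):

```python
import unittest

def discount(customer_type, order_total):
    # Stand-in implementation; the 5 percent loyalty rule is hypothetical.
    rate = 0.05 if customer_type == "loyal" else 0.0
    return round(order_total * rate, 2)

class LoyalCustomerDiscount(unittest.TestCase):
    """Test names mirror the requirement, so a customer can follow
    the test report without reading the code."""

    def test_loyal_customer_gets_five_percent_off(self):
        self.assertEqual(discount("loyal", 100.00), 5.00)

    def test_regular_customer_gets_no_discount(self):
        self.assertEqual(discount("regular", 100.00), 0.00)

if __name__ == "__main__":
    unittest.main()
```

When such a test runs, the report lists sentences like "loyal customer gets five percent off" as passing or failing, which is the visibility ATDD asks for.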
If the ATDD tests are automated, they can serve as regression tests to ensure that changes have not affected the implementation of previous requirements. The tests can be automated using Fit, Selenium, Cucumber, or other testing frameworks (see the StickyNotes for more tools).
Frequently, there is a question about whether a test is an acceptance test (ATDD) or a unit test (test-driven development [TDD]). The difference is not in the form of the test—Fit or xUnit—but in the intent of the test. If the test is understood by the business, it is an acceptance test. If not, it is a TDD test. Many TDD tests are derived from ATDD tests. They specify the behavior of the modules in the system that support the behavior specified by the ATDD tests.
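To make the intent distinction concrete, here is a hedged sketch of an ATDD-style test and a TDD-style test for the same hypothetical discount rule. The first is phrased in business terms the customer recognizes; the second specifies a supporting module the customer never sees (all names and values are invented for illustration):

```python
# ATDD-style: phrased in the business's terms. If the customer can read
# and understand it, it is an acceptance test.
def test_loyal_customer_pays_95_dollars_on_a_100_dollar_order():
    assert amount_due("loyal", 100.00) == 95.00

# TDD-style: specifies an internal supporting module. The business does
# not care how rates are looked up, only what the customer pays.
def test_rate_lookup_defaults_to_zero_for_unknown_customer_types():
    assert rate_for("some-unknown-type") == 0.0

def rate_for(customer_type):
    """Internal rate table; a detail derived from the acceptance test."""
    return {"loyal": 0.05}.get(customer_type, 0.0)

def amount_due(customer_type, order_total):
    """Business-facing behavior specified by the acceptance test."""
    return round(order_total * (1 - rate_for(customer_type)), 2)

if __name__ == "__main__":
    test_loyal_customer_pays_95_dollars_on_a_100_dollar_order()
    test_rate_lookup_defaults_to_zero_for_unknown_customer_types()
    print("all tests passed")
```

Note that both tests could be written in the same xUnit form; only their audience and intent differ, which is exactly the point.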
A requirements document that uses prose or free-form text can be vague or ambiguous. For clarification, developers and testers can ask the customer to give them an