Your developers are already working feature-by-feature in iterations, but your testers are stuck with manual tests. How do you make the leap to agile testing when the nature of agile's iterative releases challenges testers to test working segments of a product instead of the complete package? In this week's column, Johanna Rothman explains that the key challenge resides in bringing the whole team together to work towards the completion of an iteration. Only then will the testers--and the entire team--know how to transition to agile.
Some test teams may be stumped on how to transition to agile. If you're on such a team, you probably have manual regression tests, either because you have never had the time to automate them or because you are testing from the GUI and it doesn't seem to make sense to automate them. You probably have great exploratory testers who can find problems inside complex applications, yet they tend not to automate their testing and need a finished product before they start. You know how to plan the testing for a release, but now everything has to be done inside a two-, three-, or four-week iteration. How do you make it work? How do you keep up with development?
This is a common problem. In many organizations, developers think they have transitioned to agile while testers are still stuck in manual testing efforts and unable to "keep up" at the end of the iteration. When I explain to these people that they are receiving only partial benefit from their agile transition, the developers and testers both explain that the testers are just too slow.
The problem isn't that the testers are too slow; it's that the team does not own "done." Until the team owns "done" and works together to achieve it, the testers will appear too slow.
Know What "Done" Means
Agile teams can release a working product every iteration. They may not have to release, but the software is supposed to be good enough to release. That means that testing--which is about managing risk--is complete. After all, how can you release if you don't know the risks of release?
Testing provides information about the product under test. The tests don't prove that the product is correct or that the developers are great or terrible, but rather that the product does or doesn't do what we thought it was supposed to do.
That means the tests have to match the product. If the product includes calls to another system, some set of tests has to call that other system. If the product includes a GUI, the tests--at some point--have to use the GUI. But there are many ways to test inside a system. If you test under the GUI, you can build the tests as you proceed, so you don't need to test only end to end, and you will still receive valuable information about the product under test.
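To make "testing under the GUI" concrete, here is a minimal sketch: instead of driving the user interface, the test calls the same business logic the GUI would invoke, so it can be written as the feature is built. The `DiscountService` class and its pricing rule are hypothetical, invented purely for illustration.

```python
class DiscountService:
    """Hypothetical business logic that a GUI screen would normally invoke."""

    def price_after_discount(self, price: float, quantity: int) -> float:
        # Assumed rule for this sketch: 10% off orders of 10 or more items.
        total = price * quantity
        return total * 0.9 if quantity >= 10 else total


# Tests that exercise the feature under the GUI, without any UI automation.
def test_bulk_discount_applied():
    service = DiscountService()
    assert service.price_after_discount(5.0, 10) == 45.0


def test_small_order_pays_full_price():
    service = DiscountService()
    assert service.price_after_discount(5.0, 2) == 10.0


if __name__ == "__main__":
    test_bulk_discount_applied()
    test_small_order_pays_full_price()
    print("under-the-GUI tests passed")
```

Because these tests bypass the GUI, they can run in every build of the iteration; a thin layer of GUI-driven checks can then run less often, as the article suggests below.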
If the developers are only testing from the unit-level perspective, they don't know if a feature is done. If the testers can't finish the testing from the system-level perspective, they don't know if a feature is done. If no one knows if a feature is done, how can you call it done for an iteration? You can't. That's why it's critical for the team to have a team-generated definition of done. Is a story done if the developers have tested it? Is a story done if the developers have integrated and built it into an executable? What about installation? How much testing does a feature need in order to know if it's done or not?
There is no one right answer for every team. Each team needs to look at its product, customers, and risks, and say, "OK, we can say it's done if: all the code is checked in, reviewed by someone, or written in a paired way; all the developer tests are done; and all the system tests have been created and run for this feature under the GUI. We'll address GUI-based checking every few days, but we