As a test professional in waterfall, I was used to getting the code much later and buggier than I expected and being under tremendous pressure to finish my testing before the go-live date hit. Then one day, I found out that there was a better way. Testers could be involved much earlier in the lifecycle, they could participate in requirements and design decisions as they happened, and the code could actually be unit tested before I received it! Heaven? No, Daryl, it is called agile.
OK, actually, agile is not really all kittens and brownies. Testers I've known on projects tend to have a love-hate relationship with agile. In this article, we will explore the role of the tester and will talk about some unique approaches to software testing that fit the agile lifecycle like a glove.
A Short Agile Explanation
Agile projects are iterative and incremental. Iterative means that tasks in the project are executed over and over, increasing quality and completeness with each iteration. Incremental means that the application is broken up into slices, each of which provides noticeable value to the business users, and each of which is designed, built and tested in sequence.
The iterative and incremental nature of agile projects provides some unique challenges to the tester. Let me describe what it is like as an agile tester first and then discuss the issues at hand.
Agile projects are broken up into smaller mini-projects called sprints (incremental). Each sprint contains all the elements of a lifecycle—requirements, design, coding, testing. (Note: This isn't always true, but for the sake of simplicity, we will assume it is so.)
That means that in one short sprint, let's say two weeks, all those activities need to occur. "Wow!" you might be saying. "These agile guys must work a ton of hours!" Not necessarily. It depends on the scope. If the slice of the application is small enough, but still provides some value to the users, it can be done. Maybe it's just the happy path through one use case, designed, coded and tested—that is still value, isn't it?
OK, so here comes one slice of functionality through requirements (one path of a use case), design (just enough GUI, business logic, and data access to fulfill the use case path), development (just enough code to fulfill the design), and testing. How on earth do you test this kind of "application slice"?
Well, first of all, all this has to happen pretty fast. You might get the completed code on the second Tuesday of the two-week sprint and be expected to complete your work by Thursday night. In other words, two or three days.
Obviously, a new way of doing software testing is required.
Oh, and one other thing. The exact slice that your team will choose to work on during this sprint might not be known until the beginning of that sprint. You might not have weeks or even days to develop a test plan.
The Love-Hate Relationship
I think this is where the love-hate thing comes into play. Most testers I've known who are exposed to agile love that they are involved in each sprint, often being in the same room where the requirements are fleshed out and the design decisions are made. For a tester to have knowledge of, and even input into, the "whys" of the earlier lifecycle activities is very desirable. Testers can make sure the requirements are testable as the requirements are created, and they can offer their opinions on designs that may work better or worse for the users.
But the "hate" aspect comes with the tester's reluctance to put together a suite of tests for a slice of the application that they know will change "iteratively" in the sprints to come. "Why don't you give that code to me when you have something stable?" is likely to come out of the tester's mouth.
But that is just not an option in agile. The tester must find a way to live comfortably with a code base that is constantly changing and growing. Fortunately, there are ways to deal with this seemingly impossible task.