Agile Testing as if People Mattered

As a test professional in waterfall, I was used to getting the code much later and buggier than I expected and being under tremendous pressure to finish my testing before the go-live date hit. Then one day, I found out that there was a better way. Testers could be involved much earlier in the lifecycle, they could participate in requirements and design decisions as they happened, and the code could actually be unit tested before I received it! Heaven? No, Daryl, it is called agile.

OK, actually, agile is not really all kittens and brownies. Testers I've known tend to have a love-hate relationship with agile. In this article, we will explore the role of the tester and look at some unique approaches to software testing that fit the agile lifecycle like a glove.

A Short Agile Explanation
Agile projects are iterative and incremental. Iterative means that tasks in the project are executed over and over, increasing quality and completeness with each iteration. Incremental means that the application is broken up into slices, each of which provides noticeable value to the business users, and all of which are designed, built and tested consecutively.

The iterative and incremental nature of agile projects provides some unique challenges to the tester. Let me describe what it is like as an agile tester first and then discuss the issues at hand.

Agile projects are broken up into smaller mini-projects called sprints (incremental). Each sprint contains all the elements of a lifecycle: requirements, design, coding, testing. (Note: This isn't always true, but for the sake of simplicity, we will assume it is so.)

That means that in one short sprint, let's say two weeks, all those activities need to occur. "Wow!" you might be saying. "These agile guys must work a ton of hours!" Not necessarily. It depends on the scope. If the slice of the application is small enough, but still provides some value to the users, it can be done. Maybe it's just the happy path through one use case, designed, coded, and tested. That is still value, isn't it?

OK, so here comes one slice of functionality through requirements (one path of a use case), design (just enough GUI, business logic, and data access to fulfill the use case path), development (just enough code to fulfill the design), and testing. How on earth do you test this kind of "application slice"?

Well, first of all, all this has to happen pretty fast. You might get the completed code on the second Tuesday of the two-week sprint and be expected to complete your work by Thursday night. In other words, two or three days.

Obviously, a new way of doing software testing is required.

Oh, and one other thing. The exact slice that your team will choose to work on during this sprint might not be known until the beginning of that sprint. You might not have weeks or even days to develop a test plan. 

The Love-Hate Relationship 
I think this is where the love-hate thing comes into play. Most testers I've known who are exposed to agile love that they are involved in each sprint, often being in the same room where the requirements are fleshed out and the design decisions are made. For a tester to have knowledge of, and even input into, the "whys" of the earlier lifecycle activities is very desirable. Testers can make sure the requirements are testable as the requirements are created, and they can offer their opinions on designs that may work better or worse for the users.

But the "hate" aspect comes with the tester's reluctance to want to put together a suite of tests for a slice of the application that they know will change "iteratively" in the sprints to come. "Why don't you give me that code to me when you have something stable," is likely to come out of the tester's mouth.

But that is just not an option in agile. The tester must find a way to live comfortably with a code base that is constantly changing and growing. Fortunately, there are ways to deal with this seemingly impossible task.

Exploratory Testing
Many testers (agile or otherwise) describe their job as similar to piecing together a puzzle. They see testing as a process of examining a coded application and finding out how it reacts to this tweak or that. Well, let's take the puzzle metaphor a little further. Let's say you are at your kitchen table with the puzzle pieces scattered around and you're ready to start. What is your first step? Will you pick up a few pieces and try fitting them together? Or will you take a step back, look at the puzzle as a whole, and write a detailed plan of how you see the pieces coming together?

I'm going to go out on a limb and say you picked the first option. Who would build a detailed plan for a jigsaw puzzle? Yes, puzzles are a pastime and not a living. But let me ask one more question: Is a detailed plan even possible? I can't imagine that it would be for any but the simplest puzzle.

But is this comparison relevant? It becomes more relevant on an agile implementation, where you may not have the option of detailed test plans based on requirements, because the requirements simply don't exist yet. You must start fitting puzzle pieces together one or two at a time. In agile, you don't even know what the final picture of the puzzle will be. Is it a sailboat or a landscape or a child's face? Those requirements might not be there yet.

This is not to say that you will need to test without requirements, but that the requirements will often be created shortly before you begin testing, leaving you no time to write a detailed plan beforehand. That, brought back from puzzles to software, is the mindset of exploratory testing.

The term exploratory testing (ET) was coined by Cem Kaner in the 1980s and developed further by Kaner and James Bach in the 1990s to describe how testing is often more effective with less up-front planning. They were doing their research on projects that weren't necessarily agile, but did have tight timelines and changing environments that thwarted detailed planning efforts. Although Kaner and Bach were not focused on agile, everything they describe with ET fits agile very well.

Bach describes ET as simultaneous learning, test design, and test execution. The tester receives the code, takes whatever requirements exist in hand, and begins to explore the application.

But to back up a bit, ET starts with a charter. You could call a charter a type of test plan, but it isn't detailed. It is usually a set of general statements on what to test, things like "Test the shopping cart functionality," and "Test the interface to the G/L." That's the extent of it. The test manager might provide this charter to a tester. Then the tester begins ET.
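
To make that concrete, here is a sketch of what a charter might look like for a hypothetical online store (the scope and wording are invented for illustration):

    Charter: Explore the shopping cart
    - Adding, updating, and removing items
    - Cart totals, including tax and shipping
    - What happens when the session expires mid-checkout

That is the entire plan. Everything else is left to the tester's judgment during the session.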

The Process of ET 
ET involves thinking of a test, designing it, usually in your mind, and executing it. It's a good idea to informally keep track of what you do as you test. There's a good chance that you'll learn something as you test that will influence what you decide to test next. It is truly an exploration. How those first few puzzle pieces fit (or don't) has a big impact on what you decide to do next.
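
That informal tracking can be as light as a few timestamped notes. Here is a sketch of what a session log might look like (the times, findings, and defect ID are all invented for illustration):

    2:05  Added three items to the cart, removed one. Totals update correctly.
    2:12  Set a quantity to 0. The item stays in the cart with a $0.00 line. Logged DEF-214.
    2:20  Next: try negative quantities, then very large ones.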

Now, I can feel all my detailed-test-planning friends' blood starting to boil. "What you're describing," they are saying, "isn't testing at all! It's just ad hoc fooling around!"

That's a fair criticism. This approach might seem informal. But compare it to the agile iterative and incremental approach: there is iteration to improve quality and incrementalism to adapt to change. Same thing. You are testing the various functions of the application each time you get a new cut of the code, and you are breaking your tests into small increments, where one increment equals one test. Once a test is complete, you can pass it or enter a defect for it and then, with what you now know, decide what to test next. In agile, that's called a retrospective. At the end of each and every test, you're looking back at what you've learned about this release of the code and then designing your next test right then.

I've seen testers on teams I've been coaching try to execute a full suite of detailed predesigned tests in an agile lifecycle and it generally doesn't work. It just causes frustration for everyone, especially the testers. It really isn't possible in agile to preplan and predesign all your tests. ET is, as far as I can tell, your only option.

Now let's take a step back and look at what is really different about ET. We are putting control of what is tested and how it is tested into the hands of the tester. The command-and-control structure is broken in favor of autonomous, largely self-directed testers.

What About Junior Testers?
It is pretty easy to see this working well with a stable of very experienced testers, but what about with junior team members? Won't they make all the wrong choices in deciding what and how to test?

Not necessarily. Agile has a fairly well-known practice called pair programming. But there is a lesser-known practice called pair testing. Two testers, perhaps one junior and one senior, can work together designing and executing tests. It can be very productive and pretty fun.

Also fairly common with agile testing is the idea of apprenticeship. A person who is new to testing, or maybe just new to agile testing, is matched with someone who is more experienced and can help them, answer questions and give advice.

Automated Testing and Agile
One more topic to tackle is automated testing. ET requires taking a new approach to test automation. In a waterfall lifecycle, it is possible (and desirable) to construct end-to-end tests that execute an entire test scenario. In agile, this is neither desirable nor helpful. Think instead of a series of smaller test scripts that automate easily repeatable portions of the test scenarios. Maybe it's a quick automated script that fills in an entire input form with data. Essentially, the tester moves around the screens or pages and then presses a hot key when they need some brute-force data entry or a repetitive operation. Again, we are putting control of the testing into the hands of the tester instead of the automated script. It becomes less of a well-oiled machine and more of a human exploration of the software.
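
As a sketch of what one of these helpers might look like, here is a small Python script using Selenium. The signup page and its field IDs are hypothetical, invented for illustration; the point is the shape of the helper, not the specifics:

    # A minimal exploratory-testing helper: it fills the repetitive
    # fields of a form while the tester keeps control of everything else.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/signup")  # hypothetical page

    def fill_signup_form(first="Ada", last="Lovelace",
                         email="ada@example.com"):
        driver.find_element(By.ID, "first_name").send_keys(first)
        driver.find_element(By.ID, "last_name").send_keys(last)
        driver.find_element(By.ID, "email").send_keys(email)
        # Stop short of submitting: what to click next is the tester's
        # call, based on what they have just learned.

Run from an interactive Python shell, fill_signup_form() plays the role of the hot key: one call does the brute-force data entry, and the human does the exploring.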

Agile testing is neither as crazy as some people think nor as intuitively easy as others proclaim. But it is possible, and it can be very productive and enjoyable, given the chance.
