Blending Test Automation Approaches

[article]

On the left side, we have all my friends who champion and lead in the use of exploratory test methods. That means humans thinking in the moment, learning about the software by using it, and using those learnings to create and run new experiments. On the right are the test automators, who find that kind of hands-on work dull and repetitive, who would rather write code to generate fast feedback, and who want to confirm through code that the software as built meets the customer's needs.

The automators have been disappointed in me for years because of my emphasis on exploratory and thinking methods, but I do like the idea of having the computer assist with repetitive tasks when possible. Meanwhile, when I talk about test automation, the exploratory folks assume I mean something creaky and slow that drives the user interface—that it will never work long term. They are worried about me.

So, please allow me to get specific about the blended approach I recommend, the system of forces around it, and, perhaps, a few tips about how this might be helpful to your organization.

The Method
For each minimal, viable feature, or “story,” we have a text description, but that is open to interpretation. Before anyone writes any code, we hold a kickoff meeting. The kickoff meeting is designed to get agreement on what the feature will be down to the detail level, so it needs to include everyone who might work on the story, including the customer, analyst, programmers, testers, and anyone else with an interest.

In addition to building a mental model and agreement on what we will build, we also create some examples. Here are some simple examples for a conversion feature:

Convert Fahrenheit to Celsius
Given F     Expect C
0           -17.8
32          0
212         100
100         37.8
500         260
Twenty      EXCEPTION
-1000       EXCEPTION
1000        EXCEPTION

One Approach to Automation
This is not a test plan; it is a list of examples. It is not comprehensive, and that is not our goal. Instead, we want to provide some examples to drive development—to let the programmers feel confident that the code is really, truly, actually ready for exploratory testing without wasting anyone’s time.

Once the kickoff is complete, the programmers build the automation to call the function, which is just a “stub.” The tests will run using a tool like FitNesse, SpecFlow, or Cucumber. At this point, they fail spectacularly. Now, the programmers make the tests pass.
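To make that workflow concrete, here is a minimal sketch of what those business-level checks might look like. It assumes Python and pytest rather than the tools named above, and the function name convert_f_to_c is purely hypothetical; the point is simply that the examples agreed on in the kickoff become executable checks that fail against the stub.

```python
# test_conversion.py -- a hypothetical sketch in Python/pytest, not the
# article's actual code or tooling (FitNesse, SpecFlow, or Cucumber).
import pytest


def convert_f_to_c(given_f):
    """Stub for the conversion feature; the programmers fill this in later.

    Every check below fails against this stub, then passes once a real
    implementation replaces it.
    """
    raise NotImplementedError("conversion not built yet")


# Rows copied from the kickoff example table: (Given F, Expect C).
CONVERSION_EXAMPLES = [
    ("0", -17.8),
    ("32", 0.0),
    ("212", 100.0),
    ("100", 37.8),
    ("500", 260.0),
]

# Inputs the team agreed should raise an exception.
EXCEPTION_EXAMPLES = ["Twenty", "-1000", "1000"]


@pytest.mark.parametrize("given_f, expect_c", CONVERSION_EXAMPLES)
def test_examples_convert_as_agreed(given_f, expect_c):
    assert convert_f_to_c(given_f) == pytest.approx(expect_c, abs=0.05)


@pytest.mark.parametrize("given_f", EXCEPTION_EXAMPLES)
def test_examples_outside_the_contract_raise(given_f):
    with pytest.raises(ValueError):
        convert_f_to_c(given_f)
```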

Notice what I am saying here: The automated, business-level checks you see above, which work below the GUI, are something the programmers do as they do the work, not after. These tests are all done and pass before a story is moved out of the “dev” column. Add a little bit of developer “poke” testing to make sure the system properly calls the function, and you can radically improve code quality before it gets to a second set of eyes to explore.
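For completeness, here is one hypothetical way the stub might be filled in so the checks shown earlier go green, along with the kind of quick “poke” a programmer might do to confirm the surrounding system really calls the function. The input range rule is an assumption read off the example table, not something the article specifies.

```python
# conversion.py -- one possible implementation that would satisfy the
# business-level checks; names and the range rule are assumptions.
def convert_f_to_c(given_f):
    """Convert a Fahrenheit value supplied as text to Celsius.

    Raises ValueError for non-numeric input, values below absolute zero,
    or values at or above an assumed upper bound of 1000.
    """
    try:
        fahrenheit = float(given_f)
    except (TypeError, ValueError):
        raise ValueError("not a number: %r" % (given_f,))
    if fahrenheit < -459.67 or fahrenheit >= 1000:
        raise ValueError("out of range: %s" % fahrenheit)
    return round((fahrenheit - 32) * 5.0 / 9.0, 1)


if __name__ == "__main__":
    # A quick developer "poke": run the module directly and eyeball a
    # couple of values before handing the story over for exploration.
    print(convert_f_to_c("212"))  # expect 100.0
    print(convert_f_to_c("0"))    # expect -17.8
```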

This distinction eliminates the classic role of “Bob the test automator,” who is isolated from the original development and tries to drive the code in a black-box fashion after it is “complete.” Notice that the programmers need to make all tests pass before they can call the story “done.” This eliminates the delta, or the difference between the code as it is and the code that the test automation expects to be testing.

But, Where?
If your product is one large, integrated application and most of the bugs are in the graphical layer, then this approach might not be for you. Likewise, if you have a lot of regression bugs, the framework may not catch them. It is designed to be light, cheap, fast, and easy to maintain, but certainly not comprehensive. The kind of teams I have seen achieve the most success with this approach had a large number of mostly isolated applications that were, for the most part, in maintenance mode.

Some friends of mine call this approach “acceptance test-driven development,” and I am inclined to agree. The problem comes when we compress a complex subject (this article barely scratches the surface) to four or five words. When I am in a hurry and try to compress a test strategy without discussing the environment, team, operational risks, and detailed tactics, the person to whom I’m explaining makes assumptions, often speculating that my environment is like his own. And, of course, those ideas may not be appropriate in that context, which can create needless arguments and confusion.

My main point is that it is time for us as a community to move the conversation forward and talk about strategy—not through patterns, abstractions, and labels, but by digging into details. We must understand the hows, whys, and risks that make a strategy appropriate.

If you would like to join me and have a story to tell, then I would like to hear it. Let’s talk about what you are doing, why you are doing it, the tradeoffs involved, and how it is working for you. Please, leave a comment and tell me about it.

User Comments

6 comments
Brendan Clarke

We are using the "blended approach" in our story testing in that we are looking to use both automation and manual testing as part of our strategy. We want to get the benefits of automation (prompt feedback, reliable and repeatable coverage, reduced regression, etc.). We also want to get the benefits of human interaction with the system – all the observation, cognitive thought processes, and the ability to spot what’s missing. Overall, our strategy is simple: use the most appropriate tools and techniques we can to deliver the story.

Like Matt, we have story planning with all the key stakeholders to get agreement on the vision and detail of the story. We (PM, developers, and QA) refine and add acceptance cases as the meeting progresses, and add examples of data as required, covering both positive and, more importantly, negative scenarios. We also have continuous integration running our existing automated regression pack of system tests.

Done is done when the developer unit tests are complete, the agreed QA automated tests are complete, the manual guys are done, and we have specific scenarios identified for future regression. During the process the stakeholders have been involved at a business level, and any refinements or corrections have either been included or scheduled into the next story.

However, we are always looking to improve and to close any gaps that appear in the process. We are looking to apply blended approaches in both our manual and automated testing. For manual testing we use a blend of script-based and ad hoc/exploratory testing, while using pairwise testers.

For automation, we have the developers showing the coverage of the unit tests to the automation engineers and assisting them with the system tests. These system tests can be below the GUI or through the GUI, depending on what is best. The system tests are agreed and reviewed in terms of coverage with the manual QA guys. They are written along with the production code, so they fail first and then pass as the code evolves. This approach benefits the manual team, as they understand the coverage of the automation and have the confidence to focus their exploratory sessions on edge cases and the other juicy stuff they thrive on, knowing the simple stuff is covered.

We are looking to adopt the approach of running not just the new automation tests but also a subset of the existing ones prior to check-in, so we can have faster feedback and greater confidence that we won’t cause a regression impact.

We have had the pain of flaky GUI-driven tests in the past and the distrust of the manual guys towards automation. We are trying to use a blended approach throughout the team to make the most of everyone’s skills and overcome the demons that have haunted us in our past efforts. One size does not fit all, and a single approach is not ideal. Mix and match – it’s the only way to go.

April 19, 2012 - 5:26am
Jim Hazen

Matt,

I agree that 'testing' in general is a 'blended' approach to getting the job done, and thus I agree with the Context Driven approach in this sense. There cannot be a one-or-the-other mentality if we as test professionals are to thrive in our work.

Being an "automation guy," I see the benefit of having the machine help with repetitive task work, which is why automation is really only geared toward executing regression-type tests (or checks, as some of you like to call them). I know a machine will never replace a human when it comes to cognitive thought processes and decisions. If it were capable of that, then it would be self-aware and SkyNet would have taken over (Watson is close enough).

The point I'm making is that automation is only a tool (and a limited one), and it is the human computer between the ears that really does all the work. So, with some insight and cognitive processing, a blended approach is probably one of the best to utilize.

So get the salt on the glasses and let's make up some testing margaritas!

April 2, 2012 - 11:23am
David Greenlees

"My main point is that it is time for us as a community to move the conversation forward and talk about strategy..."

Yes, yes, yes... I can give you a horror story of why we need this. About five years ago it started: automation through the GUI. It continues today, with almost zip in ROI! The focus was on replacing testers. It was seen as the biggest bang for the buck. No thought was given to the complexity of the application or the amount of maintenance the scripts would need; even the best tool for the job didn't rate a mention! It's fair to say that it has not reduced tester numbers, but actually added to them.

Now, this could be a worst-case scenario (and of course there is a book that could be written about this one), but I'm sure this is not the only horror story out there that could have been avoided (even partially) by doing what you have suggested in your post.

I have little automation experience, but I have enough to know that you need to spend some coin planning it properly. Also, I'm a fan of using it to assist with testing, rather than having it actually perform the testing (checking).

Good post dude.

April 2, 2012 - 9:55pm
Alex Kell

Excellent post, and a good encapsulation of the ATDD method (http://testobsessed.com/blog/2008/12/08/acceptance-test-driven-developme...), AKA BDD (http://dannorth.net/introducing-bdd/), or Executable Specifications (http://specificationbyexample.com).

It's especially important to note that some methods are not as appropriate, or perhaps even inapplicable, in certain contexts. By the same token, it's just as easy to want to cling to our existing methods. Sometimes to improve we have to make big changes, and not just incremental ones.

The key aspect of a method like this, and why it works, is the shared understanding it brings to the different roles on the team. Without that interaction, as you say, you have people working in isolation: analysts writing requirements, programmers programming, and testers testing, each with their own interpretation of what the team is supposed to be producing.

Thanks!

April 2, 2012 - 10:41pm
Timothy Western

Matt,

I can't agree more. It's time that teams start having this conversation: what can be automated, what makes sense to automate, what can provide value without being a total money sink. I think a lot of PMs, though, still see automation as some mystical thing to pursue. "Oh, we'll have the testers write hundreds of scripts, then hand them off to the automation guy, and everything will be hunky-dory. That's why we bought this tool, right?" Of course, the thought that the tool isn't one-size-fits-all and may not work for all test cases doesn't even come up in initial discussions. It needs to. I think there is value to be had in a middle layer of test automation. We need to leave the testing of the elements humans interact with to humans, or at least more than we do, because no machine can tell us how usable a product may be. Only humans can make that determination.

April 3, 2012 - 6:55am
Eric Jacobson

The “blended approach” (i.e., automated checking and exploratory testing) appears to be the current common practice for all the seasoned testers I know/follow, including you, Matt. Hopefully it’s because said seasoned testers actually perform the blended approach, and not because they think it’s the right approach to publicly associate themselves with.

What I really love hearing about are the people who admit to not using the blended approach. And since, by most ET definitions, it’s nearly impossible to do a purely automated checking approach, “non-blended” would mean “all human testing.” I suspect most seasoned testers are too embarrassed to admit to an “all human” testing approach. It just sounds like they gave up or something, right? I’m so grateful James Bach popularized terms like “exploratory testing,” “session-based test management,” and “sapient” testing, because it gave us human testers a way to sound more intelligent while still trapped in this fancy-pants IT world.

What I find interesting about your post, Matt, is your frustration with getting hung up on sound bites. You’re dead on. We just don’t have the patience to grok complex Feature examples. Like most other examples, your Fahrenheit-to-Celsius conversion Feature is so ridiculously simple that we shouldn’t even begin discussing test strategy. In my experience, there are no simple Features left. Everything simple already exists, right? Why would we ever want to program something simple that already exists?

But yes, let’s move the discussion forward and suggest a new test strategy. Let’s keep our minds open to everything. “No Testing” should be on the table. How about AI that performs ET? How about the user writes their own program? How about we kill a family member of the tester if a user is not happy with the product? How about the tester becomes a brand and their picture ships with each product they test; “Tested by Matt Heusser”. …Okay, now I’m getting stupid.

As always, thanks for jump starting my brain, Matt.

April 4, 2012 - 7:32pm
