Blending Test Automation Approaches


Some friends of mine call this approach “acceptance test-driven development,” and I am inclined to agree. The problem comes when we compress a complex subject (this article barely scratches the surface) into four or five words. When I am in a hurry and try to compress a test strategy without discussing the environment, team, operational risks, and detailed tactics, the person to whom I’m explaining it makes assumptions, often speculating that my environment is like his own. And, of course, those assumptions may not be appropriate in my context, which can create needless arguments and confusion.

My main point is that it is time for us as a community to move the conversation forward and talk about strategy—not through patterns, abstractions, and labels, but by digging into details. We must understand the hows, whys, and risks that make a strategy appropriate.

If you would like to join me and have a story to tell, then I would like to hear it. Let’s talk about what you are doing, why you are doing it, the tradeoffs involved, and how it is working for you. Please, leave a comment and tell me about it.

User Comments

Brendan Clarke

We are using the "blended approach" in our story testing in that we are looking to use both automation and manual testing as part of our strategy. We want to get the benefits of automation (prompt feedback, reliable and repeatable coverage, reduced regression, etc.). We also want to get the benefits of human interaction with the system: all the observation, the cognitive thought processes, and the ability to spot what’s missing. Overall, our strategy is simple: use the most appropriate tools and techniques we can to deliver the story.

Like Matt, we have story planning with all the key stakeholders to get agreement on the vision and detail of the story. We (PM, developers, and QA) refine and add acceptance cases as the meeting progresses and add examples of data as required, covering both positive and, more importantly, negative scenarios. We also have continuous integration running our existing automated regression pack of system tests.

Done is done when the developer unit tests are complete, the agreed QA automated tests are complete, the manual guys are done, and we have specific scenarios identified for future regression. Throughout the process the stakeholders have been involved at a business level, and any refinements or corrections have either been included or scheduled into the next story.

However, we are always looking to improve and to close any gaps that appear in the process. We are looking to apply blended approaches in both our manual and automated testing. For manual testing we use a blend of script-based and ad hoc/exploratory testing, with testers working in pairs.

For automation, we have the developers sharing the coverage of the unit tests with the automation engineers and assisting them with the system tests. These system tests can run below the GUI or through the GUI, depending on what is best. The system tests are agreed on and reviewed for coverage with the manual QA guys. They are written along with the production code, so they fail first and then pass as the code evolves. This approach benefits the manual team, as they understand the coverage of the automation and have the confidence to focus their exploratory sessions on edge cases and the other juicy stuff they thrive on, knowing the simple stuff is covered.
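
To make that fail-first idea concrete, here is a minimal sketch in Python with pytest, borrowing the Fahrenheit-to-Celsius example from Matt's article. The TemperatureConverter class and its to_celsius method are hypothetical stand-ins for whatever service layer a team exposes below the GUI; in practice the tests are written before that code exists, so they fail until the implementation catches up.

```python
# A minimal fail-first sketch in pytest. TemperatureConverter is a
# hypothetical stand-in for the real service layer; it is included
# here only so the sketch runs on its own.
import pytest


class TemperatureConverter:
    """Hypothetical production code under test."""

    def to_celsius(self, fahrenheit):
        if not isinstance(fahrenheit, (int, float)):
            raise ValueError("fahrenheit must be a number")
        return (fahrenheit - 32) * 5 / 9


@pytest.fixture
def converter():
    return TemperatureConverter()


def test_freezing_point(converter):
    # A positive scenario agreed on in story planning.
    assert converter.to_celsius(32) == 0


def test_boiling_point(converter):
    assert converter.to_celsius(212) == 100


def test_rejects_non_numeric_input(converter):
    # A negative scenario, added alongside the happy path.
    with pytest.raises(ValueError):
        converter.to_celsius("not a number")
```

Because tests like these run below the GUI, they stay fast and are far less fragile than GUI-driven scripts, which is part of what lets the manual team trust the coverage.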

We are looking to adopt the approach of running not just the new automation tests but also a subset of the existing ones prior to check-in, so we can have faster feedback and greater confidence that we won’t cause a regression.
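
One lightweight way to carve out such a pre-check-in subset is to tag a fast, high-value slice of the regression pack with a test marker. The sketch below uses pytest; the smoke marker name and the normalize_username function are illustrative assumptions, not anything from the team's actual setup.

```python
# Sketch of tagging a fast pre-check-in subset of the regression pack.
# Register the marker once in pytest.ini to avoid warnings:
#   [pytest]
#   markers = smoke: fast subset run before check-in
import pytest


def normalize_username(raw: str) -> str:
    """Hypothetical production code under test."""
    return raw.strip().lower()


@pytest.mark.smoke
def test_username_is_normalized():
    # Part of the pre-check-in subset: fast and high-value.
    assert normalize_username("  Alice ") == "alice"


def test_mixed_case_usernames():
    # Unmarked: stays in the full regression pack that CI runs.
    assert normalize_username("BOB") == "bob"
```

Developers would then run pytest -m smoke before check-in for fast feedback, while the continuous integration server continues to run the whole pack.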

We have had the pain of flaky GUI-driven tests in the past and the distrust of the manual guys toward automation. We are trying to use a blended approach throughout the team to make the most of everyone’s skills and overcome the demons that have haunted us in our past efforts. One size does not fit all, and a single approach is not ideal. Mix and match – it’s the only way to go.

April 19, 2012 - 5:26am
Jim Hazen

Matt,

I agree that 'testing' in general is a 'blended' approach to getting the job done, and in that sense I agree with the Context Driven approach. There cannot be a one-or-the-other mentality if we as test professionals are to thrive in our work.

Being an "automation guy," I see the benefit of having the machine help with repetitive task work, which is why automation is really only geared toward the execution of regression-type tests (or checks, as some of you like to call them). I know a machine will never replace a human when it comes to cognitive thought processes and decisions. If it were capable of that, then it would be self-aware and SkyNet would have taken over (Watson is close enough).

The point I'm making is that automation is only a tool (and a limited one at that), and it is the human computer between the ears that really does all the work. So, with some insight and cognitive processing, a blended approach is probably one of the best to utilize.

So get the salt on the glasses and let's make up some testing margaritas!

April 2, 2012 - 11:23am
David Greenlees

"My main point is that it is time for us as a community to move the conversation forward and talk about strategy..."

Yes, yes, yes... I can give you a horror story of why we need this. About five years ago it started: automation through the GUI. It continues today, with almost zip in ROI! The focus was on replacing testers; it was seen as the biggest bang for the buck. No thought was given to the complexity of the application or the amount of maintenance the scripts would need; even the best tool for the job didn't rate a mention! It's fair to say that it has not reduced tester numbers but has actually added to them.

Now, this could be a worst-case scenario (and of course a book could be written about this one), but I'm sure it is not the only horror story out there that could have been avoided (even partially) by doing what you have suggested in your post.

I have little automation experience, but I have enough to know that you need to spend some coin planning it properly. Also, I'm a fan of using automation to assist with testing rather than having it actually perform the testing (checking).

Good post, dude.

April 2, 2012 - 9:55pm
Alex Kell

Excellent post, and a good encapsulation of the ATDD method (http://testobsessed.com/blog/2008/12/08/acceptance-test-driven-developme...), AKA BDD (http://dannorth.net/introducing-bdd/), or Executable Specifications (http://specificationbyexample.com/key_ideas.html).

It's especially important to note that some methods are not as appropriate, or are perhaps inapplicable, in certain contexts. By the same token, it's just as easy to want to cling to our existing methods. Sometimes, to improve, we have to make big changes, not just incremental ones.

The key aspect of a method like this, and why it works, is the shared understanding it brings to the different roles on the team. Without that interaction, as you say, you have people working in isolation: analysts writing requirements, programmers programming, and testers testing, each with their own interpretation of what the team is supposed to be producing.

Thanks!

April 2, 2012 - 10:41pm
Timothy Western

Matt,

I can't agree more. It's time that teams start having this conversation: what can be automated, what makes sense to automate, and what can provide value without being a total money sink. I think a lot of PMs still see automation as some mystical thing to pursue. Oh, we'll have the testers write hundreds of scripts, then hand them off to the automation guy, and everything is hunky-dory; that's why we bought this tool, right? Of course, the thought that the tool isn't one-size-fits-all, and may not work for all test cases, doesn't even come up in initial discussions. It needs to.

I think there is value to be had in a middle layer of test automation. We need to leave the testing of the elements humans interact with to humans, or at least more of it than we do now, because no machine can tell us how usable a product may be. Only humans can make that determination.

April 3, 2012 - 6:55am
Eric Jacobson

The “blended approach” (i.e., automated checking plus exploratory testing) appears to be the current common practice for all the seasoned testers I know or follow, including you, Matt. Hopefully that’s because said seasoned testers actually perform the blended approach, and not because they think it’s the right approach to publicly associate themselves with.

What I really love hearing are the people who admit to not using the blended approach. And since, by most ET definitions, it’s nearly impossible to do a purely automated checking approach, “non-blended” would mean “all human testing.” I suspect most seasoned testers are too embarrassed to admit to an “all human” testing approach. It just sounds like they gave up or something, right? I’m so grateful James Bach popularized terms like “exploratory testing,” “session-based test management,” and “sapient” testing, because they gave us human testers a way to sound more intelligent whilst still trapped in this fancy-pants IT world.

What I find interesting about your post, Matt, is your frustration with getting hung up on sound bites. You’re dead on. We just don’t have the patience to grok complex Feature examples. Like most other examples, your Fahrenheit-to-Celsius conversion Feature is so ridiculously simple that we shouldn’t even begin discussing test strategy. In my experience, there are no simple Features left. Everything simple already exists, right? Why would we ever want to program something simple that already exists?

But yes, let’s move the discussion forward and suggest a new test strategy. Let’s keep our minds open to everything. “No Testing” should be on the table. How about AI that performs ET? How about the user writes their own program? How about we kill a family member of the tester if a user is not happy with the product? How about the tester becomes a brand and their picture ships with each product they test; “Tested by Matt Heusser”. …Okay, now I’m getting stupid.

As always, thanks for jump starting my brain, Matt.

April 4, 2012 - 7:32pm

About the author

Matt Heusser

The Managing Consultant at Excelon Development, Matt Heusser is probably best known for his writing. In addition to currently serving as managing editor of Stickyminds.com, Matt was the lead editor for "How To Reduce The Cost Of Software Testing" (Taylor and Francis, 2011). He has served both as a board member for the Association for Software Testing and as a part-time instructor in Information Systems for Calvin College.

