Automation Test Suites Are Not God!

Summary:
In today's age of tight deadlines and accelerating software delivery cycles, test automation is clearly beneficial for functional testing and critical to the success of large software development companies. But its many benefits have led to unrealistic expectations from managers and organizations. This article highlights the role of automation in an agile context and the irreplaceable importance of manual testing.

Working in an agile environment makes it essential to automate system testing so that tests can be rerun in each iteration. But in the nascent stages of some systems, the UI, product flow, or design itself changes with each iteration, making the automation scripts difficult to maintain. The role of automation in an agile context is the repetition of regression and redundant tasks; the actual testing happens at the hands of manual testers. The creativity, skills, experience, and analytical thought process of a human mind cannot be replaced by automated scripts, and this belief has to be ingrained in every organization's culture in order to achieve the best quality.

Talking about software testing today is incomplete without mentioning test automation. Automation has become an important part of testing tasks and is deemed critical to the success of any software development team, and rightly so, given benefits like speed, reliability, reduced redundancy, and complete regression cycles within tight deadlines.

But team managers and policymakers commonly perceive automation tools as the complete package for all testing activities, and they begin expecting the world of them. A common misconception is that test automation is the "silver bullet" for improving quality, and organizations start to believe that a one-time investment in an automation tool ends all other testing-related tasks and investments. Managers start expecting everything from their automation suites: 100 percent coverage, minimal run times, no maintenance, and quality delivered overnight. It's basically expecting godlike miracles to happen! Hence, there arises a need to educate teams on the actual purpose of automation and the importance of manual tests in this context.

Working in an agile environment makes it essential to automate system testing due to the bulk of regression tests required in every iteration. But what makes test automation hard in an agile context is agile's inherent nature of constant change. Because the system under test changes continuously, the automation scripts must be changed so often that they become a task in themselves instead of a benefit.
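As a hedged sketch of one way teams contain that maintenance burden (assuming a Python and Selenium UI suite; the page name, URL, and locators here are hypothetical examples, not from the article), the page object pattern confines UI churn to a single class, so an iteration's UI change means editing one file rather than every script:

```python
# A minimal page-object sketch with Selenium (pip install selenium).
# LoginPage, its URL, and its locators are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Wraps the login screen so locator changes touch only this class."""

    URL = "https://example.com/login"  # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)

    def log_in(self, user, password):
        # When the UI changes in the next iteration, only these locators move.
        self.driver.find_element(By.ID, "username").send_keys(user)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()


def test_valid_login():
    driver = webdriver.Chrome()
    try:
        page = LoginPage(driver)
        page.open()
        page.log_in("demo-user", "demo-pass")
        assert "dashboard" in driver.current_url  # hypothetical landing page
    finally:
        driver.quit()
```

The abstraction does not remove the maintenance cost; it concentrates it in one place, which is often the most that automation design can offer against constant change.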

As tester James Bach wrote, Test Automation Rule #1 is "A good manual test cannot be automated." According to this thought, it is certainly possible to create a powerful and useful automated test, one that tells you where to look and where to direct your manual exploration. But the maximum benefit thereafter comes from applying experience and exploration techniques.

This is based on the fact that humans can notice, analyze, and observe things that computers cannot. Even unskilled testers, amateur minds, or people with no knowledge of the requirements or specifications of the system under test can observe and find things that no tool ever will.

In a true sense, automation is not actually testing; it is merely the repetition of tasks and tests that were performed earlier and are required only as part of regression cycles. What makes automation powerful are the reports and metrics that come with it.

But the actual testing still happens at the hands of a real tester, who applies creativity, skills, experience, and analysis to find and report bugs in the system under test. Once those tests pass, they are converted into automated suites for the next iteration, and so on.
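A minimal sketch of that conversion, assuming a Python and pytest regression suite; `apply_discount` and its discount rule are hypothetical stand-ins for behavior a tester first verified by hand:

```python
# A manual check promoted into the automated regression suite (pip install pytest).
# apply_discount is a hypothetical stand-in for the system under test.
import pytest


def apply_discount(total, code):
    """Hypothetical production function whose behavior a tester verified manually."""
    if code == "SAVE10":
        return round(total * 0.9, 2)
    raise ValueError("unknown discount code")


@pytest.mark.regression
def test_discount_applied():
    # Originally explored and verified by hand; now rerun every iteration.
    assert apply_discount(100.00, "SAVE10") == 90.00


@pytest.mark.regression
def test_unknown_code_rejected():
    # A bug a tester found once becomes a permanent regression check.
    with pytest.raises(ValueError):
        apply_discount(100.00, "BOGUS")
```

Running `pytest -m regression --junitxml=report.xml` (with the `regression` marker registered in pytest.ini) reruns these checks each iteration and emits the kind of report that gives automation its value as a metric.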

User Comments

2 comments
Doug Shelton

Nishi:

You make a very good point, but what I would like to have seen in this article (or perhaps you could include in further articles on this topic) is more guidance on exactly what **extent** or **degree** of automated testing should occur in sprints, and at what point(s) in the sprints and in the release.

You did provide one example (and examples are GREAT guidelines), but I'd like to see more detailed information along these lines, unless your proposal is simply that one should **only** do "happy path" automation and not run the full regression suite until <When? End of sprint? Last sprints of the release?>. Do you see by that last question what I mean in terms of asserting that you've left this topic "hanging"?

Of course, beware the detractors who will say you're proposing the creation of mini-waterfalls, particularly if you leave "full" regression test automation until the last sprints in a release (and I'd hope you would **not** recommend that, for reasons I shouldn't have to go into). But if you don't, there isn't any "perfect approach": many products run into later-sprint dependencies, so **some** tests will have to be refactored nonetheless as dependencies (and further knowledge) "reshape" what earlier code was supposed to do, and that earlier sprint's code tests as well.

June 19, 2014 - 10:16pm
Kevin Dunne

Doug, 

I can see where you are frustrated by not being provided a golden rule of automation coverage, but the extent or degree of automated testing coverage needed in a sprint depends on many variables. I think some (of many more) things you would want to consider are:

- Level of unit testing completed: How well is the application tested by the developer, and how much do you trust that developer to do testing? At the CAST show last week, Trish Khoo (http://trishkhoo.com/) talked about agile groups she worked in at Google where all of the testing was performed by the developers and there were no dedicated testing resources.

- Maturity of the application: While we can't assume that more mature applications will never have features break, we know the probability is likely lower.

- Development "nimbleness": If something breaks, how quickly can we fix it? Do we do builds frequently? For example, at my company many of our customers are in the cloud, so we can deploy a hot fix if something goes wrong. However, if this happens with one of our on-premise clients, the path to resolving the issue is much more difficult. Therefore, much more automated testing is needed to confirm the stability of the on-premise build.

- Risk of defects found in production: For someone like Facebook, a bug found in production has less impact; for the most part, people do not rely on Facebook to perform critical activities. However, for someone making software that controls surgical instruments, the same cannot be said.

There are many, many more contributing factors; I would be interested to hear what others think are important factors to weigh when making this decision! But at the end of the day, I think agile will require that you drop the waterfall dogma of boilerplate test plans, test case formats, etc., and think at a higher level about what the mission of your testing truly is.

Kevin

August 20, 2014 - 11:38am

About the author

Nishi Grover

Nishi is a full-time software tester, a zealous and passionate one at that! In her spare time she likes reading and sharing knowledge about software testing, agile, and various aspects of software testing processes and practices. She is CP-MAT and ISTQB certified, loves challenges at work, and is excited about her newfound passion for writing about new topics of interest.

http://www.linkedin.com/profile/view?id=30519961
