Test Automation in an Agile Environment

Five Common Mistakes and Their Solutions

Many of us keep asking: If the benefits of automated testing are so vast, why does test automation fail so often? Artem Nahornyy addresses this common dilemma.

The dynamically changing IT industry brings new objectives and new perspectives to automated testing in areas that have emerged over the past decade, such as cloud-based and SaaS applications, e-commerce, and so on. The past five years have seen immense growth in the number of agile and Scrum projects. The IT market has also changed significantly, not only with various new tools (e.g., Selenium 2, Watir WebDriver, BrowserMob, and Robot Framework) but also with fundamentally different approaches. For example, more focus is now placed on cloud-based test automation solutions for both performance testing and functional testing. Cloud-based testing of web applications is replacing "classic," local deployments of testing tools.

Even though automated testing offers a vast number of benefits, test automation can often fail. Common mistakes include selecting the wrong tool, using the tool incorrectly, or choosing the wrong time to create tests. It is also worth paying special attention to the test automation framework and to the proper division of work between the test automation and manual testing teams. The "Test Cases Selection" section of this article highlights many reasons why certain test cases should not be automated.

Let’s take a closer look at the five most common mistakes of test automation in agile and their possible solutions.

1. Wrong Tool Selection
Even though a popular tool may offer a commendably rich feature set at an affordable price, it can have hidden problems that are not obvious at first glance, such as insufficient support for the product and a lack of reliability. This occurs with both commercial and open source tools.

When selecting a commercial test automation tool for a specific project, it is not enough to consider only the tool's features and price; it is best to analyze feedback and recommendations from people who have successfully used the tool on real projects. When selecting an open source freeware tool, the first thing to consider is community support, because these tools are supported only by their community, not by a vendor. The chances of resolving issues that arise with the tool are much higher if the community is strong. Looking at the number of posts in forums and blogs throughout the web is a good way to assess the actual size of the community. A couple of good examples include stackoverflow.com, answers.launchpad.net, www.qaautomation.net, and many other test automation forums and blogs.
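One rough way to put a number on "community size" is to compare how many Stack Overflow questions carry each candidate tool's tag. The sketch below shows the idea, assuming a response in the shape of the public Stack Exchange `/tags/{tags}/info` endpoint; the helper name `rank_by_tag_count` and the counts in the sample payload are illustrative, not real figures.

```python
import json

def rank_by_tag_count(payload: str) -> list:
    """Rank candidate tools by how many questions carry their tag.

    `payload` is a JSON string shaped like a Stack Exchange
    /2.3/tags/{tags}/info?site=stackoverflow response; the question
    count per tag is a rough proxy for community size.
    """
    items = json.loads(payload)["items"]
    return sorted(((t["name"], t["count"]) for t in items),
                  key=lambda pair: pair[1], reverse=True)

# Sample payload in the API's shape (counts are made up for illustration).
sample = json.dumps({"items": [
    {"name": "watir", "count": 3000},
    {"name": "selenium", "count": 120000},
]})

print(rank_by_tag_count(sample))
# → [('selenium', 120000), ('watir', 3000)]
```

This is only a first filter, of course: raw post counts say nothing about how many of those questions were actually answered, which is closer to what "strong community" means in practice.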


User Comments

Dave Maschek's picture

Most Agile testing literature recommends that user stories be automated within the same iteration. I believe I've seen one post that actually bent to reality and suggested that automation can occur in the iteration following the user story's implementation. This article now states that some user stories should NOT be automated, or at least not automated until the application is close to its final form.

June 6, 2013 - 2:16pm
George Dinwiddie's picture

This article sounds like test automation in a waterfall shop, where testers are at the bottom of the heap, rather than at an Agile shop. See http://manage.techwell.com/articles/weekly/three-amigos for an alternative approach.

1. Wrong tool selection: Yes, but not for the reasons you state. Usually it's wrong because it can't be used by anyone, due to high license costs, and doesn't support writing the tests before implementation.

2. Starting at the wrong time: The mistake is starting too late, not too early. Testing starts while discussing the requirements.

3. Framework: No tasks have story points. Test frameworks, like application frameworks, should be "grown" to support current needs, not developed ahead of time.

4. Test Case Selection: No, not all tests need to be automated, but most orgs automate too few, delaying crucial feedback about the product.

5. Test Automation vs. Manual Automation: I'm not sure what "manual automation" is, but I see a problem: you're isolating your testers into multiple subteams. Put the entire development team together and they can work out what is best automated and what is best explored manually.

February 27, 2014 - 1:00pm
Violet Weed's picture

I'd like to write a longer comment, but why bother? You covered all the basics, just fine. :)

April 15, 2014 - 1:14pm
Madhava Verma Dantuluri's picture

Excellent article, it was very useful and you analyzed the utmost key factors.

February 27, 2014 - 9:36pm
Anubhav Bansal's picture

Interesting article with some fresh perspectives.


I wish to add a point to '4. Test Case Selection'

- To increase the value proposition of automation, start by automating use cases sorted by priority. This sometimes means a disconnect between test cases and automation scripts, and fixing the test cases to map to the scripts is the better choice here. The automation set should then grow organically from there, into the areas surrounding the automated cases. You can adopt this strategy at any time, i.e., even if you have a large number of legacy test cases.

Please share your comments on this.

February 28, 2014 - 12:06am
Jeremy Carey-Dressler's picture

I would like to see your definition of framework.

In my vocabulary, I have spent more than a year developing a framework, which includes: wrapping Selenium's functions, managing properties, handling JSON, HTTP GET/POST, HTTPS GET/POST, SSH, curl, PGP, PHP serialization, random numbers, random strings [Unicode, ASCII, alphanumeric, etc.], database queries, DB functions, DB stored procedures, XML parsing, custom lists, custom maps, and an array of other pieces. I don't see how this could conceivably be done in a single sprint, even assuming you could use off-the-shelf modules that all happened to work the first time. Even just designing the framework took multiple iterations and was not cleanly done in a 'sprint' like fashion. So maybe when you say "framework" you just mean getting a project set up with whatever tool you use to interact with the system under test?


February 28, 2014 - 12:26pm
Violet Weed's picture

AHA! I should have looked at the company the author worked for before I commented. Wouldn't use them come hell or high water. Yes I HAVE worked with them, that's how I know (better).

April 15, 2014 - 1:16pm
Satish  Taribalalu's picture

I have been in the practical situation of following agile and agree with the author's comments on selection of test cases to ensure adequate regression coverage and optimized cycle time for execution. If all test cases from the sprint cycle are automated, the automated tests (especially UI flows) grow so big that it takes too long to run one complete execution cycle for a change introduced late in the cycle. Typically, 20% of the workflows are used by the business 80% of the time, and they are the most critical; automation should target 100% coverage of those 20% frequently used, critical workflows, so they are not broken in a new release.

Most organizations do automation for show and put too many resources into targeting 100% coverage of all test cases, yet still have critical bugs slipping through to production. That is because they don't pay attention to what part of the product functionality is covered by the automation. Most of the time, in a mature product, the vast majority of the functionality is not used by many customers, or it is not critical functionality for the business; automating it and running it every sprint cycle is a waste of money, effort, and time for the organization. Instead, focus automation coverage on business-critical, frequently used functionality. In fact, no matter how much you cover in automation, there will be issues, because the business uses the product in many different ways; but as long as the issues found in production are not showstoppers, I think the business goals are met, exceeding expectations.

If you automate everything you test in a sprint for the first time, your execution cycle will take weeks and months, even if automated, as the product functionality grows.

Automating everything works for smaller products with one or two clients, not for enterprise products with a huge customer base - that's my 2 cents!

July 31, 2014 - 7:47am
