Orders of Magnitude in Test Automation

Summary:

Mike Kelly explains a heuristic approach to help ensure your testing effort is roughly in line, by orders of magnitude, across the various types of automation. It’s not a method for measuring effectiveness; instead, it’s simply a “smell” that tells you when you might need to take a little extra time to make sure you’re focusing your automated testing efforts at the right level.

Many teams focus on automation at the level that’s easiest to automate given the team makeup and the readily available tools. For some teams, that means lots of automated unit tests. For others, it can mean large suites of GUI-level automation tests. Developers tend to favor automation similar to their regular daily work, and testers tend to favor tools more closely resembling their normal test routines. Or, said another way, when the team looks to implement automation, they identify the closest hammer and start swinging.

It’s this well-intentioned but ad hoc approach to automation that often leads to gaps in coverage and inefficient implementation. Some tests are more expensive to create and maintain than they need to be, and others are weaker than they could be, because the team ends up executing those tests at the wrong level in the application.

The following is a heuristic approach to help ensure your testing is roughly in line, by orders of magnitude, across the various types of automation. It’s not a method for measuring effectiveness; instead, it’s simply a “smell” that tells you when you might need to take a little extra time to make sure you’re focusing your automated testing efforts at the right level.

The Thirty-second Summary of Test Automation
Leveraging automated unit tests is a core practice for most agile teams. For some teams, this is part of a broader test-driven development practice; for others, it’s how they deal with the rapid pace and the need for regular refactoring. These tests often focus on class- and method-level concerns and exist primarily to support development; they are not necessarily intended as extensive testing to find issues.
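
To make that concrete, here is a minimal sketch of what such a test might look like on a Rails project using RSpec. The Invoice class and its total method are hypothetical, defined inline so the example runs on its own when executed with the rspec command:

    require "rspec"

    # Hypothetical domain class, defined inline so the sketch runs on its own.
    class Invoice
      def initialize(line_items)
        @line_items = line_items
      end

      # The method-level concern under test: summing line-item amounts.
      def total
        @line_items.sum { |item| item[:amount] }
      end
    end

    RSpec.describe Invoice do
      it "totals its line items" do
        invoice = Invoice.new([{ amount: 10 }, { amount: 5 }])
        expect(invoice.total).to eq(15)
      end

      it "returns zero when it has no line items" do
        expect(Invoice.new([]).total).to eq(0)
      end
    end

Note how narrow the focus is: one class, one method, no database and no user interface. That narrowness is what makes these tests fast enough to run on every change.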

In addition to unit tests, many teams also build out a healthy automated customer acceptance test practice. Oftentimes, these tests are tied to the acceptance criteria of the stories being implemented in a given iteration. They sometimes look like unit tests (they don’t often exercise the user interface), but they tend to center more on the business aspect than unit tests do. Automated acceptance tests are often created not only by developers but also by actual customers (or customer representatives) or by people in the more traditional testing role.
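
As a sketch, suppose a story carries the acceptance criterion “orders over fifty dollars ship free.” A Cucumber feature file states that in plain language, and Ruby step definitions like the ones below connect it to the application. Everything here (Cart, Checkout, the dollar amounts) is invented for illustration, and the expect syntax assumes the rspec-expectations gem, which Cucumber loads automatically when it’s available:

    # features/step_definitions/checkout_steps.rb
    #
    # The matching plain-language feature file might read:
    #
    #   Scenario: Free shipping over fifty dollars
    #     Given a cart totaling $60.00
    #     When I proceed to checkout
    #     Then shipping is free

    # Minimal stand-ins for application code, so the sketch is self-contained;
    # in a real project these would come from the application itself.
    Cart = Struct.new(:total, keyword_init: true)

    class Checkout
      FREE_SHIPPING_THRESHOLD = 50.00
      STANDARD_SHIPPING = 4.99

      def initialize(cart)
        @cart = cart
      end

      def shipping_cost
        @cart.total >= FREE_SHIPPING_THRESHOLD ? 0 : STANDARD_SHIPPING
      end
    end

    Given(/^a cart totaling \$(\d+\.\d{2})$/) do |total|
      @cart = Cart.new(total: total.to_f)
    end

    When("I proceed to checkout") do
      @checkout = Checkout.new(@cart)
    end

    Then("shipping is free") do
      expect(@checkout.shipping_cost).to eq(0)
    end

The plain-language scenario is what lets customers and testers contribute: they can write and review the Given/When/Then steps without touching the Ruby underneath.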

As the product being developed grows over time, teams frequently develop other automated tests, which can take the form of more traditional GUI-level test automation. These scripts tend to span multiple areas of the product and focus more on scenarios and less on specific stories. In many cases, developers and testers create and maintain these tests.
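
A GUI-level script on that same stack might use the selenium-webdriver gem to drive a browser through a scenario that crosses several features, such as logging in, searching, and starting a purchase. The URL, element IDs, and page flow below are all hypothetical:

    require "selenium-webdriver"

    driver = Selenium::WebDriver.for :chrome
    begin
      driver.navigate.to "https://staging.example.com/login"

      # Log in: one area of the product.
      driver.find_element(id: "email").send_keys("tester@example.com")
      driver.find_element(id: "password").send_keys("secret")
      driver.find_element(id: "sign-in").click

      # Search: a second area, reachable only after a successful login.
      wait = Selenium::WebDriver::Wait.new(timeout: 10)
      wait.until { driver.find_element(id: "search-box").displayed? }
      driver.find_element(id: "search-box").send_keys("blue widget", :return)

      # Start a purchase: a third area, completing the cross-feature scenario.
      first_result = wait.until { driver.find_element(css: ".search-result a") }
      first_result.click
      raise "Add to Cart button missing" unless
        driver.find_element(id: "add-to-cart").displayed?
    ensure
      driver.quit
    end

Because a script like this touches login, search, and checkout in one pass, a change to any of those pages can break it; that breadth is both its value and its maintenance cost.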

A fourth type of automation emerges over time: a set of scripts for production monitoring. These can be really simple pings to the production environment to make sure the application is up and responding in a reasonable amount of time, or they can be more in-depth tests that ensure core functionality or third-party services are working as expected.
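
At its simplest, a production-monitoring check is just a timed request against the live site. The sketch below assumes a hypothetical health endpoint and a two-second response budget; dedicated tools such as New Relic or OpenNMS do far more, but this is the core idea:

    require "net/http"
    require "uri"

    # Hypothetical endpoint and response-time budget.
    HEALTH_URL = URI("https://www.example.com/health")
    BUDGET_SECONDS = 2.0

    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    response = Net::HTTP.get_response(HEALTH_URL)
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started

    if response.is_a?(Net::HTTPSuccess) && elapsed <= BUDGET_SECONDS
      puts "OK: #{response.code} in #{elapsed.round(2)}s"
    else
      # In a real setup this branch would page someone or post to an alert channel.
      warn "ALERT: status #{response.code}, took #{elapsed.round(2)}s"
      exit 1
    end

Run from a scheduler every few minutes, even a script this small catches outright outages; the more in-depth checks on core functionality or third-party services build on the same pattern.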

For example, if I were working on a Rails project, I might expect to see RSpec for unit tests, Cucumber for customer acceptance testing, Selenium for GUI-level automation, and perhaps New Relic for production monitoring. In a Java environment, I might look for JUnit, Concordion, HP Quality Center, and OpenNMS. The specific tools aren’t as important as the intended focus of the automation.

Identifying Where You May Be Too Light or Too Heavy on Coverage
For most teams, there’s rarely a conscious effort to balance the automation effort across these different focus areas. In my experience, when teams grow past five people, it’s hard to even get everyone together long enough to discuss big-picture test questions like automated-test coverage. When the team gets together, there are often more pressing topics to cover.
