A decade's worth of test automation history reveals that not much has changed. Experts still dwell on how much to automate and how to estimate different types of ROIs. Test automation's growth is stunted because it's not revered as a discipline different from manual software testing. In this week's column, Dion Johnson urges us to correct the situation so that test automation can develop into a more lucrative opportunity.
1997--Cem Kaner's "Improving the Maintainability of Automated Test Suites" white paper:
"When GUI-level regression automation is developed in Release N of the software, most of the benefits are realized during the testing and development of Release N+1."
1999--Bret Pettichord's "Seven Steps to Test Automation Success" white paper:
"We need to run test automation projects just as we do our other software development projects."
1999--Mark Fewster and Dorothy Graham's Software Test Automation book:
"If no thought is given to maintenance when tests are automated, updating an entire automated test suite can cost as much, if not more, than the cost of performing all the tests manually."
2001--Dion Johnson's "Designing an Automated Web Test Environment" white paper:
"With many, getting an automation tool is like a kid getting a new toy--they jump right in and start playing. And the resulting test suite, much like a child's new toy, just doesn't last."
2008--An unnamed test lead's automation request:
"I know that the application is still changing, but do you think you can start now and automate most of the tests so that they may be used during the acceptance testing in two days?"
Are you kidding me!? More than ten years have passed, and we are still at a point where this type of request can be made with a straight face? Why has the industry as a whole not outgrown this? Why do we continue to ask the same questions we were asking more than a decade ago? For its part, the IT industry, and many of us within it, have worked to raise the level of automation discourse through the introduction of new techniques, training, and publications. Somehow, this still has not translated to the broader segment of the industry's population.
We are still preoccupied with questions such as:
- Is record and playback an effective automation approach?
- Is 100 percent automation possible?
- How do I calculate return on investment (ROI)?
- How early can test automation begin?
- Can test automation replace manual testing?
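Take the ROI question as an example of one that already has a reasonably well-accepted starting answer. A common, simplified cost-based formulation (a sketch only; the variable names and dollar figures below are illustrative assumptions, not figures from this column, and it deliberately ignores the quality and risk benefits discussed later) compares cumulative manual execution cost against the cost of building, running, and maintaining the automation:

```python
def automation_roi(manual_cost_per_run: float,
                   automated_cost_per_run: float,
                   development_cost: float,
                   maintenance_cost_per_run: float,
                   runs: int) -> float:
    """Simple cost-based ROI: (savings - investment) / investment.

    A positive result means automation paid for itself over `runs`
    executions. Note what this leaves out: defect-detection value,
    faster feedback, and other quality benefits.
    """
    manual_total = manual_cost_per_run * runs
    automated_total = (development_cost
                       + (automated_cost_per_run + maintenance_cost_per_run) * runs)
    return (manual_total - automated_total) / automated_total

# Illustrative numbers: $500 per manual cycle vs. $50 per automated run,
# $4,000 to build the suite, $100 upkeep per run, across 20 test cycles.
roi = automation_roi(500, 50, 4000, 100, 20)
print(f"{roi:.2f}")  # about 0.43, i.e., a 43% return over 20 cycles
```

Run the same calculation with `runs=1` and the result goes negative: automation is an investment that only pays off across repeated executions, which is precisely why maintenance cost (the 1999 Fewster and Graham warning above) dominates the real-world answer.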
Don't get me wrong; there's nothing wrong with asking these and other questions, particularly if you are new to IT, software testing, or even test automation. The problem comes when these questions linger, seriously delaying the effective implementation of test automation and, even worse, leading many down the wrong path with regard to test automation. In addition, the preoccupation with questions that have been asked over and over for more than ten years--despite the fact that some relatively widely accepted answers are available--is one of the major symptoms of test automation's stunted growth. This stunted growth is also evident both in the fact that shelfware remains prevalent and in the missed opportunity to address more pressing concerns. Over the years, we have failed to come up with comprehensive solutions to several important automation issues, such as:
- Detailed Calculations for Framework Selection
- Detailed Calculations for Automated Test Development and Maintenance Times
- Making Risk-Based (Quality-Based) ROI Calculations More Acceptable
- Moving to a Fourth-Generation Automation Framework
- Devising a Good Answer for an Acceptable Percentage of Automated Tests
Detailed Calculations for Framework Selection
Below are the variables for a formula that I often use to help define the level of complexity an automated test framework should have:
- AF = Automation Framework Definition
- AN = Number of applications expected to be tested by your organization
- VN = Number of versions/releases that each application is expected to have