Evaluating Test Automation Tools for Government


After being involved in limited, informal evaluations of test automation tools for various companies, I recently had an opportunity to do the same more formally for a Canadian government client, which I refer to here as "the Agency." This article will share that experience.

One of the Agency's business departments wanted to augment its manual testing with automated testing. Since they did not have anyone in-house with the expertise to do that, they brought in an outside consultant to evaluate the situation and make a recommendation.

The evaluation was performed in the following sequence of steps, which turned out to be very useful:

  • requirements development for test automation
  • criteria development based on these requirements
  • preliminary evaluation with three major vendors
  • analysis of the results of the preliminary evaluation
  • detailed evaluation with two vendors
  • analysis of the results of the detailed evaluation
  • prototype development using the tools that won the evaluation
  • analysis of the results of the prototype
  • possible deployment of the tools

During the course of the evaluation, certain realizations stood out as key considerations in any tool evaluation in a government setting. They are discussed below.

Categories of Test Automation Tools
After interviewing some of the key players, the first thing that became apparent was that test automation was a nebulous concept in the organization. People confused automated function testing with load testing. The people in the business department, who were doing the manual testing, wanted function testing automated. The people in the IT group, who had a system-level perspective, wanted load testing. Everyone wanted one tool to do both, and one of the first challenges I faced was convincing them that no single tool could.

Cost Differences between Function and Load Test Tools
There is a huge disparity in cost between function testing and load testing tools. While typical function testing tools range between $5,000 and $9,000 (Canadian dollars), load-testing tools typically start at $25,000 and can easily cost $60,000 or more.

Current Canadian federal government purchasing rules mandate that any tool costing more than $25,000 be purchased through competitive bidding. The departments balked at this because of the sheer amount of work involved in bidding and the consequent delays; competitive bidding typically adds at least six months to the whole process. In addition, nontechnical criteria that have no bearing on the effectiveness of the tools get introduced into the process.

Fundamental Difference between Function and Load Test Tools
There is a fundamental difference between function test tools and load test tools. Function test tools work at the GUI level while load test tools work at the protocol level. So, for a function test tool, the big question is: "Does it recognize all of the objects in the various application windows properly?" Object recognition for function test tools is never 100 percent. If object recognition is less than 50 percent, your test automation people will be forced to perform so many workarounds that it will defeat the objective of test automation.
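To make the object-recognition question concrete, here is a minimal, tool-agnostic sketch in Python. The windows, object names, and recognition results are hypothetical, and a real function test tool expresses all of this through its own object repository; the point is simply how a recognition rate below the 50 percent mark signals that workarounds will dominate the automation effort.

```python
# Hypothetical sketch: measure how much of the application a function test tool
# actually "sees." Window and object names are illustrative only.

# Objects the manual testers need to drive in each application window.
required_objects = {
    "LoginWindow": ["user_id", "password", "ok_button"],
    "ClaimEntry": ["claim_number", "claimant_name", "amount", "submit_button"],
    "SearchResults": ["results_grid", "export_button"],
}

# Objects the candidate tool recognized during a trial recording session.
recognized_objects = {
    "LoginWindow": ["user_id", "password", "ok_button"],
    "ClaimEntry": ["claim_number"],   # custom controls not recognized
    "SearchResults": [],              # third-party grid not recognized at all
}

def recognition_rate(required, recognized):
    total = sum(len(objs) for objs in required.values())
    hits = sum(
        len(set(objs) & set(recognized.get(window, [])))
        for window, objs in required.items()
    )
    return hits / total if total else 0.0

rate = recognition_rate(required_objects, recognized_objects)
print(f"Object recognition: {rate:.0%}")
if rate < 0.5:
    print("Expect heavy workarounds; automation may not be worth pursuing.")
```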

For load testing tools, this question is irrelevant. The big question here is: "Does it recognize the client-server communication protocols properly?" For example, if your multitier client/server application uses IIOP (a CORBA protocol called Internet Inter-ORB Protocol), you'd better ask whether the load test tool can handle this protocol. Even if this protocol is listed as supported in the tool specifications, verify it in your environment.
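The contrast is easier to see in a sketch. The Python fragment below generates load by talking to the server directly, with no GUI in the loop; plain HTTP stands in for whatever protocol the application tiers actually speak (IIOP, in the Agency's case), and the URL and user counts are hypothetical placeholders rather than anything from the actual evaluation.

```python
# Minimal sketch of protocol-level load generation using only the standard library.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://app.example.gov/claims/search?claimant=Smith"  # hypothetical
VIRTUAL_USERS = 25
REQUESTS_PER_USER = 10

def virtual_user(user_id):
    """Hit the server directly at the protocol level -- no GUI, no object recognition."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return user_id, sum(timings) / len(timings)

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    for user_id, avg in pool.map(virtual_user, range(VIRTUAL_USERS)):
        print(f"user {user_id:2d}: average response time {avg:.3f}s")
```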

There Is a Business/IT Divide
Evaluations tend to be skewed in favor of macro-level criteria if the consultant has an IT background. Those with actual automated test script development backgrounds tend to focus on micro-level criteria such as object recognition and language syntax. The IT group at the Agency put great emphasis on factors such as a detailed financial analysis of the vendors' income statements while caring not at all about object recognition; they actually ridiculed some of the evaluation criteria that related to it. Both sides of this divide make valid points, but not paying adequate attention to implementation details will come back to haunt you when it is time for rollout.

Vendor Tactics Need Active Management
When you call the vendor for evaluation copies, one of the first things they will ask is the number of potential seats or licenses involved. If that number is high, they will be very interested. Also, if load testing is involved, they are all invariably interested because of the potential for a big sale. Competition among vendors needs to be actively managed and monitored. In fact, one of the vendor representatives in our situation employed anticompetitive practices (such as trying to scuttle the entire evaluation phase).

Number of Tools under Evaluation Needs to Be Limited
One of the challenges in any evaluation is limiting the number of tools involved. One solution is to do it in two stages. The first stage can involve a large number of tools: perform a limited evaluation based on high-level criteria such as the presence of user discussion lists, cost, and vendor financial strength, and use the results to select a small number of tools for the more detailed second-stage evaluation. In the case of the Agency, evaluations were done separately for function test tools and load test tools.
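The mechanics of the first stage can be as simple as a weighted scoring matrix. In the sketch below, the tools, criteria, weights, and scores are all hypothetical; what matters is that the screening relies only on information you can gather without installing anything.

```python
# Hypothetical first-stage screening matrix for narrowing a long list of tools.
weights = {
    "user_community": 0.2,    # active discussion lists, local user groups
    "licence_cost": 0.4,      # fit within the $25,000 bidding threshold
    "vendor_strength": 0.2,   # financial viability of the vendor
    "platform_support": 0.2,  # claimed support for the application environment
}

# Scores on a 1-5 scale, taken from vendor literature and public sources only.
candidates = {
    "Tool A": {"user_community": 5, "licence_cost": 3, "vendor_strength": 4, "platform_support": 4},
    "Tool B": {"user_community": 3, "licence_cost": 5, "vendor_strength": 3, "platform_support": 4},
    "Tool C": {"user_community": 2, "licence_cost": 4, "vendor_strength": 2, "platform_support": 3},
}

def weighted_score(scores):
    return sum(weights[criterion] * value for criterion, value in scores.items())

ranked = sorted(candidates.items(), key=lambda item: weighted_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")

shortlist = [name for name, _ in ranked[:2]]  # carry the top two into stage two
print("Shortlist for detailed evaluation:", shortlist)
```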

Proof of Concept Is Very Important
Vendors invariably claim to support your particular environment. The only way to confirm such claims is to build a prototype. Select five of the most important test scenarios for the application and develop scripts for them. Amazing lessons can be learned from this exercise.
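A proof of concept does not need elaborate machinery. The sketch below is a hypothetical harness that runs a handful of key scenarios and records which ones could actually be scripted; in practice, each stub would be a recorded and hand-edited script in the tool under evaluation.

```python
# Hypothetical proof-of-concept harness: run key scenarios and record the outcome.

def login_and_open_claim():
    # A lesson learned during scripting, surfaced here as an exception.
    raise NotImplementedError("custom grid control not recognized by the tool")

def enter_new_claim():
    pass  # scripted successfully in the trial

def search_and_export():
    pass  # scripted successfully in the trial

scenarios = {
    "Log in and open an existing claim": login_and_open_claim,
    "Enter a new claim": enter_new_claim,
    "Search claims and export results": search_and_export,
}

results = {}
for name, scenario in scenarios.items():
    try:
        scenario()
        results[name] = "automatable"
    except NotImplementedError as lesson:
        results[name] = f"blocked: {lesson}"

for name, outcome in results.items():
    print(f"{name}: {outcome}")
```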

Existence of an Organization for Test Automation Is a Key Determinant
Many Canadian government departments lack a software testing organization. They are big on quality assurance, but weak in testing and quality control. The Agency fit this pattern. It had no dedicated testing organization at all. If there is no one dedicated to test automation, the long-term prospects for test automation in that organization are very dim. As a consultant, you might as well recommend not going forward with it.

At the Agency, the decision was to not go forward. The proof of concept clearly demonstrated the technical feasibility of test automation for the application, but the lack of a testing organization to support function test automation scuttled its prospects. Even though the ultimate decision was not to pursue automation, the experience did shed light on the organization's strengths and weaknesses in application testing, and the evaluation saved them from buying an expensive tool that would never have been used.
