Evaluating Test Automation Tools for Government


to focus on the micro-level criteria like object recognition, language syntax, etc. The IT group at the Agency put great emphasis on factors such as a detailed financial analysis of the vendors' income statements, while paying no attention at all to object recognition issues; they actually ridiculed some of the evaluation criteria related to object recognition. While both sides of this divide make valid points, failing to pay adequate attention to implementation details will come back to haunt you when it is time for rollout.

Vendor Tactics Need Active Management
When you call vendors for evaluation copies, one of the first things they will ask is the number of potential seats or licenses involved. If that number is high, they will be very interested. Likewise, if load testing is involved, they are invariably interested because of the potential for a big sale. Competition among vendors needs to be actively managed and monitored; in fact, one of the vendor representatives in our situation resorted to anticompetitive practices, such as trying to scuttle the entire evaluation phase.

Number of Tools under Evaluation Needs to Be Limited
One of the challenges in any evaluation is limiting the number of tools involved. A practical solution is to evaluate in two stages. The first stage can cover a large number of tools: perform a limited evaluation based on high-level criteria such as the presence of user discussion lists, cost, and vendor financial strength, then use the results to select a small number of tools for the more detailed second-stage evaluation (a sketch of this screening follows). In the case of the Agency, evaluations were done separately for functional test tools and load test tools.
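As a minimal sketch of what such a stage-one screening might look like in practice, the snippet below scores candidate tools against weighted high-level criteria and keeps a short list for the second stage. The criteria names, weights, scores, and tool names are hypothetical, not the Agency's actual numbers.

```python
# Stage-one screening sketch: weight each high-level criterion, score
# each candidate tool (1-5), and shortlist the top performers for the
# detailed stage-two evaluation. All values here are hypothetical.

WEIGHTS = {"user_community": 0.3, "cost": 0.4, "vendor_financials": 0.3}

candidates = {
    "Tool A": {"user_community": 4, "cost": 3, "vendor_financials": 5},
    "Tool B": {"user_community": 2, "cost": 5, "vendor_financials": 3},
    "Tool C": {"user_community": 5, "cost": 2, "vendor_financials": 4},
    "Tool D": {"user_community": 3, "cost": 3, "vendor_financials": 2},
}

def weighted_score(scores: dict) -> float:
    """Combine 1-5 criterion scores into a single weighted total."""
    return sum(WEIGHTS[criterion] * s for criterion, s in scores.items())

# Rank all tools and keep the top two for the stage-two deep dive.
ranked = sorted(candidates, key=lambda t: weighted_score(candidates[t]), reverse=True)
shortlist = ranked[:2]
print("Stage-two shortlist:", shortlist)
```

The value of writing the screening down this explicitly is that the weights force the team to agree up front on how much each criterion matters, before any vendor demo can sway the discussion.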

Proof of Concept Is Very Important
Vendors will invariably claim to support your particular environment. The only way to confirm that claim is to prototype: select five of the application's most important test scenarios and develop automated scripts for them (an illustrative sketch follows). The lessons learned from this exercise can be eye-opening.
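The article does not name the tools or scenarios involved, but as a purely illustrative sketch, here is what scripting one such scenario might look like using an open-source browser-automation library such as Selenium (a stand-in for whatever tool is under evaluation, not one the Agency used). The URL and element IDs are hypothetical; the point of the exercise is to confirm that the tool can recognize the application's objects and drive a complete scenario end to end.

```python
# Proof-of-concept sketch: automate one high-priority scenario (a login
# flow) end to end. URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.gov/login")  # hypothetical URL

    # Object recognition check: can the tool locate the app's controls?
    driver.find_element(By.ID, "username").send_keys("poc_user")
    driver.find_element(By.ID, "password").send_keys("poc_password")
    driver.find_element(By.ID, "submit").click()

    # Verify the scenario's expected outcome.
    heading = driver.find_element(By.TAG_NAME, "h1").text
    assert "Welcome" in heading, f"Unexpected landing page: {heading}"
    print("Scenario passed")
finally:
    driver.quit()
```

If the tool cannot reliably locate the application's controls in even this simple flow, that is exactly the kind of implementation detail that no vendor datasheet will reveal.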

Existence of an Organization for Test Automation Is a Key Determinant
Many Canadian government departments lack a software testing organization: they are big on quality assurance but weak in testing and quality control. The Agency fit this pattern, with no dedicated testing organization at all. If no one is dedicated to test automation, its long-term prospects in that organization are very dim; as a consultant, you might as well recommend not going forward with it.

At the Agency, the decision was not to go forward. The proof of concept clearly demonstrated the technical feasibility of test automation for the application, but the lack of a testing organization to support functional test automation scuttled its prospects. Even though the ultimate decision was not to pursue automation, the experience shed light on the organization's strengths and weaknesses with regard to application testing, and the evaluation saved it from buying an expensive tool that would never have been used.

About the author

Jayan Kandathil

Jayan Kandathil is a consultant based in Ottawa, Canada. He has spent the last seven years in various capacities as manual tester, systems engineer, test automation specialist, test team lead, QA manager, and test tools consultant in both the United States and Canada. He is a Certified Software Quality Engineer (CSQE) through the American Society for Quality. Email him at jayan.kandathil@eastmansoftware.com.
