Buying a GUI (graphical user interface) test automation tool is a daunting task. If you're evaluating tools for the first time, it's hard to know what to look for in a tool. Even if you've evaluated GUI testing tools before, the tools available may have changed significantly since the last time you looked around. Which do you choose? Do you really need all the features touted in each vendor's marketing literature? You know that you don't want to succumb to the slickest sales pitch. You aren't sure what features you'll need six months from now. So you're torn between buying a high-end tool that might be overkill for your purposes and buying a low-end tool just to get started with something.
Your first step is to establish the decision criteria you will use in evaluating tools. Some criteria may be obvious: you want to buy from a reputable vendor, the tool you choose needs to support the operating system(s) on which you test, and you know how much ease of use matters to your organization. This article isn't intended to tell you about the features that you already know you need. Instead, we'll talk about the GUI test automation tool features that you'll discover you need a few months after your first purchase. Consider it a "heads up" of things to come.
To start, consider a high-level diagram of an automated test system. If you look at test development as a simple matter of creating tests that exercise a GUI-based software application, then your model of test automation looks something like Figure 1.
Your tests will resemble this diagram when you use record and playback exclusively. But this model has limitations. Since the tests work directly with the user interface (UI), almost any change in the UI means a change in every test that uses that part of the UI. In addition, if there are common actions that most of the tests must perform (logging in, for example), then every test must include those steps. Finally, since all the test data is embedded in the tests, you have to edit the test code to change even little things like the name to use on a login form.
As a result, routine maintenance is difficult and major changes for localization or UI overhauls are a nightmare. It is not uncommon for test systems that look like this to fall apart completely within a single release. In other words, the tests might work for 1.0, but you'll need to recreate them for 2.0.
To address each of these shortcomings, let's add a few more elements. Each will be explained in more detail later. First, add an abstract layer between the software under test and the test scripts. The abstract layer maps UI elements to logical names that the tests will use. Next, add a reusable library of functions to encapsulate common actions. Finally, add test data files to hold data that would otherwise be hard-coded into the scripts. Now the model looks like the one in Figure 2.
Even if you don't plan to use all the elements in this diagram, you'll want to find a tool that can support them all. You'll need these features sooner than you think.
Why? While you may create some tests that are quick and dirty and designed to be disposable, your automated test effort is unlikely to pay off unless the majority of your tests are:
You will only achieve that goal if you treat test automation as seriously as software development. Test automation really is programming. So a good GUI test automation tool will have many of the same features as a good development environment.
"Oh, sure," you might be thinking. "I'll start programming my tests in my copious free time." You probably barely have enough time to finish your current tasks. Automation is supposed to make your life easier, not add a whole new programming task. Unfortunately, if you don't treat test automation as a programming task, you'll end up redoing it and redoing it and redoing it. Worse, if a last-minute change breaks the tests at the end of the project, then the automated tests won't run, just when you need them the most. Even if you don't think you'll have time to follow good development practices on most tests, buy a tool that supports them. Consider it an insurance policy.
So how can you be sure that you've identified a tool that will enable you to architect a system and implement it using good programming practices? Let's look at twelve features that are important in any good tool.
A scripting language
A prerequisite to all the other features described in this article is that the tool must have a scripting language of some kind that contains the usual programmatic constructs. At the very least, it should:
You get an added advantage if the tool uses a common language like Visual Basic or C: it's easier to find books and training courses on the language, and many people in your organization may already know it.
The more powerful the language, the more control you potentially have. Sophisticated scripting languages enable you to create more sophisticated scripts. Of course, having a sophisticated language also makes it possible to create automated tests that are more complex than the software being tested. So look for a language that gives you the power and flexibility that you need, and design your tests to use the sophisticated features judiciously.
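As a small illustration of "judicious" use of programming constructs, the sketch below loops over boundary values for a single input field. The `run_boundary_checks` function and the validation rule are hypothetical stand-ins; a real script would drive the UI with your tool's own commands.

```python
# A minimal sketch of using ordinary programming constructs (a loop, a
# conditional, a function) in a test script. All names are hypothetical.

def run_boundary_checks(values, is_valid):
    """Loop over test values and record a pass/fail verdict for each."""
    results = {}
    for value in values:
        # A real script would type `value` into the UI here; this sketch
        # applies the validation rule directly.
        results[value] = "pass" if is_valid(value) else "fail"
    return results

# Example: a field that accepts names of 1-20 characters.
verdicts = run_boundary_checks(
    ["a", "x" * 20, "", "x" * 21],
    is_valid=lambda s: 1 <= len(s) <= 20,
)
```

The point is not the validation rule itself but that a loop plus a data list replaces four near-identical recorded scripts.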
UI element identifiers
In order to write test scripts that actually test something, you'll want to make sure that the test tool can identify the elements of your UI as objects rather than locating them by screen coordinates.
If you're testing a Windows application and your developers are using MFC (Microsoft Foundation Class library) controls, this isn't a problem for most of the available test tools. However, if your application is written using Java Swing controls (a.k.a. JFC, or Java Foundation Classes), some tools will work better than others. During your evaluation, make sure that the tool can identify the UI elements in a variety of representative windows.
It is true that some UI elements aren't really controls at all, just bitmaps that do something when you click on them. Software that uses UI elements that are bitmaps rather than real controls won't behave well with any automated testing tool. If that's the case for your software, involve your developers in the tool evaluation process so they can see firsthand why it's important to use standard controls to improve the testability of the software.
Reusable libraries
Imagine that you're testing an application that allows you to search for records in the database. Many of the product's features work only when there is a set of search results available, so most of the tests include the sequence of steps necessary to perform a search. Now imagine that the sequence of steps changes slightly: you need to update every script.
The alternative is to create a function or subroutine that performs the search. That function becomes part of a reusable library. Each script calls the function rather than redefining the steps. You'll make all your scripts more maintainable if you define a sequence of events in one place, the function library, rather than in every script that needs to perform those actions.
There are two important things to look for in a tool that supports reusable libraries. First, make sure that any script you create with the tool can easily call the functions you put in the library. It isn't sufficient if the tool only allows you to call subroutines created within the current script. Second, make sure that the functions can take parameters. For example, if you create a login function, you'll want to specify the user name and password at the time that you call the function (rather than embedding that information in the function itself).
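Both requirements, callable from any script and parameterized, can be sketched in a few lines. The `ui` actions below are hypothetical stand-ins for your tool's typing and clicking commands; the `RecordingUI` stub exists only to make the sketch self-contained.

```python
# Sketch of a reusable library module. The field and button names are
# hypothetical; substitute your tool's API for RecordingUI.

def login(ui, user_name, password):
    """Log in with credentials passed as parameters, not hard-coded."""
    ui.type_into("Name", user_name)
    ui.type_into("Password", password)
    ui.click("OK")

def search(ui, query):
    """Perform the search sequence that most tests share."""
    ui.type_into("SearchField", query)
    ui.click("Search")

class RecordingUI:
    """Stub that records actions so the sketch runs without a real GUI."""
    def __init__(self):
        self.actions = []
    def type_into(self, field, text):
        self.actions.append(("type", field, text))
    def click(self, control):
        self.actions.append(("click", control))

# Any script can now call the shared functions with its own data.
ui = RecordingUI()
login(ui, "tester1", "s3cret")
search(ui, "overdue invoices")
```

If the search steps change, only `search` changes; every calling script stays untouched.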
Access to external libraries
In addition to creating your own libraries, you'll often find it useful to access outside libraries. In Windows, this means that you want to be able to call into .dll files. As an example, consider a client/server system built to work with a relational database. The software under test uses the database's proprietary API (Application Program Interface). If the automated tests can use the same API, they can be more powerful. They can make checks the user interface doesn't allow. For example, they can check that a changed value was actually written to the database, not just changed on the screen. They can check whether a transaction was correctly and completely logged, even if the UI gives no access to the log. In general, these tests can determine "pass" or "fail" more accurately than by checking the value through the user interface.
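A back-door database check might look like the sketch below. Here the standard sqlite3 module stands in for whatever proprietary API your database provides, and the table and column names are invented for illustration.

```python
import sqlite3

# Sketch of a back-door check: after the UI reports a successful edit,
# query the database directly to confirm the value was really written.
# sqlite3 and the `customers` table stand in for your real database API.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Old Name')")

# ... the automated test would drive the UI to rename customer 1 here ...
conn.execute("UPDATE customers SET name = 'New Name' WHERE id = 1")

def value_in_database(conn, customer_id):
    """Read the value straight from the database, bypassing the UI."""
    row = conn.execute(
        "SELECT name FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    return row[0] if row else None

stored = value_in_database(conn, 1)
```

A screen-level check could pass even if the save silently failed; the direct query cannot be fooled that way.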
If you're testing on a Windows system, you'll also want access to the Windows API. The Windows API enables you to get system information that would be difficult or impossible to obtain in any other way. For example, it's very useful to be able to get or set the value of a registry key from within your automated scripts.
An abstract layer
An "abstract layer" enables you to define logical names for physical user interface elements. Some tools call this a "test map" or "GUI map" while others call it a "test frame." Regardless of the name, the purpose of the abstract layer is to make it easier to maintain your tests.
As an example, imagine a login dialog box with fields for name and password. Within the program, the programmer named those fields "Name" and "Password." You create an abstract layer that also identifies the fields as "Name" and "Password" and proceed to use those identifiers in all 500 of your scripts. But with the next version of the software under test, the underlying identifiers of the name and password fields become "username" and "pword." Instead of changing all 500 of your scripts, you change the UI identifiers in one place: the abstract layer.
Several test tools offer features, such as window recorders, specifically designed to support the creation of an abstract layer. These features are very useful, but not absolutely necessary if you're willing to program the abstract layer manually.
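At its simplest, an abstract layer is just a lookup table. The sketch below shows one hand-built version; the logical and physical names are hypothetical.

```python
# Sketch of a hand-built abstract layer: tests use logical names, and
# only this map knows the physical control identifiers (all invented).

UI_MAP = {
    "login.name": "username",   # was "Name" in version 1.0
    "login.password": "pword",  # was "Password" in version 1.0
    "login.ok": "btnOK",
}

def physical_id(logical_name):
    """Translate a logical name into the control's real identifier."""
    return UI_MAP[logical_name]

# Scripts refer only to logical names, so a renamed control means one
# edit to UI_MAP instead of edits to 500 scripts.
target = physical_id("login.name")
```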
Distributed test capability
If you are testing multi-user software, you need to be able to create tests that involve multiple simulated users. For example, you might want to create a test in which one user on one machine locks a file while another user on another machine tries to open the same file. How do you automate this sort of test? It's very difficult if the test tool you choose doesn't have distributed test capability.
In a distributed test, the automated testing tool enables you to specify the machine on which to execute a given command. This is a little different from the ability to launch a test on a remote machine (also a good feature). In launching a test on a remote machine, the remote machine executes that test from beginning to end. But if you need to coordinate the activity on two different machines, then you want to do more than launch a test and let it run. You need to be able to create a test that waits for an action (such as locking a file) to be complete on the first machine before beginning an action (such as attempting to open the file) on the second machine.
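The essential handshake, "wait until machine 1 holds the lock before machine 2 tries to open the file," can be sketched with two threads and an event. This only illustrates the coordination pattern; a real distributed test tool would run each part on a separate machine, and the UI actions here are placeholder comments.

```python
import threading

# Sketch of coordinating two "machines" (threads here) so the second
# user's action waits for the first user's action to complete.

file_locked = threading.Event()
outcome = {}

def user_one():
    # ... drive machine 1's UI to lock the shared file ...
    file_locked.set()  # signal that the lock is now in place

def user_two():
    file_locked.wait(timeout=5)  # don't try to open until the lock exists
    # ... drive machine 2's UI to open the file; expect a "locked" error ...
    outcome["open_attempt"] = "file is locked"

t1 = threading.Thread(target=user_one)
t2 = threading.Thread(target=user_two)
t2.start()  # started first, but blocks until user_one signals
t1.start()
t1.join()
t2.join()
```

Without the event, the two sides would race and the test would pass or fail depending on timing rather than on the software's behavior.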
File I/O
File I/O (input/output) means that the tool provides functions that enable you to open a file on the hard disk (usually an ASCII file) programmatically, read from it, write to it, and close it.
File I/O functions are central to "data-driven test automation." In a data-driven automated test, the script uses test data from a file to drive the test activity (note the role of "Test Data" in the test automation architecture of Figure 2). Data-driven testing makes it possible to automate a large number of tests with a minimal amount of test automation code.
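A data-driven login test might look like the sketch below. The CSV would normally live on disk; an in-memory copy keeps the sketch self-contained, and the field names, credentials, and `attempt_login` stand-in are all invented.

```python
import csv
import io

# Sketch of a data-driven test: one small script, many rows of data.

TEST_DATA = """user_name,password,expected
tester1,s3cret,success
tester1,wrong,failure
,s3cret,failure
"""

def attempt_login(user_name, password):
    """Stand-in for driving the real login dialog and reading the result."""
    ok = (user_name == "tester1" and password == "s3cret")
    return "success" if ok else "failure"

# In practice: csv.DictReader(open("login_tests.csv"))
results = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = attempt_login(row["user_name"], row["password"])
    results.append(actual == row["expected"])
```

Adding a new test case is now a matter of adding a row to the data file; no code changes at all.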
If you are testing on a Windows system, it's particularly useful if the tool provides functions for handling Windows .ini files. For example, if the software under test needs to know which server to use, then it's a good idea to specify the server name in an .ini file. Then you can change the test server without having to change the automated scripts.
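Reading environment details from an .ini file can be sketched with Python's standard configparser, standing in for the tool's own .ini functions. The section name, keys, and server names are hypothetical.

```python
import configparser
import io

# Sketch of pulling the test server name from an .ini file so the
# environment can change without editing any scripts.

INI_TEXT = """[environment]
server = testserver02
database = qa_sandbox
"""

config = configparser.ConfigParser()
config.read_string(INI_TEXT)  # in practice: config.read("test.ini")

server = config["environment"]["server"]
```

Pointing the whole suite at a different server is then a one-line change to the .ini file.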
Error handling
Last night, before you left for the evening, you started a long automated test run of 250 tests. This morning when you came in, you discovered that the tests ran for exactly five minutes before dying horribly on the second test because an unexpected dialog appeared. This scenario is frustrating and not at all uncommon.
Tools that have a good error-handling system make it possible for other scripts to execute even after one script fails. The tool can stop the failed script, then reset the software to its initial state before starting the next script.
It's particularly useful if the error-handling capability of the tool is customizable. For example, perhaps your product has known error conditions that require a certain amount of cleanup to fix. Your automated tests will be even more robust if you can extend the error handling system so that it recognizes these errors and performs the required cleanup.
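The recovery pattern looks like the sketch below: trap the failure, reset the application, and move on. The `reset_application` hook and the sample scripts are hypothetical; a customizable tool lets you extend that hook with product-specific cleanup.

```python
# Sketch of an error-handling wrapper: a failing script is stopped,
# the application is reset, and the run continues with the next script.

def reset_application():
    """Hypothetical hook: return the software under test to a base state."""
    # ... dismiss stray dialogs, log out, relaunch if necessary ...
    pass

def run_suite(scripts):
    results = {}
    for name, script in scripts:
        try:
            script()
            results[name] = "pass"
        except Exception as error:
            results[name] = f"fail: {error}"
            reset_application()  # recover so the next script can run
    return results

def good_test():
    pass

def bad_test():
    raise RuntimeError("unexpected dialog appeared")

results = run_suite([("test_1", good_test),
                     ("test_2", bad_test),
                     ("test_3", good_test)])
```

With this wrapper in place, the overnight run of 250 tests loses one result to the unexpected dialog instead of 249.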
A debugger
There is nothing more frustrating than the feeling that "It should work, darn it!" You finished your test and ran it on your machine successfully. Now you try to run the test on someone else's machine, and it doesn't work. Having a decent debugger enables you to find the problem much more quickly than a trial-and-error approach.
The debugger is built into the test script development environment. Debuggers generally enable you to step through your script line by line, set "break points" (a place where the debugger will stop executing the script and wait for further instructions), and inspect the currently defined variables and their values. It's preferable if the debugger enables you to put a break point on any executable line, whether it's in the script under test or in the supporting code (in the reusable libraries, for example).
Source control
Source control is a fundamental tool for any kind of software development, and test automation is no different. In general, source control systems allow you to check files into and out of a master repository, roll back to previous versions, find differences between versions, and track several projects simultaneously. These features make it possible for multiple people to work on multiple versions of source code files.
Rather than looking for a test tool that includes source control features, it's actually best if you use the same source control system that the software developers use. The practical advantage to using the same source control system is that you can take advantage of the fact that there is already an established way of working. There's also a psychological advantage to using the same system: others in your organization see that test automation is "real programming."
Even if you are currently the only person automating tests in your group, you'll still want to make sure that all the parts of the test system you build, from test data files and test scripts to the abstract layer, can go in source control. Fortunately, integration with source control is straightforward: if the test automation files are saved as ASCII, you will be able to use all the features of your source control system. Test tools that store any part of the automated tests in a binary format interfere with your ability to use source control with your tests. You can still put the binary files into source control, but you won't be able to compare one version of a file to another to determine what changed. (If you aren't sure, you can tell whether the files are ASCII by opening them in a text editor such as Notepad in Windows. If you see just text characters and you can make some sense out of them, the file is ASCII. If instead you see smiley faces, hearts, blocks, or other strange characters, the file is probably binary.)
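The Notepad check can also be done programmatically. The sketch below uses a crude heuristic, not a definitive test: NUL bytes or a low proportion of printable characters suggest a binary file that your source control system won't be able to diff.

```python
# Sketch of guessing whether a file's bytes are text (diffable in
# source control) or binary. A rough heuristic, assumed good enough
# for screening test-tool output files.

def looks_like_text(data, sample_size=1024):
    """Return True if a sample of the bytes appears to be plain text."""
    sample = data[:sample_size]
    if b"\x00" in sample:  # NUL bytes almost always mean binary
        return False
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in sample)
    return not sample or printable / len(sample) > 0.95

# Invented examples: a readable script vs. a compiled blob.
script_bytes = b'ui.click("OK")\nui.verify("Welcome")\n'
compiled_bytes = b"\x00\x01\x7f\xfe binary blob \x00"
```

Run against a real file with `looks_like_text(open(path, "rb").read())`.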
In addition, if the test tool requires that you place all the files in a central location and dictates the file structure, you will need to experiment (preferably during the evaluation period) to determine the best way to use source control with the centralized file location.
Command line script execution
The ability to run scripts from the command line makes it easier to set up tests that reboot the machine and restart the tests automatically after the machine comes back up. It also makes it possible to automatically kick off automated tests after each build.
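A build script can kick off such a command-line run with a few lines. In this sketch the "runner" is just the Python interpreter printing a message; a real build script would invoke your tool's command-line runner with the suite name as an argument.

```python
import subprocess
import sys

# Sketch of launching a test run from a build script via the command
# line. The command below is a harmless stand-in for a real invocation
# such as: ["testtool", "/run", "nightly_suite"] (names invented).

command = [sys.executable, "-c", "print('suite started')"]
completed = subprocess.run(command, capture_output=True, text=True)

# The exit code tells the build whether the launch itself succeeded.
launched_ok = (completed.returncode == 0)
```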
The user community
The final feature to look for can't be found in the software box: look for a tool that has an established user community. Discussion groups, users' web sites, and local user groups are all great places to learn about the ins and outs of your new tool. Members of the user community often share libraries of common functions or other useful bits of source code; this can be a huge help in developing your own internal reusable libraries.
BUY A LITTLE BEFORE YOU BUY A LOT
For most organizations, the cost of switching tools is prohibitively expensive. It isn't just a matter of buying new software. There is also the question of whether to recreate existing tests in a new tool or continue to pay the software maintenance fees for the existing tool. Buying one or two licenses for a limited pilot can be a good way to try out a tool without a lot of risk. If the pilot works well, then buy more licenses and jump in with both feet. If the pilot doesn't work well, at least you don't have dozens of licenses and potentially hundreds of thousands of dollars going to waste.
This also means that if you currently have a tool that is working perfectly well but doesn't have all the features in this article, don't run right out and buy a new tool. Switching tools may be more expensive than you think.
Automated testing tools are especially difficult to evaluate because most of the test tool vendors stress ease-of-use features rather than programming features when they're trying to make the sale. You don't discover that the tool doesn't work with your source control system or that the scripts are difficult to maintain until six months down the road. Finding the tool that has the right balance of the features you need now, features you'll need in the future, and the bottom-line cost that meets your budget is certainly a challenge.
One of the challenges lies in separating the reality from the marketing collateral. In your evaluation, make sure to spend time investigating the more advanced features of the tool hands-on. In other words, don't rely on bulleted-feature-list comparison shopping to determine the best tool for your needs. Get your hands on the product and use it to automate real tests.
The important thing to remember when buying new tools is that no matter how easy the vendors like to make test automation look, it really is programming. Choose your tools accordingly. You'll know it was worth it when you update your test suite for the latest release and discover that you have a fully functional test suite more quickly than you ever thought possible.
Editor's note: This article gives some useful general-purpose requirements. Hendrickson's article "Evaluating Tools" (Volume 1, Issue 1, page 38) provides information on how to approach the tool purchasing process, with an emphasis on product requirements and tool evaluation. Eileen Strider's article "Packaged-Software Indigestion" (Volume 1, Issue 2, page 48) focuses on evaluating the vendor rather than (just) the product. Together, these three articles provide a good broad picture to help you through this process.