Case Study in a Synthesis Compiler Test System

Uses of Test Automation Tools

Testing is an essential and time-consuming part of software development. Manual testing is often required, but many software groups try to minimize it. Test automation has advantages in most cases, especially when software is expected to have many releases. Automated testing is more reliable and repeatable, and it is less time-consuming, less tedious, and less expensive. Because of these advantages, companies are spending more money and employee time on test automation.

After a software group decides to automate tests, it should choose whether to buy a test tool or develop one in-house.

Our test system consists of a set of main scripts that do general setup and then call test drivers. Each test has its own driver script, which does test-specific setup, runs the test, and determines whether the test has passed or failed. The main script then gets information from the test drivers and collects, reports, and analyzes the results. Main scripts can also distribute tests between machines and can handle hung tests. This structure supports a large number of independent tests. Each test can have its own flow and pass/fail criteria. If some of the tests temporarily don't work, it is easy to turn them off. If many tests share the same flow and pass/fail criteria, a standard test driver can be called by the test-specific drivers to simplify development of the test suite.

I have used this test system for several years. It has demonstrated some significant advantages compared to other systems. This strategy has worked well for us and I believe it can work in other environments as well.

Buying a Tool
Buying a tool requires researching all the tools available on the market. It is important to have realistic expectations regarding these tools; many of them are unable to live up to a designer's expectations. "Record and playback" tools are typical examples of unrealized expectations. The formula of "just install it, play with the software, and the tool will record everything you do and you will have test cases" never works. Choosing the right tool and learning how to use it in your environment takes time.

Developing a Tool In-House
If a software group decides to develop a tool in-house, it needs to treat this development as a project: write a specification for it, budget time to develop it, document it, test it, let everybody in the group know how to use the tool, and encourage the developers (not only QA) to use it. This is difficult, considering that the test system is an internal project: it doesn't go to customers and therefore doesn't bring an immediate reward.

No matter how a company chooses to test software, it should be prepared to spend money on the testing. What follows is a description of a test system developed internally to test a synthesis compiler. It worked well for us and may be used for testing other software products.

Description of Test Tool
Suppose you have a number of independent tests. Some of the tests are positive, some negative. It is necessary to run them on each build of your product. You expect the product you are testing to have many more releases, so you expect to add more tests to cover new features. You want to develop a system that can work for you now and in the future, for the life of the tested product. Your automation system should be reliable, repeatable, maintainable, easy to use, and very flexible. It should allow you to add tests and new test scenarios with minimum effort and headache. It should also allow you to turn some tests off; we all know how annoying it is to have tests that fail because we don't have time to fix them or because of a known bug that will be fixed later.

The test system I describe in this article has all the attributes and features mentioned in the preceding paragraph. It was not our first test system. The one before it used a lot of "if/then/else if/else" statements. At some point, we needed to add a couple more test scenarios. We could have added a few more "if/then/else" statements, but one of my coworkers proposed changing the system so that each test has its own test driver, a script that runs the test and decides whether it passed or failed (gave the expected or unexpected result). This approach allows a lot of flexibility.
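As an illustration only, here is roughly what a single test driver could look like. Our drivers were ordinary scripts; the language (Python), the compiler command synth_compile, the file names, and the pass/fail rule below are all assumptions made up for this sketch, not our actual code.

    #!/usr/bin/env python3
    # Hypothetical driver for one negative test. The main script only looks at the
    # exit status: 0 means "expected result", anything else means "unexpected".
    import os
    import shutil
    import subprocess
    import sys

    def run() -> bool:
        # Test-specific setup: copy the input from storage to a working directory.
        os.makedirs("work", exist_ok=True)
        shutil.copy("storage/adder_overflow.v", "work/adder_overflow.v")

        # Run the tool under test (the command line is a placeholder, not a real
        # synthesis compiler invocation).
        result = subprocess.run(["synth_compile", "work/adder_overflow.v"],
                                capture_output=True, text=True)

        # Negative test: the compiler is expected to reject this input.
        return result.returncode != 0 and "error" in result.stderr.lower()

    if __name__ == "__main__":
        sys.exit(0 if run() else 1)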

The test system consists of a set of main scripts that do general setup. General setup can be very simple or very complicated. It can have some or all of the following features: check the version of the software you are testing, report if any software or setting required for running the tests is missing, define environment variables, determine where results will be stored, check for available disk space on the system running the tests, and store the results. The main script can also have a GUI that allows the tester to choose which group of tests to run, how to report the results, what machines to use, and so on.

If this setup sounds too complicated, omit some features. You can also start by doing all the setup manually before running the tests and add automated setup later. For example, originally our test system didn't check the software version; we added that later. And we never added checking for available disk space, even though we knew it would be very useful.
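To make the idea concrete, here is a minimal sketch of general setup in Python. The placeholder command synth_compile, the results directory layout, and the 5 GB disk-space threshold are all assumptions for the example, not details of our actual system.

    import os
    import shutil
    import subprocess
    from pathlib import Path

    def general_setup(results_root="results"):
        # Report missing prerequisites up front instead of failing mid-run.
        for tool in ("synth_compile", "diff"):
            if shutil.which(tool) is None:
                raise RuntimeError("required tool not found: " + tool)

        # Check the version of the software under test (placeholder command).
        version = subprocess.run(["synth_compile", "--version"],
                                 capture_output=True, text=True).stdout.strip()

        # Define environment variables the test drivers rely on.
        os.environ["SYNTH_TEST_MODE"] = "1"

        # Decide where results will be stored and check available disk space.
        results_dir = Path(results_root) / version.replace(" ", "_")
        results_dir.mkdir(parents=True, exist_ok=True)
        free_gb = shutil.disk_usage(results_dir).free / 2**30
        if free_gb < 5:
            raise RuntimeError("only %.1f GB free; need at least 5 GB" % free_gb)
        return results_dir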

After the original setup, the main script calls the test drivers one after another. Each test has its own driver script, which does test-specific setup (e.g., copying files from storage to a working directory), runs the test, and determines whether the test has passed or failed. Each test can have its own flow and its own pass/fail (expected/unexpected result) condition, so it is easy to have both positive and negative tests. If some of the tests temporarily don't work, it is easy to turn them off by renaming the test driver. Chances are, many of your test drivers are similar to one another, or you have only a few different kinds of test drivers. To avoid maintaining many near-identical scripts, create a collection of standard test drivers; each test-specific driver can then call one of these standard drivers.
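Here is a sketch of the "call the drivers one after another" loop and of one possible standard driver, again in Python with made-up names: the drivers/ directory, the test_*.py naming convention, and synth_compile are assumptions for the example.

    import subprocess
    from pathlib import Path

    def run_suite(driver_dir="drivers"):
        """Run every enabled test driver and record pass/fail."""
        results = {}
        # A test is "turned off" simply by renaming it (e.g. test_foo.py.off),
        # so it no longer matches the pattern below.
        for driver in sorted(Path(driver_dir).glob("test_*.py")):
            proc = subprocess.run(["python", str(driver)])
            results[driver.stem] = (proc.returncode == 0)   # 0 = expected result
        return results

    def standard_compile_and_diff(source, golden):
        """A 'standard' driver body that many test-specific drivers can call:
        compile one source file and compare the output against a golden file."""
        Path("work").mkdir(exist_ok=True)
        out = Path("work") / (Path(source).stem + ".net")
        if subprocess.run(["synth_compile", source, "-o", str(out)]).returncode != 0:
            return False
        return Path(golden).read_bytes() == out.read_bytes()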

If you have a lot of tests, divide the tests into a few test suites (pools, groups), so each group of tests doesn't take too long to run.

The main script gets information from the test drivers (pass or fail) and collects, reports, and analyzes the results. This system allows you to choose how to report test results.

You can keep only one general report, a report for each test, or a general report plus reports for the failed tests.

You should choose what to include in your general report. Before you can decide how and what you want to keep, you need to ask yourself what you need to know to reproduce the failed tests with minimum effort. How much history do you want to save? How much disk space do you have? Can you dump test results on CD or a tape after you are done with the test cycle?
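One way to answer those questions is to keep a single summary report plus full logs only for the failures. The sketch below assumes each driver writes a <test>.log file; that convention is invented for the example.

    from pathlib import Path

    def report(results, logs_dir, report_dir):
        """Write one general report; keep full logs only for failed tests."""
        logs_dir, report_dir = Path(logs_dir), Path(report_dir)
        report_dir.mkdir(parents=True, exist_ok=True)
        lines = []
        for name, passed in sorted(results.items()):
            lines.append(name + ": " + ("PASS" if passed else "FAIL"))
            log = logs_dir / (name + ".log")
            if not passed and log.exists():
                # Preserve everything needed to reproduce the failure.
                (report_dir / log.name).write_bytes(log.read_bytes())
        (report_dir / "summary.txt").write_text("\n".join(lines) + "\n")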

Main scripts can also handle timeouts. A hung test shouldn't hold up the whole suite. There should be a default timeout, but individual test drivers can override the default.
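A minimal way to get that behavior, sketched in Python; the ten-minute default and the per-test override table are invented for the example.

    import subprocess

    DEFAULT_TIMEOUT = 600                              # seconds
    TIMEOUT_OVERRIDES = {"test_big_design": 3600}      # hypothetical long-running test

    def run_driver(driver, name):
        """Run one driver; a hung test is killed and counted as a failure."""
        timeout = TIMEOUT_OVERRIDES.get(name, DEFAULT_TIMEOUT)
        try:
            proc = subprocess.run(["python", driver], timeout=timeout)
        except subprocess.TimeoutExpired:
            return False                               # the suite moves on to the next test
        return proc.returncode == 0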

Test distribution between different machines can be supported, too. If you have a test lab with more than one machine, you can either distribute tests manually, by dividing them into a few groups and running different test groups on different machines, or you can write scripts that distribute the tests for you. In this test system, you have one test server and a few test clients. The main script is started on the server, and a "client_ready" script is started on each client. The client_ready script does the original setup on the client computer and then takes "orders" from the server script. The server script sends a test driver to the client; the client runs the test driver and passes the pass/fail (expected/unexpected) result back to the server. The server script gets the information from the client, reports the results, and checks whether the client machine is available to start a new test.
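The sketch below shows one possible shape for that server/client handshake in Python over a plain socket. The port number and the line-based "order"/"verdict" protocol are assumptions for the example, not a description of our actual scripts.

    import queue
    import socket
    import subprocess
    import threading

    PORT = 5005   # arbitrary port, chosen for the example

    def client_ready(server_host):
        """Runs on each test client: do local setup, then take 'orders' from the
        server until it sends an empty line meaning 'no more tests'."""
        with socket.create_connection((server_host, PORT)) as conn:
            rfile = conn.makefile("r")
            while True:
                driver = rfile.readline().strip()
                if not driver:
                    return
                proc = subprocess.run(["python", driver])
                conn.sendall(b"pass\n" if proc.returncode == 0 else b"fail\n")

    def serve(drivers, num_clients):
        """Runs on the server: feed a driver to whichever client is free."""
        todo = queue.Queue()
        for d in drivers:
            todo.put(d)
        results = {}

        def feed(conn):
            rfile = conn.makefile("r")
            while True:
                try:
                    driver = todo.get_nowait()
                except queue.Empty:
                    conn.sendall(b"\n")                # tell the client we are done
                    return
                conn.sendall((driver + "\n").encode())
                results[driver] = (rfile.readline().strip() == "pass")

        with socket.create_server(("", PORT)) as srv:
            workers = []
            for _ in range(num_clients):
                conn, _addr = srv.accept()             # a client announces it is ready
                t = threading.Thread(target=feed, args=(conn,))
                t.start()
                workers.append((t, conn))
            for t, conn in workers:
                t.join()
                conn.close()
        return results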

If the software you are testing is supported on multiple platforms, your test system should run on multiple platforms as well. Our synthesis compiler, for example, ran on UNIX and on Windows 95, 98, NT, 2000, and XP.

Problems We Experienced Developing and Using Our Test System

  1. We didn't budget for the changes. We were just "sneaking the changes in." So when we hit release mode, the test system was not ready and was very unreliable. It began breaking in several places and it was a disaster. Management was very disappointed. It took us some additional time to clean things up.
  2. Only three people in our software group used the system. That was a huge mistake! More people could have become familiar with the test system if we had produced better documentation sooner. And the system should have been available to the developers so they could use it for their own testing.

The system I have described in this article is very flexible. I have used it for several years now, and (after it became stable) it has demonstrated significant advantages compared to other systems.

References
If you want to implement the system I described in this article, learn about two tools:

  1. Software Testing Automation Framework (STAF) at staf.sourceforge.net. STAF is an open source, multi-platform, multilanguage framework designed around the idea of reusable components.
  2. askMaster, developed by my former coworker Phil Tomson.

Acknowledgements to Phil Tomson for his important contribution to this work.
