it is to have tests that fail because we don't have time to fix them or because of a known bug that will be fixed later.
The test system I describe in this article has all the attributes and features mentioned in the preceding paragraph. It was not our first test system. The one before it used a lot of "if/then/else" statements. At some point, we needed to add a couple more test scenarios. We could have added a few more "if/then/else" statements, but one of my coworkers proposed changing the system instead: each test gets its own test driver, a script that runs the test and decides whether it passed or failed (gave the expected or an unexpected result). This allows a lot of flexibility.
The test system consists of a set of main scripts that do general setup. General setup can be very simple or very complicated. It can include some or all of the following: checking the version of the software you test, reporting if any software or setting required for running the tests is missing, defining environment variables, determining where results will be stored, checking for available disk space, and storing the results. The main script can also have a GUI, which lets the tester choose which group of tests to run, how to report the results, what machines to use and so on.
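As a rough illustration, the setup phase of a main script might look like the sketch below. The tool under test (awk stands in for it here), the results path and the 100 MB threshold are all hypothetical placeholders, not details from the article:

```shell
#!/bin/sh
# Sketch of a main-script setup phase. The tool under test (awk here),
# the results path and the 100 MB threshold are placeholders.

TOOL="${TOOL:-awk}"
RESULTS_DIR="${RESULTS_DIR:-${TMPDIR:-/tmp}/test-results}"

# Report if required software is missing, and record its version.
if ! command -v "$TOOL" >/dev/null 2>&1; then
    echo "ERROR: $TOOL is not installed" >&2
    exit 1
fi
version=$("$TOOL" --version 2>/dev/null | head -n 1)
echo "Testing $TOOL, version: ${version:-unknown}"

# Check that enough disk space is available for storing results.
free_kb=$(df -Pk "${TMPDIR:-/tmp}" | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt 102400 ]; then          # 100 MB, in 1K blocks
    echo "ERROR: less than 100 MB free for results" >&2
    exit 1
fi

# Decide where results are stored and create the directory.
mkdir -p "$RESULTS_DIR"
echo "Results will be stored in $RESULTS_DIR"
```

Each check reports a clear error and stops early, so a tester knows immediately why a run could not start rather than discovering a half-finished run later.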
If this setup sounds too complicated, omit some features. You can also start by doing all the setup manually before running the tests and add automated setup later. For example, our test system originally didn't check the software version; we added that later. And we never added the check for available disk space, even though we knew it would be very useful.
After the initial setup, the main script calls the test drivers one after another. Each test has its own driver script, which does test-specific setup (e.g., copying files from storage to a working directory), runs the test, and determines whether the test passed or failed. Each test can have its own flow and its own pass/fail (expected/unexpected result) condition, so it is easy to have both positive and negative tests. If some tests temporarily don't work, it is easy to turn them off by renaming their driver scripts. Chances are, many of your test drivers are similar to one another, or you have only a few different kinds of test driver. To avoid maintaining many near-identical scripts, create a small collection of standard test drivers; each test's driver can then call one of them.
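A minimal test driver along these lines might look as follows. The test itself (summing two numbers with awk) and all file names are hypothetical stand-ins for whatever a real driver would set up and run:

```shell
#!/bin/sh
# Sketch of an individual test driver. The "test" (summing two numbers
# with awk) stands in for the real test; names are hypothetical.

run_test() {
    workdir=$(mktemp -d) || return 1

    # Test-specific setup: put input data into the working directory
    # (stands in for copying files from storage).
    printf '3 4\n' > "$workdir/input.txt"

    # Run the test and capture its result.
    actual=$(awk '{print $1 + $2}' "$workdir/input.txt")

    rm -rf "$workdir"

    # The driver itself decides what "pass" means:
    # compare the actual result against the expected one.
    [ "$actual" = "7" ]
}

if run_test; then
    echo "PASS"
else
    echo "FAIL"
fi
```

Because the driver, not the main script, encodes the pass/fail condition, a negative test is just a driver whose condition expects the failure.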
If you have a lot of tests, divide them into a few test suites (pools, groups), so that no single group takes too long to run.
The main script gets the pass/fail information from the test drivers, then collects, reports, and analyzes the results. This system lets you choose how to report test results.
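The main loop that ties the drivers together can be quite short. In this sketch the directory layout, file names and the two trivial drivers (created inline so the example is self-contained) are all hypothetical:

```shell
#!/bin/sh
# Sketch of the main script's driver loop. It creates two trivial
# drivers so the example is self-contained; names are hypothetical.

TESTS_DIR=$(mktemp -d)
REPORT="$TESTS_DIR/report.txt"

printf '#!/bin/sh\nexit 0\n' > "$TESTS_DIR/t_ok.sh"    # a passing test
printf '#!/bin/sh\nexit 1\n' > "$TESTS_DIR/t_bad.sh"   # a failing test

pass=0 fail=0
: > "$REPORT"
for driver in "$TESTS_DIR"/t_*.sh; do
    name=$(basename "$driver" .sh)
    # Each driver's exit status says whether its test passed or failed;
    # its output is kept in a per-test log file.
    if sh "$driver" > "$TESTS_DIR/$name.log" 2>&1; then
        echo "PASS $name" >> "$REPORT"; pass=$((pass + 1))
    else
        echo "FAIL $name" >> "$REPORT"; fail=$((fail + 1))
    fi
done
echo "passed: $pass, failed: $fail"
```

Note that renaming a driver (say, to t_bad.sh.off) removes it from the glob pattern, which is exactly the "turn a test off by renaming its driver" trick described above.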
You can keep a single general report, a separate report for each test, or a general report plus the reports of the failed tests.
You should choose what to include in your general report. Before deciding how much to keep, ask yourself what you need to know to reproduce a failed test with minimum effort. How much history do you want to save? How much disk space do you have? Can you dump test results to CD or tape after you are done with the test cycle?
Main scripts can also handle timeouts. A single test shouldn't hang the whole suite. There should be a