For automated testing, expected results are generated using a test oracle. Here is a look at how heuristic oracles can strike a balance between exhaustive comparison and no comparison at all.
heuristic: adj. 1. serving to point out or solve problems by experimenting, evaluating possible answers or solutions, or by trial-and-error. 2. Computers, Mathematics: pertaining to a method of problem solving employing approximations, used when an algorithmic approach is impractical.
oracle: n. 1. (esp. in ancient Greece) any authoritative or wise pronouncement, especially in response to an inquiry. 2. the agency, medium or god giving such responses. 3. a place at which such responses were given. 4. an alternative program or mechanism used for generating expected results.
Capture and comparison of results is one key to successful software testing. For manual tests this often consists of viewing results to determine if they are anything like what we might expect. It is more complicated with automated tests, as each automated test case provides a set of inputs to the software under test (SUT) and compares the returned results against what is expected. Expected results are generated using a mechanism called a test oracle.
The term oracle may be used to mean several things in testing: the process of generating expected results, the expected results themselves, or the answer to whether or not the actual results are what we expected. In this article, the word oracle is used to mean an alternate program or mechanism used for generating expected results.
It is often impractical to exactly reproduce or compare accurate results, but it isn't necessary for an oracle to be perfect to be useful. Several categories of oracles are described in Table 1. In this article, I'll describe some ideas associated with what I call heuristic oracles.
A heuristic oracle provides exact results for a few inputs and uses simpler consistency checks (heuristics) for the rest. Regardless of the complexity of the SUT, known or easily-computed result values can be chosen for the exact comparisons. The heuristic oracle can usually be built into the test case or verifier to simplify testing. This approach can have substantial advantages. Furthermore, the same heuristic oracle or simple variations are often reusable across broad classes of software.
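To make the idea concrete, here is a minimal sketch of a heuristic oracle in Python. The SUT here is a hypothetical square-root routine, and the function and variable names (`check_with_heuristic_oracle`, `known`) are mine, not from the article: a handful of inputs with known answers are compared exactly, while all other inputs get only cheap consistency checks.

```python
def check_with_heuristic_oracle(sut, inputs):
    """Verify a square-root SUT: exact comparisons at a few known
    points, consistency heuristics (cheap checks) everywhere else."""
    failures = []

    # Exact comparisons: inputs whose correct results are known in advance.
    known = {0.0: 0.0, 1.0: 1.0, 4.0: 2.0, 9.0: 3.0}
    for x, expected in known.items():
        if sut(x) != expected:
            failures.append((x, "exact check failed"))

    # Heuristic checks: properties any correct result must satisfy,
    # applied to arbitrary inputs without knowing the exact answer.
    for x in inputs:
        y = sut(x)
        if y < 0:
            failures.append((x, "range check failed: negative result"))
        elif abs(y * y - x) > 1e-9 * max(x, 1.0):
            failures.append((x, "inverse check failed: y*y != x"))

    return failures
```

Because the heuristics only need the SUT's own answers plus a few fixed points, this oracle can live inside the test harness itself, and the same skeleton reuses easily: swap in different known values and different consistency properties for a different SUT.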
As a simple example of the idea, consider the sine function (see Figure 1). An implementation of sine could be tested against a separately implemented routine that uses a different computational algorithm. That separate routine is a True Oracle. Such an oracle is very flexible: it can be used with as many test inputs as you have time to generate, it can accept any inputs the SUT can, and it has a high likelihood of identifying errors.
Note that it won't necessarily find all errors because it might share some with the SUT. For example, the same hardware or operating system fault might affect both (such as the "Pentium bug"), or both might use the wrong units. In such cases, both the SUT and the oracle would produce the same wrong answer. Unfortunately, this independent oracle is expensive both to create and to use, often costing as much as or more than the SUT to develop and consuming equal or greater machine resources. It also has a high likelihood of containing its own errors, because its complexity often rivals that of the SUT.
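A True Oracle for sine might look like the following Python sketch: an independently implemented sine (here a Taylor-series expansion, a different algorithm than most library implementations) generates the expected result for every input, and the harness flags any disagreement beyond a tolerance. The names `sine_taylor` and `true_oracle_check` are illustrative, not from the article.

```python
import math

def sine_taylor(x, terms=20):
    """An independently implemented sine, used as a True Oracle.
    Computes sin(x) from its Taylor series after argument reduction."""
    x = math.fmod(x, 2 * math.pi)  # reduce the argument so the series converges fast
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Next term of the series: t_{n+1} = t_n * (-x^2) / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

def true_oracle_check(sut, inputs, tol=1e-9):
    """Run every input through both the SUT and the oracle;
    return the inputs on which they disagree."""
    return [x for x in inputs if abs(sut(x) - sine_taylor(x)) > tol]
```

Even in this toy form, the cost argument is visible: the oracle is a second nontrivial implementation that must itself be coded, debugged, and executed for every test input, and any error shared by both algorithms (wrong units, a shared platform fault) goes undetected.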
The other extreme is to have no oracle at all. I've reviewed automated tests that were proudly created and run, sending thousands or millions of test values to the SUT, and confirming nothing more than that the test does not crash the system or provide some other spectacular notice to the tester. That's not expensive, but it's also rarely useful and certainly tells us nothing about whether the answers from the SUT are correct.