Understanding Both Sides of the Test Tool Fence


Your company has decided to invest in test automation, and you have been asked to decide which tools to buy and which technologies match your organization's needs.

But how do you get beyond the marketing brochures? How do you tell if the tool is really what you need to purchase? For that matter, how do you tell whether the tool vendor really understands the testing being automated?

Looking at the fence separating the prospective user from the tool's developers is a good place to start.

Why Is There a Fence?
The first thing to understand is why a fence exists between the prospective user and the tool's developers. To begin with, there is a marked difference between test tool development and usage. At a fundamental level, this partition is necessitated by the inherent difference between defining a problem and generating an algorithm to solve it. The differences are more profound than that, of course, but this basic disparity illustrates the inevitability of a fence.

The fence also defines the relationship between the developer of the tool and the user of the tool, providing a balancing point between the needs of each.

For instance, tools developed in-house have very low fences because the customers and developers are essentially one team. Typically, such tools are highly specialized and have limited scopes of applicability. The specific definition of the problem allows developers to produce something explicitly targeted to the need. Because users of in-house tools commonly have identical needs, such tools may not require a high degree of flexibility. In some cases, when the user and the developer are the same person, there is no fence at all.

On the other hand, test automation tools that sell in high volumes ("shrinkwrap" tools) have tall fences and a clear dividing line between the tool's developers and users. There are many customers being served, with many diverse problems being addressed, so the tool's developers must be more circumspect in their approach to an algorithmic solution.

The foundation of a shrinkwrap tool's development is reliance on solid test theory and use of flexible algorithms to solve a range of issues within the problem domain. Such tools are generally designed to perform a common task, following accepted testing practices without incorporating many specialized techniques.

With such high-volume tools, developer-customer communication is often unidirectional, flowing from the developer to the customer. The primary communication medium is the formal documentation that is released with the tool. Customer support, providing clarifications and a modicum of customer-to-developer contact, offers a secondary (often inadequate) communications channel.

Low-volume commercial tools, more often customized according to the needs of the user, should have relatively low fences. Communication surrounding such tools is more bi-directional, with the tool's higher price putting the customer in the driver's seat. Because the user is buying a custom solution to their specific problem, developers must be more responsive to the customer's expressed needs.

Complexity's Role
The complexity of the problem being solved also influences the height of the fence. For instance, consider that test execution automation is of relatively low complexity. The issues being addressed with a test execution automation tool are

  1. Generating test results (called "actual results")
  2. Comparing the actual and expected results
  3. Producing a test report indicating the pass/fail status of the test case(s)

The user must generate and script the tests that the tool will execute.

As an example, let's look at the hypothetical AutoExec tool, which provides a graphical interface for test execution and report preparation. AutoExec allows the user to create a test driver program (using C) in which the test cases, the expected results, and the pass/fail criteria are all defined. The language-sensitive editor in AutoExec is fairly powerful, but it does not provide template test program files or automatically create an executive function invoking each test.

Clearly, the user is still required to perform the more strenuous mental exercises of test definition and scripting (creating the executive function), but AutoExec relieves the user of the burden of compilation, linking, execution, results comparison, and report generation. You can see how low complexity can allow for a higher fence because the task being automated is generally straightforward and generic. As the level of specificity in the task increases, so does the need for freer communication between vendor and customer.
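To make that division of labor concrete, below is a minimal sketch of the kind of hand-written driver described above: the user defines the test cases, expected results, and pass/fail criteria, and a tool like the hypothetical AutoExec would then take over compilation, execution, comparison, and reporting. Every name and structure in the sketch is an illustrative assumption, not part of any real tool's interface.

  /* Minimal sketch of a hand-written test driver of the kind a user
   * might feed to the hypothetical AutoExec tool. All names here are
   * illustrative assumptions, not any real tool's API. */
  #include <stdio.h>

  /* Stand-in for the code actually being verified. */
  static int add(int a, int b) { return a + b; }

  /* One test case: inputs, expected result, and a descriptive name. */
  struct test_case {
      const char *name;
      int a, b;
      int expected;
  };

  static const struct test_case cases[] = {
      { "adds small positives", 2, 3, 5 },
      { "handles zero",         0, 7, 7 },
      { "handles negatives",   -4, 1, -3 },
  };

  /* The user-written executive function: invoke each test, compare
   * actual to expected results, and report pass/fail status. */
  int main(void)
  {
      int failures = 0;
      size_t n = sizeof(cases) / sizeof(cases[0]);

      for (size_t i = 0; i < n; i++) {
          int actual = add(cases[i].a, cases[i].b);    /* actual result */
          int pass = (actual == cases[i].expected);    /* comparison    */
          printf("%-22s : %s\n", cases[i].name, pass ? "PASS" : "FAIL");
          if (!pass)
              failures++;
      }

      printf("%zu run, %d failed\n", n, failures);     /* test report   */
      return failures ? 1 : 0;
  }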

Test scripting tools, where the user supplies the test parameters used to formulate the test script, are of moderate complexity. In this case, the tool is responsible for creating the scripts that run the tests, in addition to performing the relevant test execution tasks. The user must still determine the test parameters, create the test cases, and feed them into the tool.

As an example, consider the hypothetical AutoScript companion to AutoExec. AutoScript provides a front end to feed test cases into AutoExec. Using AutoScript, the user need only provide a database of the variables, the input settings for each test case, and the expected results.

In return, AutoScript

  1. Imports the test cases and creates the test driver
  2. Validates the test cases
  3. Executes the test cases using AutoExec

AutoScript takes care of formatting and feeding tests into the automated execution tool, while the user retains responsibility for test case definition. Moderate complexity suggests a lower fence than low complexity does. In our hypothetical case, for example, the developer and customer must agree on how test cases are loaded into AutoScript.
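As a rough illustration of the agreement that loading mechanism requires, the sketch below shows one way a front end like the hypothetical AutoScript might import and validate a simple comma-separated test-case database before handing it to the execution tool. The file layout, field names, and validation rule are assumptions invented for this example.

  /* Hypothetical sketch of how a scripting front end like AutoScript
   * might import and validate user-supplied test cases. The record
   * layout and function names are assumptions for illustration only. */
  #include <stdio.h>

  /* One row of the user's test-case database:
   * two input settings plus the expected result. */
  struct case_row {
      int input_a;
      int input_b;
      int expected;
  };

  /* Reject rows whose inputs fall outside the agreed range; the range
   * itself is part of what developer and customer must agree on. */
  static int row_is_valid(const struct case_row *row)
  {
      return row->input_a >= -1000 && row->input_a <= 1000 &&
             row->input_b >= -1000 && row->input_b <= 1000;
  }

  int main(void)
  {
      /* Read rows in the simple form "a,b,expected", one per line. */
      struct case_row row;
      int line = 0, accepted = 0;

      while (scanf("%d,%d,%d", &row.input_a, &row.input_b, &row.expected) == 3) {
          line++;
          if (!row_is_valid(&row)) {
              fprintf(stderr, "line %d: rejected (input out of range)\n", line);
              continue;
          }
          /* A real front end would now format the accepted row into a
           * driver source file for the execution tool to build and run. */
          accepted++;
      }

      printf("%d test case(s) imported\n", accepted);
      return 0;
  }

The point of the sketch is not the code itself but the interface it implies: the row format and the validation rules are exactly the kind of details that force a conversation across the fence.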

Test definition tools that use few user-supplied parameters are of the highest complexity. Such tools utilize their native understanding of test theory to create appropriate test cases according to the design under examination. The user is responsible only for providing the design to the tool. Because the tasks performed by this type of tool are so complex, the fence between customers and developers is generally lower. The complexity and specificity of the tasks require the developer to work more closely with the customer to ensure that the methods of specifying the design and data dictionary are both adequate and understood.
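To see why the design and data dictionary formats demand such close agreement, consider a hedged sketch of the kind of derivation a test definition tool performs: from nothing more than a parameter's declared range, it can generate boundary-value test cases on its own. The data dictionary entry and the boundary-value heuristic below are assumptions chosen purely for illustration.

  /* Hedged sketch of the reasoning a test definition tool applies:
   * given only a parameter's valid range from the user's design or
   * data dictionary, derive boundary-value test cases automatically. */
  #include <stdio.h>

  /* A single parameter as it might appear in a data dictionary. */
  struct parameter {
      const char *name;
      int min;
      int max;
  };

  /* Emit classic boundary values: just below, at, and just above each
   * bound, plus a nominal mid-range value. */
  static void emit_boundary_cases(const struct parameter *p)
  {
      int values[] = { p->min - 1, p->min, p->min + 1,
                       (p->min + p->max) / 2,
                       p->max - 1, p->max, p->max + 1 };

      for (size_t i = 0; i < sizeof(values) / sizeof(values[0]); i++)
          printf("%s = %d\n", p->name, values[i]);
  }

  int main(void)
  {
      struct parameter speed = { "speed_kph", 0, 120 };
      emit_boundary_cases(&speed);
      return 0;
  }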

The bottom line: Higher tool complexity empowers a tool's user (a specific problem is addressed by a specific solution). Lower complexity empowers the developer (a general solution is preferable to a highly tailored one).

Looking Across the Fence
All fences have two sides. In the context of evaluating a test automation tool (and its vendor), the view from both sides must be examined.

From the customer's side, the view tends to be panoramic, encompassing many tools and vendors at once. Customers typically shop around for the best value, factoring in their own key questions and data points. The specific criteria applied vary in accordance with the nature of the tool and the degree to which the customer is able to shape it.

For high-volume tools, where the developer is empowered, user-customization needs are generally accommodated by customer development of support utilities that help fit the tool into the process and development environment. The tool itself is not likely to be malleable and there is probably not much direct communication between tool developer and user. The main power granted the users is the ability to "vote with their feet" during the tool evaluation and selection phase. Once a tool is purchased and deployed, it becomes more difficult to walk away, especially if the tool is an expensive investment such as a high-end load testing tool. High-volume tool users tend to value open architecture and ease of use.

On the developer's side, the higher fence of high-volume tools allows development to follow standardized testing practices because the tool is designed to solve an industry-wide problem and be amortized across many customers. Flexibility is of utmost priority in order to maximize the general usefulness of the tool without extensive customization. The developer therefore is able to determine the shape and function of the tool according to standard practices, rather than accounting for numerous specific variations desired by individual customers.

With low-volume tools, where the customer is empowered, the user's needs are explicitly expressed in a specification and set of acceptance criteria. The developer responds to these needs, creating the required capabilities in the tool. In the end, however, the customer pays for this empowerment with a higher price tag. Low-volume tool users typically place the highest value on accommodation of their specific situation.

Developers of low-volume tools enjoy greater latitude in using innovative methods to fulfill the customer's needs. In this arena, tools are created to fill a specific need, with generalization as a low-priority goal. Customization is the key to success.

View From the Other Side
Understanding the view from one's own side of the fence is important, but it is equally important to know the view from the opposite side. The grass may or may not be greener on the other side, but potential customers should try to understand the mindset of the tool's development staff by looking at several areas, including

  • Documentation: Should be adequate to explain the tool's usage and installation, as well as to illustrate the testing philosophy implied by the tool
  • Intuitive use: Given that the customer understands the testing being automated, the steps performed by both user and tool should seem natural and progressive

In order to understand the developer's point of view, a potential customer may wish to find out the background of the tool's development staff, including

  • Has the development staff personally performed the type of testing being automated?
  • Do the developers know and understand any applicable standards or government regulations?
  • Does the team understand the purpose of the testing that the tool will perform?
  • Can the development or support staff help educate a customer in any or all of these areas?

For a tool developer, understanding the customer is critical to success. With all tools, the documentation should address the areas covered by the questions above. For high-volume tools, this may be enough information to ensure that the tool adequately performs its assigned tasks.

In low-volume tools, though, more information is needed. There are several additional areas that the developer should examine regarding the potential customer, including

  • Does the customer appreciate the tradeoffs between flexibility and capability?
  • Can the customer adequately express their requirements?
  • Can the custom requirements be translated into an effective algorithm?

The developer's job, when the customer doesn't understand something, is to educate.

A potential user must peer over the fence to evaluate the methodology and flexibility built into the tool, plus the development team's knowledge about the type of testing being automated. These areas will help establish the potential value of the tool beyond the glossy-printed brochures used to market it.

The developer, on the other hand, must understand the industry being served, know the type of testing being automated, and understand how this type of testing fits into the customer's overall needs. The developer must also understand the limitations of the testing being automated (after all, no test is a "silver bullet").

Finally, both the customer and the developer must understand scalability, knowing that there is a point where enough testing has been performed. (Agreeing on where to draw that line is another issue entirely, one that is unlikely ever to be settled.)

There Will Always Be a Fence
There is always a fence between commercial test automation tool users and developers. This fence is shaped by the nature of the testing being performed, the complexity of the operations performed, and the sales volume of the tool itself. High-volume "shrinkwrapped" tools have high fences, empowering the developer to produce a generalized solution to a common problem. Low-volume, highly customized tools have lower fences, empowering the customer to pursue a specific solution to their specific issue.

For the developer, understanding the customer is a must. This lays the foundation for success, no matter what type of tool is being produced. Customers should try to understand the developer's viewpoint. At its essence, use of a commercial test automation tool is an expression of trust by the customer. Understanding the developers of a tool is an important facet in establishing that trust, especially with high-volume tools where communication is primarily one-way.

Now that you have been chartered with selecting the perfect tools for your organization's test automation effort, feel free to climb over the fence and see the view from the other side. It will help as you contemplate which tools provide the greatest value in your situation. While you're there, take a look at the grass, too, and see if it's more RGB 0, 150, 0 on the other side.
