Understanding Both Sides of the Test Tool Fence


and the pass-fail criteria are all defined. The language-sensitive editor in AutoExec is fairly powerful, but it does not provide template test program files or automatically create an executive function invoking each test.

Clearly, the user is still required to perform the more strenuous mental exercises of test definition and scripting (creating the executive function), but AutoExec relieves the user of the burden of compilation, linking, execution, results comparison, and report generation. You can see how low complexity can allow for a higher fence because the task being automated is generally straightforward and generic. As the level of specificity in the task increases, so does the need for freer communication between vendor and customer.
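
To make that division of labor concrete, here is a minimal sketch of the kind of executive function a user might still write by hand for a tool like AutoExec. Because AutoExec is hypothetical, every name here (TestCase, run_test_case, the multiply stand-in) is purely illustrative; the point is only that the user defines the tests and wires them together, while the tool would take over execution, results comparison, and reporting.

```python
# Illustrative sketch only: a hand-written "executive function" of the kind
# a user of a hypothetical AutoExec-style tool would still have to supply.

from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    inputs: dict       # input settings for the unit under test
    expected: dict     # expected results / pass-fail criteria


def multiply(a, b):
    """Stand-in for the unit under test."""
    return a * b


def run_test_case(case: TestCase) -> bool:
    """Invoke the unit under test and compare actual against expected results."""
    actual = {"product": multiply(case.inputs["a"], case.inputs["b"])}
    return actual == case.expected


def executive():
    """The user-written executive function invoking each test in turn."""
    cases = [
        TestCase("nominal", {"a": 2, "b": 3}, {"product": 6}),
        TestCase("zero",    {"a": 0, "b": 7}, {"product": 0}),
    ]
    # A tool like AutoExec would handle comparison and report generation;
    # here we just print a minimal pass/fail summary.
    for case in cases:
        print(f"{case.name}: {'PASS' if run_test_case(case) else 'FAIL'}")


if __name__ == "__main__":
    executive()
```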

Test scripting tools, where the user supplies the test parameters used to formulate the test script, are of moderate complexity. In this case, the tool is responsible for creating the scripts that run the tests, in addition to performing the relevant test execution tasks. The user must still determine the test parameters, create the test cases, and feed them into the tool.

As an example, consider the hypothetical AutoScript companion to AutoExec. AutoScript provides a front end to feed test cases into AutoExec. Using AutoScript, the user need only provide a database of the variables, the input settings for each test case, and the expected results.

In return, AutoScript

  1. Imports the test cases and creates the test driver
  2. Validates the test cases
  3. Executes the test cases using AutoExec

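A rough sketch of that workflow, in Python for illustration: the user's "database" of test cases is a small table of inputs and expected results, and the code below maps loosely onto the three numbered steps above, importing the table, validating each case, building a driver, and executing it. None of this reflects a real AutoScript interface; the table format and function names are assumptions made for the example.

```python
# Hypothetical sketch of an AutoScript-style workflow. The user supplies a
# table of test cases (variables, input settings, expected results); the
# tool imports, validates, and turns them into an executable driver.

import csv
import io

# The user-supplied "database" of test cases, shown inline as CSV for brevity.
TEST_CASE_DB = """name,a,b,expected_product
nominal,2,3,6
zero,0,7,0
negative,-4,5,-20
"""


def import_test_cases(raw: str) -> list[dict]:
    """Step 1: import the test cases from the user-supplied table."""
    return list(csv.DictReader(io.StringIO(raw)))


def validate_test_cases(cases: list[dict]) -> None:
    """Step 2: validate that every case supplies the required fields."""
    required = {"name", "a", "b", "expected_product"}
    for case in cases:
        missing = required - set(case)
        if missing or any(case[field] == "" for field in required):
            raise ValueError(f"incomplete test case: {case}")


def build_driver(cases: list[dict]):
    """Step 1 (continued): create a driver an execution tool could run."""
    def driver(unit_under_test):
        for case in cases:
            actual = unit_under_test(int(case["a"]), int(case["b"]))
            passed = actual == int(case["expected_product"])
            print(f"{case['name']}: {'PASS' if passed else 'FAIL'}")
    return driver


if __name__ == "__main__":
    # Step 3: execute the test cases (here directly; conceptually via AutoExec).
    cases = import_test_cases(TEST_CASE_DB)
    validate_test_cases(cases)
    build_driver(cases)(lambda a, b: a * b)  # stand-in for the unit under test
```
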
AutoScript takes care of formatting and feeding tests into the automated execution tool, while the user retains responsibility for test case definition. Moderate complexity calls for a lower fence than low complexity does. In our hypothetical case, for example, the developer and customer must agree on how test cases are loaded into AutoScript.

Test definition tools that use few user-supplied parameters are of the highest complexity. Such tools utilize their native understanding of test theory to create appropriate test cases according to the design under examination. The user is responsible only for providing the design to the tool. Because the tasks performed by this type of tool are so complex, the fence between customers and developers is generally lower. The complexity and specificity of the tasks require the developer to work more closely with the customer to ensure that the methods of specifying the design and data dictionary are both adequate and understood.
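
To give a sense of what such a tool does with only a design as input, here is a hedged sketch that assumes the design has been reduced to a tiny data dictionary of input variables and their valid ranges, and that the tool applies one standard piece of test theory (boundary-value analysis) to derive cases. The dictionary format and function names are invented for illustration and do not describe any particular product.

```python
# Illustrative sketch of a high-complexity test definition tool: given only
# a data dictionary describing the design's inputs, it derives test cases
# itself using boundary-value analysis.

from itertools import product

# A minimal "data dictionary" for the design under examination:
# each input variable with its valid range.
DATA_DICTIONARY = {
    "temperature": {"min": -40, "max": 125},
    "pressure":    {"min": 0,   "max": 300},
}


def boundary_values(spec: dict) -> list[int]:
    """Derive boundary-value test points for one variable."""
    lo, hi = spec["min"], spec["max"]
    return [lo, lo + 1, hi - 1, hi]


def define_test_cases(dictionary: dict) -> list[dict]:
    """Combine per-variable boundary values into whole test cases."""
    names = list(dictionary)
    value_sets = [boundary_values(dictionary[name]) for name in names]
    return [dict(zip(names, combo)) for combo in product(*value_sets)]


if __name__ == "__main__":
    for case in define_test_cases(DATA_DICTIONARY):
        print(case)
```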

The bottom line: Higher tool complexity empowers a tool's user (a specific problem is addressed by a specific solution). Lower complexity empowers the developer (a general solution is preferable to a highly tailored one).

Looking Across the Fence
All fences have two sides. In the context of evaluating a test automation tool (and its vendor), the view from both sides must be examined.

From the customer's side, the view tends to be panoramic, encompassing many tools and vendors at once. Customers typically shop around for the best value, factoring in their own key questions and data points. The specific criteria applied vary in accordance with the nature of the tool and the degree to which the customer is able to shape it.

For high-volume tools, where the developer is empowered, user-customization needs are generally accommodated by customer development of support utilities that help fit the tool into the process and development environment. The tool itself is not likely to be malleable and there is probably not much direct communication between tool developer and user. The main power granted the users is the ability to "vote with their feet" during the tool evaluation and selection phase. Once a tool is purchased and deployed,

About the author

Steve Morton

Steve Morton is an automated test tool developer by trade, operating in an arena of low volume and high expectations for the past eight-plus years. He has primarily worked on an automated, low-level structural analysis and test definition tool tailored for safety- and mission-critical software development markets such as the aerospace and medical device industries. His ongoing efforts to bridge his developer's viewpoint and the viewpoint of a tool's customers have led him to examine closely the nature of the relationship between developers and users of automated software testing tools. In addition, Steve has used both commercial and homegrown testing tools throughout his career, enabling him to stand on both sides of the automated software test tool fence.
