Understanding the Logic of System Testing


An argument consists of two parts: a conclusion and premises offered in its support. Premises, in turn, are composed of an implication and evidence. Implications are usually presented as conditional propositions, for example, (if A, then B). They serve as a bridge between the conclusion and the evidence from which the conclusion is derived [1]. Thus, the implication is very important for constructing a logical argument, as it sets the argument's structure and meaning. In the following, I will apply this concept to software testing and identify the argument components that can be used in testing to construct valid proofs.
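To make this structure concrete, here is a minimal sketch in Python. The encoding of the implication as an (antecedent, consequent) pair and the string-valued evidence are my own illustrative assumptions, not anything prescribed by the article:

```python
# A minimal sketch of an argument: an implication ("if A, then B") plus
# evidence that A holds; the conclusion B follows by modus ponens.

def conclude(implication, evidence):
    """Derive a conclusion from the premises: an implication and evidence."""
    antecedent, consequent = implication
    if evidence == antecedent:
        return consequent
    return None  # the evidence does not establish the antecedent

# Implication: "if all expected results are observed, then the feature passes"
implication = ("all expected results observed", "feature passes")
evidence = "all expected results observed"

print(conclude(implication, evidence))  # -> "feature passes"
```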

In software testing, we derive and report conclusions about the quality of a product under test. In particular, in system testing a common unit of a tester's work is testing a software feature (also known in the Rational Unified Process as a test requirement); the objective is to derive a conclusion about the feature's testing status. Hence, the feature status, commonly captured as pass or fail, can be considered the conclusion of the logical argument. To derive such a conclusion, test cases are designed and executed. Executing test cases yields information, i.e., evidence, that will support the conclusion. Deriving a valid conclusion also requires implications, which in system testing are known as a feature's pass/fail criteria. Finally, both the feature's pass/fail criteria and the test case execution results are the premises from which a tester derives a conclusion. A lack of understanding of how system testing logic works can lead to various issues, some of the most common of which I will discuss next.
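As a sketch of how these pieces fit together, assume a common pass/fail criterion: "if every test case produces its expected result, the feature passes; if any does not, the feature fails." The function and the result data below are hypothetical illustrations, not the article's definitions:

```python
# A sketch of the testing argument under the assumed criterion above.

def feature_status(execution_results):
    """Apply the pass/fail criterion (the implication) to the test
    execution results (the evidence) to derive the conclusion."""
    return "pass" if all(r == "pass" for r in execution_results) else "fail"

evidence = ["pass", "pass", "fail"]  # test case execution results
print(feature_status(evidence))      # -> "fail"
```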

Common Issues With Testing Logic

Issue 1: Disagreeing About the Meaning of Test Cases
Software testers frequently disagree about the meaning of test cases. Many testers would define a test case as the whole set of information designed for testing the same software feature and presented as a test-case specification. Their argument is that all test inputs and expected results are designed for the same objective, i.e., testing the same feature, and all of them are used as supporting evidence that the feature passed testing.

For other testers, a test case consists of each pair, a test input and its expected result, in the same test-case specification. In their view, such a test-case specification presents a set of test cases. To support their point, they refer to various textbooks on test design, for example [4, 5], that teach how to design test cases for boundary conditions, valid and invalid domains, and so on. Commonly, these textbooks focus their discussion on designing test cases that can be effective in finding bugs. Therefore, they call each pair of test input and expected output a test case because, assuming a bug is in the code, such a pair provides sufficient information to find the bug and conclude that the feature failed testing. A sketch of this view follows.
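To illustrate, here is a hypothetical test-case specification for an "accept ages 0 through 130" validation rule (the rule and values are my own example). Under the second view, each (input, expected result) pair is a test case; under the first view, the whole table is one test case:

```python
# Hypothetical test-case specification for an "accept ages 0-130" rule,
# designed around its boundary conditions.
age_spec = [
    (-1,  False),  # just below the lower boundary -> rejected
    (0,   True),   # lower boundary -> accepted
    (130, True),   # upper boundary -> accepted
    (131, False),  # just above the upper boundary -> rejected
]
```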

Despite these different views, both groups actually imply the same meaning of the term test case: information that provides grounds for deriving a conclusion about the feature's testing status. However, there is an important difference: the first group calls a test case the information that supports the feature's pass status, while the second group calls a test case the information that supports the feature's fail status. Such confusion apparently stems from the fact that the known definitions of the term test case do not relate it to a feature's pass/fail criteria. However, as the discussion in the sidebar shows, these criteria are key to understanding the meaning of test cases.
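The pass/fail criteria make both readings precise. Continuing the hypothetical age-validation sketch, the pass criterion quantifies over the entire specification, whereas the fail criterion is satisfied by a single failing pair:

```python
# Continuing the hypothetical age-validation sketch from above.
def accepts(age):  # a stand-in implementation under test
    return 0 <= age <= 130

age_spec = [(-1, False), (0, True), (130, True), (131, False)]

# Pass criterion: every pair must match (the first group's "test case").
feature_passes = all(accepts(age) == expected for age, expected in age_spec)
# Fail criterion: one mismatching pair suffices (the second group's "test case").
feature_fails = any(accepts(age) != expected for age, expected in age_spec)

print(feature_passes, feature_fails)  # -> True False
```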

Issue 2: Presenting an Argument Without a Conclusion
This issue is also very common. As

About the author


Yuri Chernak, Ph.D., is the president and principal consultant of Valley Forge Consulting, Inc. Yuri has worked for a number of major financial firms in New York, leading QA governance committees in IT and helping clients improve their software requirements and software testing practices. Yuri is a pioneer in implementing a new discipline, aspect-oriented requirements engineering, for financial applications on Wall Street. He is a member of the IEEE Computer Society, has been a speaker at several international conferences in the US and Canada, and has published papers in IEEE publications and other professional journals. Contact Yuri at ychernak@yahoo.com.
