way, you should have a good idea as to the quality of your production product. The best you can add to that is to have a responsive support team. Such a team can turn a product flaw into an opportunity to show your customers that they are important.
Another crucial component of test case integration with your CM environment is the tracking of test results, or test run data. A Test Run might be defined as a set of Test Cases to be run against a particular build (or perhaps a related series of builds). Typically a test run is completed over some period of time, such as a few days or weeks; the more automation you have, the faster it goes. As well, a test run is typically completed by multiple testers.
I like to break Test Runs down into Test Sessions. Each Test Session identifies a particular tester executing a subset of the Test Run's test cases against a particular build. All of the test sessions considered together form the test run. Note that variant builds might be tested under the same test run, but only a single build should be used by a given test session. Test sessions can be tracked as actual sessions (i.e., time periods spent by each tester), or they may span multiple time periods for a given tester working against a given build.
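To make the structure concrete, here is a minimal sketch of this model in Python. The class and field names (TestRun, TestSession, build_id, and so on) are my own assumptions for illustration, not a prescribed schema; the point is that the build belongs to the session, not to the run, so variant builds can share a run while each session stays tied to one build.

```python
from dataclasses import dataclass, field

@dataclass
class TestSession:
    """One tester executing a subset of the run's test cases against one build."""
    tester: str
    build_id: str                    # exactly one build per session
    results: dict[str, str] = field(default_factory=dict)  # test_case_id -> "pass"/"fail"

@dataclass
class TestRun:
    """A set of test cases run, over days or weeks, against a build or a related series of builds."""
    run_id: str
    test_case_ids: set[str]          # the test cases planned for this run
    sessions: list[TestSession] = field(default_factory=list)

    def results(self) -> dict[str, str]:
        """All of the test sessions considered together form the test run."""
        merged: dict[str, str] = {}
        for session in self.sessions:
            merged.update(session.results)  # later sessions win on re-runs
        return merged
```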
Your test run data, along with your test case repository, should allow you to get answers to some basic questions (a sketch of a few such queries follows the list):
- When did this test case last pass or fail?
- What problems arose out of testing... by tester... by variant?
- What are the pass/fail ratios across all test runs for a given stream?
- Which test cases were not run as part of a specific test run?
- Which areas of test cases give me the best return for my money?
- Which failed test cases have been fixed (i.e. are expected to pass) in the new build?
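Building on the hypothetical TestRun/TestSession sketch above, several of these queries become a few lines each. The functions below are illustrative assumptions (including the convention that runs is ordered oldest-first), not a definitive reporting API.

```python
def not_run(run: TestRun) -> set[str]:
    """Which test cases were not run as part of a specific test run?"""
    return run.test_case_ids - run.results().keys()

def pass_fail_ratio(runs: list[TestRun]) -> float:
    """Pass/fail ratio across a set of test runs, e.g. all runs for a given stream."""
    outcomes = [o for run in runs for o in run.results().values()]
    passed = outcomes.count("pass")
    failed = outcomes.count("fail")
    return passed / failed if failed else float("inf")

def last_outcome(runs: list[TestRun], test_case_id: str) -> str | None:
    """Did this test case pass or fail the last time it was run?"""
    for run in reversed(runs):       # runs assumed ordered oldest-first
        outcome = run.results().get(test_case_id)
        if outcome is not None:
            return outcome
    return None                      # never run
```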
I'm sure you can add to this list. The point is that tracking test cases goes beyond the test cases themselves into the running of the test cases. Typically the test case tracking will make it easy to identify passed test cases (as these should be in the majority). As well, an integration with your test environment should make it easy to upload failure results directly into the test run database.
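As an example of that last point, here is a minimal sketch assuming a pytest-based test environment; the upload endpoint and the record_failure() helper are hypothetical stand-ins for whatever your test run database actually accepts.

```python
# conftest.py
import json
import urllib.request

TEST_RUN_DB_URL = "http://cm-server/test-runs/current/failures"  # hypothetical endpoint

def record_failure(test_case_id: str, detail: str) -> None:
    """Post one failure result to the test run database."""
    payload = json.dumps({"test_case": test_case_id, "detail": detail}).encode()
    req = urllib.request.Request(
        TEST_RUN_DB_URL, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def pytest_runtest_logreport(report):
    # Upload each failure directly into the test run database as it happens.
    if report.when == "call" and report.failed:
        record_failure(report.nodeid, str(report.longrepr))
```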
A trickier capability is to enable a single problem report to be spawned for a multitude of test case failures that are all caused by a single problem. For this reason, it is recommended that problem reports from failed test cases either be raised as a result of investigating the cause of the failed test case, or else be raised in a problem report domain separate from the development problem domain. Especially with novice or improperly instructed testers, problem reports raised directly from test results are likely to contain a lot of duplication. A failed test case should be treated as a symptom, not a problem. The problem could be a bad test bed, a bad test case, or a problem in the software. The first and last of these could easily cause multiple test case failures for a single problem. Special emphasis on education, and care, are needed to avoid the administration, and possible rework, that come from multiple problem reports rooted in the same problem.
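One way to approach this is to group failures by a suspected root-cause signature before any problem reports are spawned. The sketch below uses the first line of the failure detail as that signature; that heuristic, and the report shape, are assumptions for illustration, since real triage means a tester investigating the cause first.

```python
from collections import defaultdict

def spawn_problem_reports(failures: dict[str, str]) -> list[dict]:
    """Group failed test cases (test_case_id -> failure detail) so that one
    underlying problem (bad test bed, bad test case, or software fault)
    spawns one problem report rather than many duplicates."""
    by_signature: dict[str, list[str]] = defaultdict(list)
    for test_case_id, detail in failures.items():
        signature = detail.splitlines()[0] if detail else "unknown"
        by_signature[signature].append(test_case_id)
    return [{"summary": sig, "failed_test_cases": cases}
            for sig, cases in by_signature.items()]
```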
Beware of test case management systems which track test results (test run information) directly against the test case. Ask these questions:
- Can I have parallel test runs going on at the same time?
- Can I relate