the product, missing all the important pieces of functionality from Customer B's perspective. Because the coverage is so slight, it's likely that Test System B is a low fidelity test system when considered across the whole customer base. Testers using this test system will miss most must-fix bugs in the product, and therefore the programmers will not be able to repair the must-fix bugs before the system ships. Furthermore, the test results reported to the project management team will mislead them about the state of system quality, leading to ill-informed decision-making and mistaken perceptions. All these factors contribute to a waste of the testing investment.
Let me sum up the distinction. High fidelity test systems focus on tests for key customer scenarios, in likely customer configurations, emphasizing problems that customers would consider important. In other words, testers mimic customer usage by applying high fidelity test systems. Low fidelity test systems test the wrong features, run on the wrong configurations, or report the wrong problems. Of course, it's important to remember that high fidelity and low fidelity aren't binary states. There's a spectrum from the perfect test system to the perfectly useless test system. A team of wise test professionals will work to develop a test system that, within schedule and budgetary constraints, has the highest fidelity possible.
A Cautionary Case Study
Once upon a time there was a test manager who managed a test team that had a fancy automated testing system. The system under test was a multi-OS, multi-database query and reporting system, and the test team had created a test system that sent canned queries and reports into the system under test, then automatically compared the results against baselines. They could test over a dozen OS/database combinations in a couple of days and could run thousands of tests. Sound impressive? Well, these testers were wasting time and money.
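To make the mechanism concrete, here is a minimal sketch of what such a canned-query, baseline-comparison harness might look like. Everything here is a hypothetical illustration, not the actual test system from the case study: the `run_query` stub stands in for issuing a query against one OS/database combination, and the file layout for baselines is invented. Note that building such a harness well is exactly what this team did, and doing so was still a poor investment, because it tested the wrong risks.

```python
# Hypothetical sketch of a baseline-comparison test harness.
# A real harness would run each canned query against every
# OS/database combination under test; this stub simulates one.
import hashlib
import json
from pathlib import Path


def run_query(query: str) -> list[dict]:
    """Stand-in for sending a canned query to the system under test."""
    # Hypothetical canned results; a real harness would hit the product.
    canned_results = {
        "SELECT region, SUM(sales) FROM orders GROUP BY region": [
            {"region": "EMEA", "sales": 1200},
            {"region": "APAC", "sales": 900},
        ],
    }
    return canned_results.get(query, [])


def check_against_baseline(query: str, baseline_dir: Path) -> bool:
    """Run a canned query and compare its results against the stored baseline.

    Returns True if the results match (or a new baseline was recorded).
    """
    actual = run_query(query)
    # Name the baseline file by a stable hash of the query text.
    name = hashlib.sha1(query.encode("utf-8")).hexdigest() + ".json"
    baseline_file = baseline_dir / name
    if not baseline_file.exists():
        # First run: record current output as the golden baseline.
        baseline_file.write_text(json.dumps(actual, sort_keys=True))
        return True
    expected = json.loads(baseline_file.read_text())
    return actual == expected
```

On a first run the harness records baselines; on later runs it flags any query whose output has drifted, which is why such systems scale cheaply across configurations once built.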
Why? The customer pain wasn't that the product tended to return incorrect query results. For the most part, that logic was solid, and the test system rarely found bugs in one database or OS that didn't occur on all of them. Customers disliked the product because it was hard to install and because the ancillary and supporting tools and utilities didn't work. The test team, though, largely ignored these other aspects of the product and focused on propagating their automated test system across many different OS/database combinations, thus creating a low-fidelity test system that aligned poorly with customer usage.
The Next Step
The test manager in that case study was me, over a decade ago. From that mistake, I learned the importance of aligning testing with customer usage. I first started to think about the value of high fidelity test systems. This article has hopefully inspired you to think about that topic, too. Once you start to see the customer as the focal point of your test effort—the person whose experience of quality you are trying to predict before the product ships—you are on the first step of a journey that leads to intelligent management of quality risks.
However, while it is all well and good to assert that you should build high fidelity test systems, that is somewhat like telling someone that they must take an airplane to get from Austin to Aachen in one day. For experienced travelers, that suggestion is helpful. Many people, however, would remain confused about both the journey and the destination. In the next article, I'll show you a couple of analytical techniques you can use to establish a high fidelity test system.