Every test manager's nightmare is finding out that not enough testing has been done. The paradox is that enough testing often comes with a heavy price tag in cost and effort, while under-testing the product due to time and cost limitations eventually drives up the cost of support and maintenance. The real challenge is to achieve optimum quality at the lowest cost.
Let us understand the problem a little more deeply. Consider a web-based product for which around 1,000 test cases have been written. Assume this product is supported on 2 database servers, 3 operating systems, 2 web/app servers, and 2 web browsers. That makes a total of 2 × 3 × 2 × 2 = 24 combinations of supported platforms.
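The combination count above is just the Cartesian product of the platform components. A minimal sketch (the component names are placeholders; the counts come from the article):

```python
from itertools import product

# Supported platform components (counts from the article; names are made up).
databases = ["Oracle", "MySQL"]                    # 2 database servers
operating_systems = ["Windows", "Linux", "Solaris"]  # 3 operating systems
app_servers = ["Tomcat", "WebLogic"]               # 2 web/app servers
browsers = ["IE", "Firefox"]                       # 2 web browsers

# Every supported platform is one tuple from the Cartesian product.
combinations = list(product(databases, operating_systems, app_servers, browsers))
print(len(combinations))  # 24
```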
Executing 1,000 test cases on each of these combinations would give any test manager a shot of confidence in the product's quality, having looked into every nook and corner. But it would cost around 22 man-months of effort (assuming an average tester can execute around 50 test cases a day)! That is far too high and practically impossible, and it would further delay the time to market.
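The 22 man-month figure can be checked with back-of-the-envelope arithmetic (the 22-working-days-per-month figure is my assumption; the rest comes from the article):

```python
# Effort estimate for exhaustive execution across all platform combinations.
test_cases = 1000        # test cases written (from the article)
combinations = 24        # supported platform combinations (from the article)
per_tester_day = 50      # executions per tester per day (from the article)
days_per_month = 22      # working days per month (assumption)

total_executions = test_cases * combinations        # 24,000 executions
tester_days = total_executions / per_tester_day     # 480 tester-days
man_months = tester_days / days_per_month           # ~21.8, i.e. ~22 man-months
print(round(man_months))  # 22
```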
The first thing that comes to mind is automation, and it can certainly help. But creating an automated testing environment also requires time and effort, and if the product under test is not yet stable, the automated tests will need heavy maintenance. So automation is not practical for such products. Once the product is stable, automation is a must for regression testing. For products that are not yet stable, the only option left is to take a risk, in a calculated way. How?
First things first: the manager has to lay down the goals for the exercise. The goal should be to discover almost all of the defects within the permissible time and cost limits. Here is an approach to the problem.
- Know the features of the application under test well. This helps you identify how each feature depends on the platform under test. For example, if a particular web page is static, you do not need to test it against different databases or operating systems; you may still choose to test it on different web browsers, since browsers may render such pages differently. You need to know which component of the supported platforms affects which feature.
- Know the supported components and their intricacies well. This leads you toward the defects likely to arise from a change in a platform component. For example, if Windows XP and Red Hat Linux are the two supported OS flavors, knowing the differences between them helps you identify how those differences can affect the product under test, and your tests can be aligned to probe exactly those differences. One such difference is that file names are case sensitive on Linux but not on Windows. If your product uploads files to a server running one of these OSes, then on Windows, uploading the same file with the case of the file name changed should either overwrite the existing file or prompt about the duplicate; on Linux, it would simply be stored as another file.
- Come up with a list of the most important platforms. This means knowing which platforms are the most popular with your customers. Even the least severe defect visible to everyone will be more critical than the