approaches the pre-defined stable minimum criterion.
Figure 2 illustrates a case where the process for fixing detected errors is not under control, or where the detected failures have forced a major shift in the design.
Failure intensity drops, spikes, and then declines gradually. The spike shows that new errors were introduced while the known errors were being fixed. This graph identifies two potential problems: the process for fixing errors may be inadequate, and there may be weak areas in the development process itself.
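The stopping rule implied above, that testing continues until failure intensity settles at the stable minimum criterion, can be illustrated with a small check. This is a hypothetical sketch; the function name, the window count, and the failures-per-hour units are assumptions, not from the original text.

```python
def meets_objective(intensities, objective, stable_windows=3):
    """Return True when measured failure intensity (e.g. failures/hour)
    has stayed at or below the reliability objective for the last
    `stable_windows` consecutive measurement windows."""
    recent = intensities[-stable_windows:]
    return len(recent) == stable_windows and all(i <= objective for i in recent)

# A declining run that stabilizes below a 1.0 failures/hour objective passes;
# a run still above the objective, or one with too few windows, does not.
```

A spike like the one in Figure 2 resets the criterion naturally: the elevated windows keep the check failing until intensity settles again.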
Apart from analyzing functional failures, resource utilization by the software should also be monitored. A tool should run in the background throughout the test, tracking resources such as memory and CPU and logging a report at regular intervals. A sample log file is shown below:
CPU and memory utilization must remain consistent throughout the test execution. If they keep increasing, other applications on the machine can be affected; the machine may even run out of memory, hang, or crash, in which case it must be restarted. Memory build-up problems are common in server software, and they require considerable time and effort to resolve.
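One way to catch the memory build-up described above early is to fit a trend line to the logged memory samples: a persistently positive slope over a long run suggests a leak rather than normal fluctuation. A minimal least-squares sketch, with an assumed function name:

```python
def memory_trend(samples_kb):
    """Least-squares slope of memory samples, in KB per logging interval.
    A slope near zero means stable usage; a persistently positive slope
    across a long run indicates memory build-up."""
    n = len(samples_kb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_kb) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_kb))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den
```

Flat usage such as `[100, 100, 100, 100]` yields a slope of zero, while steady growth such as `[100, 110, 120, 130]` yields a clearly positive slope.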
In any enterprise scenario, the software product is expected to be up and running with minimal failures. Testing for software reliability plays a vital role in ensuring the product meets the customer's expectations. Reliability testing needs proper planning and execution, and automation helps in carrying out the tests. The test results help in deciding whether the software has met the desired quality level and support a sound release decision.