Validating Mission Critical Server Software for Reliability

Member Submitted
  • effects after the repeated transactions
  • The server software scales up with the number of processors for repeated transactions (this implies that the future needs of the software can be met by increasing system resources)

All participants in the development process need to be aware of these reliability issues and of their priority to the customer. Where possible, this awareness should come from direct contact with the customer during the requirements-gathering phase. Ideally, reliability issues should be determined up front, before design begins, because most reliability defects require design changes to fix.

3. Planning and Executing Reliability Tests
Test cases shall be designed to satisfy the operational requirements of the software and shall be run on the complete software. Depending on the form of the reliability requirement, either the 'time to failure', the 'number of transactions to failure', or a similar measure shall be recorded. When the software fails, the fault that led to the failure shall be filed in the defect tracking system and the responsible development team notified. Testing shall continue until sufficient measurements are available to predict that the required level of reliability has been achieved. The software's reliability shall be measured from the records of this testing.
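As a hypothetical illustration (the failure records below are invented, not from any real test run), a minimal Python sketch of how 'time to failure' and 'transactions to failure' measurements might be aggregated from such records:

```python
from statistics import mean

# Hypothetical failure records from one reliability test campaign:
# (hours of operation before failure, transactions completed before failure).
failure_records = [
    (36.5, 120_000),
    (52.0, 180_500),
    (41.2, 140_750),
]

mttf_hours = mean(t for t, _ in failure_records)            # mean time to failure
mean_txn_to_failure = mean(n for _, n in failure_records)   # mean transactions to failure

print(f"MTTF: {mttf_hours:.1f} hours")
print(f"Mean transactions to failure: {mean_txn_to_failure:,.0f}")
```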

Testing can be done in two ways. On one hand, testing can be seen as a means of achieving reliability: here the objective is to probe the software for faults so that these can be removed and its reliability improved. Alternatively, testing can be seen as a means of gaining confidence that the software is sufficiently reliable for its intended purpose: here the objective is reliability evaluation. In reliability testing, defect density (often expressed as a 3-sigma or 6-sigma quality level) and criticality factors are used to allocate test effort to the most important or most frequently used functions first, in proportion to their significance to the system. In this way, reliability testing also exercises most of the product's functional areas during test execution.
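That allocation can be sketched as follows; the function names, usage probabilities, and criticality factors are assumptions for illustration only:

```python
# Usage probabilities and criticality factors per function (assumed values).
usage = {"login": 0.40, "search": 0.30, "checkout": 0.20, "report": 0.10}
criticality = {"login": 1.0, "search": 0.8, "checkout": 1.0, "report": 0.5}

# Weight each function by how often it is used and how critical it is.
weights = {f: usage[f] * criticality[f] for f in usage}
total_weight = sum(weights.values())

# Spread a fixed test budget in proportion to those weights.
test_budget = 500
allocation = {f: round(test_budget * w / total_weight) for f, w in weights.items()}
print(allocation)  # frequently used, critical functions get the most cases
```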

Reliability testing is coupled with the removal of faults and is typically carried out when the software is fully developed and in the system test phase. After faults are fixed, a new version of the software is built and another test iteration occurs. Failure intensity is tracked in order to guide the test process and to determine whether the software is ready for release.
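A minimal sketch of tracking failure intensity across builds and comparing it with a release objective might look like this (the build names, counts, and objective are all invented):

```python
# One record per test iteration / build (all numbers invented).
iterations = [
    {"build": "1.0.3", "failures": 14, "test_hours": 200},
    {"build": "1.0.4", "failures": 6,  "test_hours": 220},
    {"build": "1.0.5", "failures": 2,  "test_hours": 250},
]

RELEASE_OBJECTIVE = 0.01  # failures per test hour (assumed requirement)

for it in iterations:
    intensity = it["failures"] / it["test_hours"]
    verdict = "meets objective" if intensity <= RELEASE_OBJECTIVE else "keep testing"
    print(f"{it['build']}: {intensity:.3f} failures/hour -> {verdict}")
```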

While planning reliability test cases, one needs to consider failures of the various subsystems and design test cases for them. For example, in a server with two network interface cards, if one card fails the software should continue to run on the remaining card with minimal degradation in performance.
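A failover test along these lines might be structured as below; this is only a skeleton, with an assumed interface name (eth1), an assumed tolerance of 20 percent degradation, and a stubbed load driver that a real harness would replace:

```python
import subprocess
import time

def measure_throughput(url: str, seconds: int = 30) -> float:
    """Stub load driver: a real harness would drive transactions at the
    server (e.g., with a load tool) and return transactions per second."""
    return 0.0  # placeholder so the skeleton runs end to end

# Baseline throughput with both network cards up.
baseline = measure_throughput("http://server-under-test/")

# Fail one card (Linux 'ip' command; requires root; 'eth1' is assumed).
subprocess.run(["ip", "link", "set", "eth1", "down"], check=True)
time.sleep(5)  # allow failover to the surviving card to settle

degraded = measure_throughput("http://server-under-test/")
subprocess.run(["ip", "link", "set", "eth1", "up"], check=True)

# The criterion from the text: the software keeps running on the other
# card with minimal performance degradation (20% tolerance assumed here).
assert degraded >= 0.8 * baseline, "excessive degradation after NIC failure"
```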

Quite often, mechanisms such as tunable parameters and caching, which are used to improve the performance of the product, have side effects that impact its reliability. These parameters need to be included when testing the reliability of the software.
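One way to fold tunables into reliability testing is to sweep them systematically; the parameter names and value ranges below are assumptions, and the suite runner is a stub:

```python
import itertools

# Tunables that can affect both performance and reliability (assumed).
cache_sizes_mb = [64, 256, 1024]
thread_pool_sizes = [8, 32, 128]

def run_reliability_suite(cache_mb: int, threads: int) -> int:
    """Stub: reconfigure the server with these tunables, run the
    repeated-transaction suite, and return the number of failures."""
    return 0

# Run the reliability suite across every combination of tunables so the
# side effects of caching and other parameters are exercised.
for cache_mb, threads in itertools.product(cache_sizes_mb, thread_pool_sizes):
    failures = run_reliability_suite(cache_mb, threads)
    print(f"cache={cache_mb}MB threads={threads}: {failures} failures")
```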

The best results in reliability testing are achieved by setting up a task force of developers, test engineers, and designers, because defining reliability requirements and reproducing reliability defects are often very difficult. Moreover, analyzing any reliability behavior requires a sustained effort by all members of the task force to collect and analyze data.

The testing procedure executes the test cases in random order, but because test cases are allocated according to usage probability, those associated with events of greatest importance to the customer are likely to occur more often within the random selection.
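This weighted random selection is easy to sketch; the test case names and usage probabilities below are invented:

```python
import random

# Test cases and their usage probabilities (invented for illustration).
test_cases = ["login_txn", "search_txn", "checkout_txn", "report_txn"]
usage_probability = [0.40, 0.30, 0.20, 0.10]

random.seed(42)  # fixed seed so a failing run order can be reproduced
run_order = random.choices(test_cases, weights=usage_probability, k=20)
print(run_order)  # customer-critical cases appear most often
```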

Once the product meets the minimum entry criteria, reliability testing starts, and different milestones are set to test the product more effectively. These milestones


About the author

Srinivasan Desikan

Srinivasan Desikan is the director of quality engineering at Siebel Systems, Inc., in Bangalore, India. He has more than seventeen years' experience in test automation, test management, test processes, and in setting up test teams from scratch. He has delivered talks at several international testing conferences and regularly gives lectures and tutorials at universities and companies in India. He co-authored the book "Software Testing--Effective Methods, Tools and Techniques," published by Tata McGraw-Hill, India, and is currently co-authoring "Comprehensive guide to software Testing," to be published by Pearson Education in July 2005. Some of his articles can be accessed at www.StickyMinds.com. Srinivasan holds a post-graduate degree in computer applications (MCA) from Pondicherry Engineering College and a bachelor's degree in computer science from Bharathidasan University.
