Validating Mission Critical Server Software for Reliability


Milestones for reliability testing depend on various parameters: response time for a request, the number of clients used, resource utilization, and the test execution time. Let us take the client-server application example again to explain these milestones. The parameters are represented using a graph, as below:

The A-axis represents the average response time for a client's request, the B-axis the number of clients used, the C-axis the number of hours the test is executed, and the D-axis the resource utilization during the reliability test. The graph thus depicts four important aspects of reliability testing: the A-axis for performance, the B-axis for scalability, the C-axis for reliability, and the D-axis for the investment in reliability.

According to the entry criterion for reliability testing, the server software should deliver a minimum response time consistently for a specified number of hours and with a specified number of clients. Milestones are set on this basis. For example, in the client-server scenario, a server product runs on a high-end machine and responds to all client requests with minimal response time. The first milestone would be to have a minimum number of clients (say 25) firing requests at the server for a specific period of time (say 24 hours). Once the server handles this load with the required response time, the next milestone (say 50 clients) is set and executed. The number of clients keeps increasing until a specific criterion is met or the server fails to handle the load.

This kind of testing also helps in analyzing the performance of the product; for example, we can find out how the response time of a request varies with the number of clients used. The table below illustrates an example with different milestones set for a reliability test.
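To make the ramp-up concrete, here is a minimal sketch of such a milestone loop in Python. It is not from the original article: the endpoint URL, milestone list, duration, and response-time threshold are all hypothetical placeholders, and the duration is shortened from 24 hours for illustration.

    import concurrent.futures
    import statistics
    import time
    import urllib.request

    # Hypothetical values -- adjust for the actual server under test.
    SERVER_URL = "http://localhost:8080/ping"  # assumed request endpoint
    MILESTONES = [25, 50, 75, 100]             # client counts per milestone
    DURATION_SECONDS = 60                      # shortened from 24 hours for illustration
    MAX_AVG_RESPONSE = 0.5                     # pass/fail threshold, in seconds

    def client_worker(deadline):
        """Fire requests until the deadline; return observed response times."""
        samples = []
        while time.time() < deadline:
            start = time.time()
            try:
                urllib.request.urlopen(SERVER_URL, timeout=5).read()
                samples.append(time.time() - start)
            except OSError:
                samples.append(float("inf"))  # count a failed request as a miss
        return samples

    def run_milestone(num_clients):
        """Run one milestone: num_clients concurrent clients until the deadline."""
        deadline = time.time() + DURATION_SECONDS
        with concurrent.futures.ThreadPoolExecutor(max_workers=num_clients) as pool:
            futures = [pool.submit(client_worker, deadline) for _ in range(num_clients)]
            samples = [s for f in futures for s in f.result()]
        return statistics.mean(samples) if samples else float("inf")

    if __name__ == "__main__":
        for clients in MILESTONES:
            avg = run_milestone(clients)
            print(f"{clients} clients -> average response {avg:.3f}s")
            if avg > MAX_AVG_RESPONSE:
                print("Milestone failed; stopping the ramp here.")
                break

In a real run, each milestone would execute for the full specified period (for example, 24 hours), and the results would feed into a milestone table like the one described above.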

The test execution process for reliability is repetitive and time consuming, so the best approach is to automate the test cases; without automation, the desired results of reliability testing cannot be achieved at all. Automation greatly reduces the test engineer's intervention with the product, saves a lot of time, reduces human error (since there is less intervention), and allows the test to be executed repeatedly. Automating the test cases for reliability, however, is not a simple task. A test suite must meet many requirements:

  • Configuration parameters that are not hard-coded
  • Test case/suite expandability
  • Reuse of code for different types of testing
  • Coding standards and modularization
  • Selective and random execution of test cases
  • Remote execution of test cases
  • Reporting scheme

In addition, if the test stops midway for any reason, the test suite should either report the reason for the failure or resume execution after skipping the failed step. These are essential requirements for a reliability test suite; a minimal sketch of a harness that meets a few of them follows.
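As an illustration of externalized configuration and resumable execution, here is a minimal sketch, again in Python and not from the original article. The file names, configuration keys, and test names are all hypothetical.

    import json
    import os

    CONFIG_FILE = "reliability_config.json"     # hypothetical config file
    CHECKPOINT_FILE = "reliability_checkpoint"  # records completed test names

    def load_config():
        """Read run parameters from a file so nothing is hard-coded in the tests."""
        with open(CONFIG_FILE) as f:
            # e.g. {"tests": ["login", "search"], "clients": 25, "hours": 24}
            return json.load(f)

    def run_test(name, config):
        """Placeholder: invoke the real test case here."""
        print(f"running {name} with {config['clients']} clients for {config['hours']}h")
        return True

    def main():
        config = load_config()
        done = set()
        if os.path.exists(CHECKPOINT_FILE):  # resume an interrupted run
            with open(CHECKPOINT_FILE) as f:
                done = set(f.read().split())
        for name in config["tests"]:
            if name in done:
                continue  # already executed before the interruption
            if not run_test(name, config):
                print(f"{name} failed; see logs for the reason")  # report, then move on
            with open(CHECKPOINT_FILE, "a") as f:
                f.write(name + "\n")

    if __name__ == "__main__":
        main()

Because completed tests are recorded in the checkpoint file as they finish, a rerun after an interruption skips them and picks up where the previous run stopped.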

4. Collecting and Interpreting Reliability Data
The results are collected and a report is generated that helps in predicting when the software will attain the desired reliability level. The report can be used to decide whether to continue reliability testing, and to demonstrate the impact on reliability of a decision to deliver the software on an arbitrary (or early) date. The graph below illustrates a typical reliability curve, plotting failure intensity over the test interval.

The failure intensity figures are obtained from the results gathered during testing. Test execution time represents the iterative tests executed. It is assumed that after each test iteration, critical faults are fixed and a new version of the software is used for the next iteration. Failure intensity therefore drops with each iteration, and the curve gradually flattens as the software approaches the desired reliability level.
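As a simple illustration of how such figures are derived, the sketch below computes failure intensity (failures per hour of test execution) for a series of iterations. The per-iteration data and the failure-intensity objective are hypothetical, not taken from the article.

    # Hypothetical per-iteration results: (test execution hours, failures observed).
    ITERATIONS = [(24, 12), (24, 8), (24, 5), (24, 3), (24, 1)]
    OBJECTIVE = 0.05  # assumed failure-intensity objective, failures per hour

    for i, (hours, failures) in enumerate(ITERATIONS, start=1):
        intensity = failures / hours  # failure intensity for this iteration
        print(f"iteration {i}: {intensity:.3f} failures/hour")
        if intensity <= OBJECTIVE:
            print("Objective met; reliability testing can stop here.")
            break

Plotting these values against cumulative execution time yields the decreasing, flattening curve described above.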


About the author

Srinivasan Desikan

Srinivasan Desikan is the director of quality engineering at Siebel Systems, Inc., in Bangalore, India. Srinivasan has more than seventeen years of experience in test automation, test management, test processes, and in setting up test teams from scratch. He has delivered talks at several international testing conferences and regularly gives lectures and tutorials at universities and companies in India. Srinivasan has co-authored the book "Software Testing--Effective Methods, Tools and Techniques," published by Tata McGraw-Hill, India, and is currently co-authoring "Comprehensive Guide to Software Testing," to be published by Pearson Education in July 2005. Some of his articles can be accessed at www.StickyMinds.com. Srinivasan holds a post-graduate degree in computer applications (MCA) from Pondicherry Engineering College and a bachelor's degree in computer science from Bharathidasan University.
