Understanding Software Performance Testing, Part 4


function and performance model for performance testing. Under this process, the scripts may need to be more flexible than if a single login or rendezvous checkpoint is used prior to test execution. I find this approach uncovers some types of problems more easily than running a large number of users does.

If the results of the usage-based tests using typical or average load are acceptable, other types of tests can now be run. It will be necessary to restore the environment prior to the next set of tests, as the tests will have altered the database. Which other types of tests we execute will depend on what we defined in the early stages of the planning process:

  • Stress tests
    • Busy hour (1.6x average) or another defined value
    • Busy five minutes (4x average), an extreme peak; a worked example of both follows this list
  • Bounce and load variation tests
  • Breakpoint tests
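
To make the load factors concrete, here is a brief worked example. The average load of 500 concurrent users is an assumed figure for illustration, not one from this article; substitute whatever your usage-based model produced.

```python
# Hypothetical worked example: derive stress-test targets from an assumed
# average load of 500 concurrent users (substitute your own figure).
average_users = 500

busy_hour_users = round(average_users * 1.6)       # 800 users
busy_five_minute_users = round(average_users * 4)  # 2,000 users

print(f"Busy hour target:        {busy_hour_users} users")
print(f"Busy five-minute target: {busy_five_minute_users} users")
```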

For stress tests, it is best not to push the load up too fast. If there is a problem, you may pass the critical point before you realize it and end up obscuring the critical data. The higher you push the volume, the more repeatable the scripts need to be as well.

Before the actual test starts, be sure that you have started all monitoring processes and tools, especially those on the servers and network. Once the load has stabilized, notify those who will be taking or monitoring the response-time measurements.
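
The article does not prescribe a particular monitoring tool. As one possibility, a lightweight sampler like the sketch below (Python with the third-party psutil package, both assumptions on my part) can log timestamped CPU, memory, and network counters on a server so the readings can later be lined up with the load timeline.

```python
# Minimal resource sampler: logs timestamped CPU, memory, and network
# counters to a CSV file so they can be correlated with the load timeline.
# Assumes Python 3 and the third-party psutil package; stop with Ctrl+C.
import csv
import time
from datetime import datetime, timezone

import psutil

with open("server_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_percent", "mem_percent", "bytes_sent", "bytes_recv"])
    while True:
        net = psutil.net_io_counters()
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            psutil.cpu_percent(interval=None),
            psutil.virtual_memory().percent,
            net.bytes_sent,
            net.bytes_recv,
        ])
        f.flush()
        time.sleep(5)  # sample every five seconds; adjust as needed
```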

Depending on the load tool's capabilities, a series of several test types can be run. After the initial load is generated (usually the average load), you can increase the load in increments to stress the system at various levels. This can be done as a single test using a series of increments. I typically use this approach for the final sequence of tests that will be used for reporting results.

For example, a single load test may incorporate several different increments in a single run. Many load tools allow you to select an initial load level, a ramp-up factor, a duration to run each load increment, a scaling factor for the next increment, and a total duration for the test.
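
The article does not name a specific load tool. Purely as an illustration, the sketch below shows how a stepped load of this kind might be expressed in Locust, an open-source load tool whose LoadTestShape hook lets you define the user count and spawn rate for each increment. The target host and the step values are assumptions, not figures from the article.

```python
# Illustrative stepped-load definition for Locust (https://locust.io).
# Each stage holds a user count until its end time; the values below are
# assumed examples, not figures from the article.
from locust import HttpUser, LoadTestShape, task


class WebsiteUser(HttpUser):
    host = "https://system-under-test.example.com"  # hypothetical target

    @task
    def load_home_page(self):
        self.client.get("/")


class SteppedLoad(LoadTestShape):
    # (end time in seconds, user count, spawn rate per second)
    stages = [
        (600, 500, 10),    # average load for the first 10 minutes
        (1200, 800, 10),   # busy hour (1.6x average) for the next 10 minutes
        (1800, 2000, 50),  # busy five minutes (4x average) for the final 10 minutes
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users, spawn_rate in self.stages:
            if run_time < end_time:
                return users, spawn_rate
        return None  # all increments finished; stop the test
```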

Figure 4

When the load has stabilized for each increment, you can gather the response-time measures as well as other system measures, as shown in figure 4. It is critical that all resource monitors and the response-time process coordinate their times with the load in order to ensure that the correct measurements are generated. Measuring during the ramp-up and ramp-down times can alter the results, so the test team has to decide whether to include these measurements.
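
One simple way to keep ramp periods out of the reported numbers is to trim each increment's samples to its steady-state window before computing statistics. The sketch below is a hypothetical illustration using Python and pandas; the file name, column names, and window times are assumptions.

```python
# Hypothetical example: keep only response-time samples taken while the
# load was stable, discarding ramp-up and ramp-down periods.
# Assumes a CSV export with "timestamp" and "response_ms" columns.
import pandas as pd

samples = pd.read_csv("response_times.csv", parse_dates=["timestamp"])

# Steady-state window for one increment (assumed values).
steady_start = pd.Timestamp("2024-05-01 10:05:00")
steady_end = pd.Timestamp("2024-05-01 10:15:00")

steady = samples[(samples["timestamp"] >= steady_start) &
                 (samples["timestamp"] <= steady_end)]

print(steady["response_ms"].describe(percentiles=[0.90, 0.95]))
```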

Once the test run has finished, be sure to wrap up, including the following:

  • Stop the test if it does not stop automatically.
  • Advise all team members that the test has ended.
  • Stop the monitoring tools.
  • Archive the results from the test tools, monitoring tools, and instrumentation, and check twice that the results are saved (a minimal archiving sketch follows this list).
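
How results are archived will depend on your tools. As a hypothetical sketch, a short script that sweeps each tool's output directory into a timestamped archive makes the "check twice" step easy to repeat; the directory names below are assumptions, not paths from the article.

```python
# Hypothetical archiving step: copy each tool's output into a single
# timestamped archive so nothing is lost when the environment is reset.
# Directory names are assumed; substitute your tools' actual output paths.
import shutil
from datetime import datetime
from pathlib import Path

run_id = datetime.now().strftime("run_%Y%m%d_%H%M%S")
archive_dir = Path("archives") / run_id
archive_dir.mkdir(parents=True, exist_ok=True)

for source in ["load_tool_results", "monitoring_logs", "instrumentation_output"]:
    if Path(source).exists():
        shutil.copytree(source, archive_dir / source)

# Zip the whole run and verify the archive actually exists and is non-empty.
zip_path = shutil.make_archive(str(archive_dir), "zip", root_dir=str(archive_dir))
assert Path(zip_path).stat().st_size > 0, "archive was not written"
print(f"Results archived to {zip_path}")
```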

Depending on what was defined at the initial requirements stage, the performance report needs to show:

  • The tests that were executed
  • The loads and mixes that were used for each test type
  • The internal (white box) and external (black box) measurements that were collected
  • Identification of potential problem areas
    • Testers identify problems; they do not correct them. This is why technical people must be part of the team.

All output from the tools must be gathered and reported together. Most commercial tools have some reporting capabilities. Some can show only the data they gathered; others can integrate data from several sources. Reporting is a capability that should be investigated prior to the purchase of a tool. Many tools allow the collected data to be exported for analysis and reporting outside the tool.
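
When a tool can only export raw data, the measurements from different sources can still be combined outside the tool. The sketch below is a hypothetical illustration using pandas to align exported response-time and server-metric CSVs on their timestamps; the file and column names are assumptions.

```python
# Hypothetical example: align two exported CSVs (load-tool response times
# and server metrics) on timestamp so they can be reported together.
# File and column names are assumed.
import pandas as pd

responses = pd.read_csv("response_times.csv", parse_dates=["timestamp"])
metrics = pd.read_csv("server_metrics.csv", parse_dates=["timestamp"])

combined = pd.merge_asof(
    responses.sort_values("timestamp"),
    metrics.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("10s"),  # match each response to the closest sample
)

combined.to_csv("combined_report.csv", index=False)
```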

About the author

Dale Perry

With more than thirty years of experience in information technology, Dale Perry has been a programmer/analyst, database administrator, project manager, development manager, tester, and test manager. Dale's project experience includes large-systems development and conversions, distributed systems, and online applications, both client/server and Web based. He has been a professional instructor for more than fifteen years and has presented at numerous industry conferences on development and testing. With Software Quality Engineering for eleven years, Dale has specialized in training and consulting on testing, inspections and reviews, and other testing and quality-related topics.
