Agile Performance Testing

Summary:
Approaching performance testing with a rigid plan and narrow specialization often leads to testers' missing performance problems or to prolonged performance troubleshooting. By making the process more agile, the efficiency of performance testing increases significantly—and that extra effort usually pays off multi-fold, even before the end of performance testing.

While it is definitely better to build in performance during the design stage and continue performance-related activities through the whole software lifecycle, quite often performance testing happens just before going live, with very little time allocated for it. Still, approaching performance testing formally, with a rigid, step-by-step plan and narrow specialization, often leads to testers missing performance problems altogether or to the prolonged agony of performance troubleshooting. With a little extra effort to make the process more agile, the efficiency of performance testing increases significantly, and this extra effort usually pays off multi-fold, even before the end of performance testing.

Pitfalls of the Waterfall Approach to Performance Testing
As computer systems become more and more complex, the number of people involved in the performance area grows significantly. Performance testing is probably the fastest-growing of these performance-related areas: most large corporations have performance testing and engineering groups today, and performance testing has become a "must" step for getting a system into production. Still, in most cases it is preproduction performance validation only.

Even if it is only a short preproduction performance validation, performance testing is a project in itself, with multiple phases of requirements gathering, test design, test implementation, test execution, and result analysis. So, in most cases, software development methodologies could, with some adjustments, be applied to performance testing.

The waterfall approach to software development is a sequential process in which development is seen as flowing steadily downward (like a waterfall) through the phases of requirements analysis, design, implementation, testing, integration, and maintenance. [1] Being a step in the project plan, performance testing is usually scheduled the waterfall way, where you need to finish one step to start the next. For example, typical steps could be:

  • Get the system ready
  • Develop scripts requested (sometimes offshore)
  • Run scripts in the requested combinations
  • Compare with the requirements provided
  • Allow some percentage of errors according to the requirements
  • Involve the development team if requirements are missed

At first glance, the waterfall approach to performance testing appears to be a well-established, mature process. But there are many serious pitfalls to this approach. Here are some of the most significant:

  • It assumes that the system is completely ready; at minimum, all functional components must be included in the performance test. The direct result of waiting until the system is "ready" is that testing must occur very late in the development cycle, when fixing any problem is very expensive. It is not feasible to perform such full-scope performance testing early in the development lifecycle; if we want to do something earlier, it should be a more agile or explorative process.
  • Performance test scripts that are used to create the system load are also software. Record/playback load testing tools may give the tester the false impression that creating scripts is quick and easy. In fact, correlation, parameterization, debugging, and verification can be quite challenging tasks (see the sketch after this list). Running a script for a single user that doesn't yield any errors doesn't prove much. I have seen large-scale performance testing efforts at a large corporation where none of the script executions actually got through log on (the single sign-on token wasn't correlated), yet performance testing was declared successful and the results were reported to management.
  • Running all scripts together makes it very difficult to tune and troubleshoot. It usually becomes a good illustration of the Shot-in-the-Dark anti-pattern [2], "the best efforts of a team attempting to correct a poorly performing application without the benefit of truly understanding why things are as they are." Otherwise, you need to go back and break the tests apart to find the exact part causing problems. Moreover, tuning and performance troubleshooting are iterative processes, which are difficult to fit into the waterfall approach. And, in most cases, this is not something you can do offline; you need to tune the system and fix the major problems before the results will make sense.
  • Running a single large test (or even a few of them) gives minimal information about the system's behavior. You cannot build any kind of model (either formal or informal), and you will not see any relationship between the workload and the system's behavior. In most cases, the workload used in performance tests is only an educated guess, so you need to understand how stable the system would be and how consistent the results would be if the real workload were somewhat different (a simple load-stepping exploration of this kind is sketched at the end of this section).
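
To make the point about scripts being software more concrete, here is a minimal sketch of what correlation and verification might look like in a load test script. It uses the open-source Locust tool purely for illustration; the endpoints, payloads, field names, and the "Welcome" check are hypothetical, and any real system would need its own correlation logic and verification checks.

```python
# Minimal sketch (hypothetical endpoints and fields): illustrates correlating
# a dynamic sign-on token and verifying that log on actually succeeded.
from locust import HttpUser, task, between


class PortalUser(HttpUser):
    host = "http://localhost:8080"  # placeholder target
    wait_time = between(1, 5)

    def on_start(self):
        # Correlation: capture the token the server returns at run time
        # instead of replaying the stale value captured during recording.
        response = self.client.post(
            "/login", json={"user": "test_user", "password": "secret"}
        )
        self.token = response.json().get("ssoToken")  # hypothetical field name

    @task
    def open_dashboard(self):
        # Verification: an HTTP 200 alone doesn't prove the user is logged on;
        # check the response content before counting the request as a pass.
        with self.client.get(
            "/dashboard",
            headers={"Authorization": f"Bearer {self.token}"},
            catch_response=True,
        ) as resp:
            if "Welcome" not in resp.text:
                resp.failure("log on did not actually succeed")
```

Even a script this small needs debugging and verification: without the content check, a run in which every virtual user fails to log on can still look like a clean, error-free test.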

Doing performance testing in a more agile, iterative way may increase its efficiency significantly.
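
As a small illustration of the iterative idea, one inexpensive first step is to run the same scenario at several load levels and compare throughput and response times, instead of relying on one big test. The sketch below assumes a hypothetical run_test helper that drives whatever load-testing tool is in use; the helper, the user levels, and the interpretation comments are illustrative rather than a real API.

```python
# Sketch: step up the load and watch how throughput and response time scale.
# run_test() is a hypothetical stand-in for driving your load-testing tool;
# it is assumed to return (throughput in requests/s, average response time in s).
def run_test(users: int) -> tuple[float, float]:
    raise NotImplementedError("drive your load-testing tool here")


def load_curve(user_levels=(10, 25, 50, 100, 200)):
    results = []
    for users in user_levels:
        throughput, resp_time = run_test(users)
        results.append((users, throughput, resp_time))
        print(f"{users:>4} users: {throughput:8.1f} req/s, {resp_time:6.3f} s avg")
    # If throughput flattens while response time climbs steeply between steps,
    # the system is saturating -- information a single large test cannot give.
    return results
```

Even this crude stepping gives an informal model of how the system reacts as the load grows, and it shows how far the behavior can be trusted if the real workload turns out to be somewhat different from the educated guess.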

About the author

Alexander Podelko

For the past fifteen years, Alex Podelko has worked as a performance engineer and architect for several companies. Currently, he is consulting member of technical staff at Oracle, where Alex is responsible for performance testing and optimization of Hyperion products. He blogs at http://alexanderpodelko.com/blog and can be found on Twitter as @apodelko.
