An analysis of the queuing model results shows that the linear model accurately matches the queuing model through step 6, where system CPU utilization reaches 87 percent. Most IT shops don't want the system loaded beyond 70 to 80 percent anyway.
This doesn't mean that we need to discard queuing theory and sophisticated modeling tools; we need them when systems are more complex or more detailed analysis is required. But in the middle of a short-term performance engineering project, it may be better to build a simple, back-of-the-envelope model to see whether the system behaves as expected. Most experienced performance engineers build such models subconsciously and, even if they don't write anything down, they still notice when the system doesn't behave as expected.
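To make this concrete, here is a minimal sketch of such a back-of-the-envelope model in Python: exact mean value analysis (MVA) for a closed system with a single queuing resource (the CPU) plus user think time, compared against a naive linear projection. The service demand, think time, and user counts are illustrative assumptions, not figures from the analysis above.

```python
# A back-of-the-envelope comparison of a linear projection with a simple
# queuing model (exact MVA for one queuing station plus think time).
# All parameters are illustrative assumptions.

SERVICE_DEMAND = 0.2  # CPU seconds consumed per request (assumed)
THINK_TIME = 2.0      # seconds a user "thinks" between requests (assumed)

def mva(users, demand=SERVICE_DEMAND, think=THINK_TIME):
    """Exact MVA recursion for a closed model with one queuing station.
    Yields (users, throughput, response time, utilization) per step."""
    queue = 0.0
    for n in range(1, users + 1):
        resp = demand * (1.0 + queue)  # time at the CPU, queuing included
        tput = n / (resp + think)      # interactive response time law
        queue = tput * resp            # Little's law at the CPU
        yield n, tput, resp, tput * demand

for n, tput, resp, util in mva(20):
    # Linear model: assume no queuing, so throughput grows with users forever.
    linear = n / (SERVICE_DEMAND + THINK_TIME)
    print(f"users={n:2d}  util={util:5.1%}  "
          f"linear={linear:5.2f}/s  queuing={tput:5.2f}/s  resp={resp:.2f}s")
```

The two projections stay close at low utilization and diverge only as the CPU approaches saturation, which is why a simple linear model is often good enough below the 70 to 80 percent loading most shops target.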
Running all scripts together makes it difficult to build a model. While you can still make some predictions by scaling the overall workload proportionally, it won't be easy to find where the problem is if something doesn't behave as expected. The value of modeling increases drastically when your test environment differs from the production environment. In that case, it is important to document how the model projects testing results onto the production system.
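As a hypothetical illustration of such documentation, the sketch below projects per-script CPU costs measured in a test lab onto a larger production configuration. The script names, transaction rates, core counts, and scaling discount are all made-up assumptions that would be replaced with measured, documented values.

```python
# Hypothetical projection of test-lab measurements onto production.
# Every number here is an assumption for illustration, not a real measurement.

# CPU seconds per transaction, measured per script in the test environment
cpu_cost = {"search": 0.050, "browse": 0.020, "checkout": 0.120}

# Expected production transaction rates, per second
prod_rate = {"search": 40.0, "browse": 120.0, "checkout": 8.0}

PROD_CORES = 16  # production CPU cores (assumed)
SCALING = 0.9    # discount for imperfect scaling across cores (assumed)

# Total CPU seconds consumed per wall-clock second in production
cpu_seconds = sum(prod_rate[s] * cpu_cost[s] for s in prod_rate)
utilization = cpu_seconds / (PROD_CORES * SCALING)

print(f"Projected production CPU utilization: {utilization:.0%}")
for script in prod_rate:
    share = prod_rate[script] * cpu_cost[script] / cpu_seconds
    print(f"  {script:8s} contributes {share:.0%} of the CPU load")
```

Even a crude projection like this makes the assumptions explicit, so when production behaves differently you know exactly which numbers to revisit.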
Making Performance Testing Agile
Agile software development refers to a group of software development methodologies that promote development iterations, open collaboration, and process adaptability throughout the lifecycle of the project. The same approaches are fully applicable to performance testing projects. Performance testing is somewhat agile by its nature; it often resembles scientific research rather than the routine execution of one plan item after another. Probably the only case where you can really do performance testing in a formal way is when you test a well-known system (some kind of performance regression testing). You can't plan every detail from the beginning to the end of the project—you never know at what load level you'll face problems or what you'll be able to do about them. You should have a plan, but it needs to be very adaptable. Testing becomes an iterative process involving tuning and troubleshooting in close cooperation with developers, system administrators, database administrators, and other experts.
Performance testing is iterative: you run a test and get a lot of information about the system. To be efficient, you need to analyze the feedback you get from the system, make modifications to the system, and adjust your plans if necessary. For example, say you plan to run twenty different tests, and after executing the first one you find a bottleneck (for instance, an insufficient number of web server threads). There is no point in running the other nineteen tests if they all use the web server; doing so would just waste time until you find and eliminate the bottleneck. To identify the bottleneck, the test scenario itself may need to be changed.
Even if the project scope is limited to preproduction performance testing, an agile, iterative approach will help you meet your goals faster and more efficiently and, of course, teach you more about the system along the way. After we prepare a script for testing (or however the workload is generated), we can run it with one user, a few users, and then many users (how many depends on the system), analyze the results (including resource utilization), and sort out any errors. The sources of errors can be quite different: a script error, a functional error, or a direct consequence of a performance bottleneck. It doesn't make much sense to add load until you figure out what is going on. Even with a single script you can find many problems and, at least partially, tune the system. Running scripts separately also allows you to see how many resources each type of load uses and to build some kind of system "model," as sketched below.
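Here is a rough sketch of that one/few/many discipline; run_test() is a hypothetical placeholder for whatever load-generation tool and monitoring you actually use, and the thresholds are assumed examples, not recommendations.

```python
# Incremental ramp for a single script: stop and investigate as soon as
# errors or degradation appear, instead of blindly adding load.
# run_test() and both thresholds are hypothetical placeholders.

ERROR_THRESHOLD = 0.01  # stop if more than 1% of requests fail (assumed)
RESPONSE_SLA = 2.0      # acceptable response time in seconds (assumed)

def run_test(script, users):
    """Placeholder: drive `script` with `users` virtual users and return
    (error_rate, response_time, cpu_utilization) from your monitoring."""
    raise NotImplementedError("wire this to your load-testing tool")

def ramp(script, steps=(1, 5, 20, 50, 100)):
    for users in steps:
        errors, resp, cpu = run_test(script, users)
        print(f"{script}: {users} users -> errors={errors:.1%}, "
              f"resp={resp:.2f}s, cpu={cpu:.0%}")
        if errors > ERROR_THRESHOLD:
            print("Errors appeared: script issue, functional issue, "
                  "or a bottleneck? Investigate before adding load.")
            return
        if resp > RESPONSE_SLA:
            print("Response time degraded: tune before scaling further.")
            return
    print(f"{script} passed all load steps")

# ramp("checkout")  # uncomment once run_test() is wired to your tool
```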
Using the waterfall approach doesn't change the nature of performance testing; it just means that you probably do a lot of extra work and still come back to the same point—performance tuning and troubleshooting—much later in the cycle. Not to mention that large tests using multiple use cases are usually a poor starting point for tuning and troubleshooting: the symptoms you see may be the cumulative effect of multiple issues.
Using an agile, iterative approach doesn't mean that you need to redefine the software development process, but rather that you find new opportunities inside existing processes. I believe that most good performance engineers are already doing performance testing in an agile way but presenting it to management as waterfall (a kind of guerrilla tactic). In most cases, you need to present a waterfall-like plan to management, and then you are free to do whatever is necessary to properly test the system within the scheduled timeframe and scope. If opportunities exist, performance engineering may be extended further, for example, to early performance checkpoints or even full software performance engineering. But don't wait until everything is properly in place; make the best possible effort and then look for opportunities to extend it further.