This is a no-holds-barred discussion of common load testing mistakes and their consequences. Load testing can and should be done long before a system has a stable or complete user interface. One reason people often schedule load testing as the final step in a test or development plan is the confusion of load testing with functional testing.
A lot more people talk about it than actually do it. People define it differently. No amount of reading can compare to personal experience. Size does matter.
We are speaking, of course, of load testing.
This article outlines thirteen common load testing mistakes that I have encountered in my work with clients. The focus of this article is Web applications: not Web sites that are just static files and images, but sites that are user interfaces to backend applications (such as a stock trading or credit card authorization system). We also do not discuss "native" client/server systems, though many of the same observations apply.
This list of mistakes is in no particular order (and is certainly not in order of importance).
Confusing Load Testing with Something Else
Load testing is about verifying the performance of a system under a simulated multi-user workload.
Load testing is not functional testing. In fact, the two are far apart along many dimensions: the goals are different; the necessary skills are different; the test scripts are usually different (and far fewer in number, in the case of load testing); the appropriate tool technologies are usually different; and the appropriate stages in the product life cycle are usually different.
Load testing is not about verifying single-user performance. Obviously, end-to-end response time is different when a system is used by a single user than when it is used by many; that is why load testing is necessary, after all. But as shown in Figure 1, the slowdown in each component can vary considerably as the number of users increases. The slowest component in the single-user case may be completely different from the component causing the slowdown when there are multiple users. Furthermore, single-user performance can be dominated by processing delays on the client machine. Those might be of concern, but since there is a client machine for every user, those client-side contributions are irrelevant to scalability.
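To make the idea of a simulated multi-user workload concrete, here is a minimal sketch in Python, using only the standard library. It drives a hypothetical URL with a configurable number of virtual users and records end-to-end response times; the endpoint, the user count, and the single repeated GET are all placeholders, and a real load test would use a dedicated tool and realistic scripts.

```python
import statistics
import threading
import time
import urllib.request

URL = "http://test-server.example.com/quote"   # hypothetical endpoint
VIRTUAL_USERS = 50                              # concurrency level for this run
REQUESTS_PER_USER = 20

response_times = []
times_lock = threading.Lock()

def virtual_user():
    """One simulated user: issue requests and record end-to-end response times."""
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as resp:
            resp.read()                         # consume the whole response
        elapsed = time.perf_counter() - start
        with times_lock:
            response_times.append(elapsed)

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{VIRTUAL_USERS} concurrent users, {len(response_times)} requests")
print(f"median response time: {statistics.median(response_times):.3f} s")
print(f"95th percentile:      {statistics.quantiles(response_times, n=20)[18]:.3f} s")
```

Rerunning the same measurement at, say, 1, 10, 50, and 200 virtual users is what exposes how the slowdown grows; the single-user numbers by themselves say very little about scalability.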
Load testing is not primarily about finding multi-user errors. When multiple users hit a server concurrently, there may be no performance problem, yet the server might do the wrong thing: for example, it might sell multiple users the one remaining inventory item, or it might simply crash. Whether you intend to or not, you will end up doing some such testing in at least an ad hoc fashion as a side effect of load testing. But "real," full-scale multi-user testing typically requires an approach more akin to single-user functional testing, including a way to reliably reproduce race condition scenarios.
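To see why such errors need a deliberately staged test rather than raw load, consider this hypothetical Python sketch, with an in-memory dictionary standing in for the server's inventory. It shows the check-then-act pattern behind the oversell: every simulated buyer passes the stock check before any of them decrements it.

```python
import threading
import time

inventory = {"item-42": 1}   # one unit left in stock
buyers = []                  # users the system "sold" it to

def buy(user, delay=0.01):
    """Check-then-act purchase with no locking: the classic oversell race."""
    if inventory["item-42"] > 0:      # check: stock appears available
        time.sleep(delay)             # stand-in for backend processing time
        inventory["item-42"] -= 1     # act: another user may have bought it by now
        buyers.append(user)

threads = [threading.Thread(target=buy, args=(f"user{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("remaining stock:", inventory["item-42"])  # typically negative
print("item sold to:", buyers)                   # typically several buyers
```

A load test might stumble onto this window occasionally; reproducing it on demand means controlling exactly when the competing requests land, which is a functional-testing discipline rather than a load-testing one.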
Confusing the Web Server with the Web Application
We began this article by saying that we want to test a Web application, not a Web server. The Web server receives HTTP requests and quickly translates them into other kinds of requests to downstream applications. The time spent in the Web server itself should be negligible: well under a millisecond per request. Nearly all of the time is actually spent in the downstream application(s) behind it.
You shouldn't completely bypass the Web server in your testing (in fact, most systems aren't designed with a clean layer underneath that would make that possible anyway). And you do need to make sure the Web server layer can handle the requisite number of concurrent connections; that number grows when responses are large, clients have low-bandwidth network connections, client browsers implement HTTP "keep-alive" but not "pipelining," or the backend is slow.
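As a rough illustration (with invented numbers), Little's Law gives a back-of-the-envelope estimate of how many connections are open at once: the request arrival rate multiplied by the time each connection stays occupied. That is exactly why large responses, slow client links, and a slow backend drive the connection count up.

```python
def concurrent_connections(requests_per_second, seconds_per_connection):
    """Little's Law: average concurrency = arrival rate x time each connection is held."""
    return requests_per_second * seconds_per_connection

# 200 requests/s with snappy 0.2 s responses: roughly 40 connections open at once
print(concurrent_connections(200, 0.2))   # 40.0

# Same arrival rate, but a slow backend and low-bandwidth clients keep each
# connection occupied for 3 s: roughly 600 concurrent connections
print(concurrent_connections(200, 3.0))   # 600.0
```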
But in the end, you are testing the application behind the curtain, not the curtain. This means that while load testing a Web application,