I use an e-commerce sample starter kit for a popular content management system as a source for many of my demos. This starter kit is a wonderful example because it lets me model a great many use cases, such as the following:
- An anonymous user navigates directly to an article on the site’s blog.
- An anonymous user navigates directly to an article on the site’s blog and leaves a comment.
- An existing user navigates to his account profile and opens a support ticket.
- A new user hits the home page and browses to a number of different products while reading a bit about each.
- An existing user hits the home page, logs into his profile, browses to a product, adds it to his shopping cart, and checks out.
- An existing user hits the home page, browses to a product, adds it to his shopping cart, checks out, and is required to log on.
- An existing user hits the home page, browses to a product, adds it to his shopping cart but does not check out, and, instead, leaves his browser open for the rest of the day.
Note the mix of similar but subtly different use cases. Take it a step further and consider data-driving the checkout process so that users ship to different locations with different shipping rates. Ensure that you’re modeling all types of credit transactions, too.
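One way to data-drive a checkout scenario is to parameterize each virtual user from a table of shipping and payment profiles. This is a minimal sketch, not tied to any particular load tool; the column names, destinations, and payment types are hypothetical placeholders for data you would export from your own system:

```python
import csv
import io
import random

# Hypothetical shipping profiles; in a real test these rows would come
# from a CSV file built from production order data.
SHIPPING_DATA = """destination,shipping_method,rate
Portland,ground,5.99
Chicago,two_day,12.50
London,international,24.00
"""

def load_shipping_profiles(text):
    """Parse shipping rows so each virtual user can check out differently."""
    return list(csv.DictReader(io.StringIO(text)))

def pick_checkout_profile(profiles):
    """Assign a random shipping profile and payment type to one virtual user."""
    profile = dict(random.choice(profiles))
    # Model all types of credit transactions, including failures.
    profile["payment"] = random.choice(["visa", "mastercard", "declined_card"])
    return profile

profiles = load_shipping_profiles(SHIPPING_DATA)
user = pick_checkout_profile(profiles)
print(user["destination"], user["rate"], user["payment"])
```

Each simulated checkout then exercises a different shipping rate and payment path instead of replaying one identical transaction.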
Spend time building an accurate, complex mix of your use cases. You’ll find your system behaves very differently in such situations, and this is critical information to have.
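A realistic mix usually means weighting the scenarios rather than running them in equal proportion. The sketch below assigns each new virtual user one of the use cases listed earlier via a weighted random draw; the weights shown are assumptions you would replace with ratios from your own analytics:

```python
import random

# Hypothetical traffic mix for the use cases above, as percentages.
# Replace these weights with ratios observed in your production logs.
USE_CASE_MIX = {
    "anonymous_reads_blog_article": 30,
    "anonymous_leaves_comment": 5,
    "existing_user_opens_support_ticket": 5,
    "new_user_browses_products": 25,
    "existing_user_full_checkout": 15,
    "checkout_then_forced_logon": 10,
    "abandoned_cart_idle_browser": 10,
}

def next_scenario(mix):
    """Pick the scenario the next virtual user will run, honoring the mix."""
    names = list(mix)
    weights = [mix[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Spawn-loop sketch: each simulated user gets a scenario per the weights.
sample = [next_scenario(USE_CASE_MIX) for _ in range(1000)]
```

Most load tools expose the same idea natively (weighted tasks or scenario percentages); the point is that the proportions come from real traffic, not from what is easiest to script.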
Monitoring Across Different Bandwidths
One focus of load testing is to find out where your system tips over or fails—where it stops responding after you’ve thrown an extreme number of concurrent users at it. Ensuring that your site doesn’t fail under expected load is an obvious, critical situation to protect against.
A subtler, but just as important, situation to guard against is poor user experience. The Aberdeen Group’s often-quoted 2008 study found that sites lose 10 to 11 percent of users for every second of delay. For many sites, that translates directly into lost revenue.
Good load testing teams have long understood the need to validate things like response time under load. However, there’s a next step beyond this critical set of data: do you know what your users’ experience is like when they’re behind different bandwidth pipes?
Load testing is generally done on a stable, solid network. The results you get for response times and other end-user experience metrics can vary dramatically from what a user will truly see at his home or office. Subtle and not-so-subtle performance differences show up when you start to gather metrics for user agents behind simulated slower links. The huge pages that loaded responsively over a 1 Gbps Ethernet LAN likely aren’t going to show up so well when viewed from a low-cost neighborhood DSL endpoint.
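A little arithmetic shows why the lab numbers mislead. This sketch computes the raw transfer time for a heavy page over two link speeds, ignoring latency, protocol overhead, and rendering; the 2 MB page size and 3 Mbps DSL figure are illustrative assumptions:

```python
def transfer_seconds(page_bytes, link_mbps):
    """Rough time to move a page over a link, ignoring latency and overhead."""
    bits = page_bytes * 8
    return bits / (link_mbps * 1_000_000)

PAGE = 2 * 1024 * 1024  # a heavy 2 MB page, an assumed example size

lan = transfer_seconds(PAGE, 1000)  # 1 Gbps test-lab Ethernet
dsl = transfer_seconds(PAGE, 3)     # assumed 3 Mbps neighborhood DSL

print(f"LAN: {lan:.3f}s  DSL: {dsl:.2f}s")
```

The same page that moves in well under a tenth of a second on the lab network takes more than five seconds of pure transfer time on the DSL link—before any server-side delay under load is added on top.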
Tools that support these sorts of measurements will give you a much greater awareness of your site’s behavior for all your users, not just in the optimal cases.
Don't Settle for Simplistic Load Scenarios
Getting started with load testing is an important step for your projects. However, speeding through your load efforts can leave your team and stakeholders with an inaccurate picture of your site’s true performance under load.
Take the time to think through what sorts of baseline data sets you need to work with, and spend the time to find or build sets that match your needs. Ensure that your load scenarios use a realistic mix of use cases, roles, and user actions. Finally, think about how to validate end-user experiences as those users actually see them, and not just about metrics that reflect a pristine environment in your network center.
Nailing down these challenges will ensure that you’re giving your team and stakeholders the best possible information on your site’s performance health.