Brute force performance and scalability testing, also known as "load it up and see what happens," is expensive, slow, and fairly subjective. Conversely, by analysis and modeling, we can predict an application's behavior and generate test cases that can be run to validate the model. So why is it so hard to convince software engineers that this analysis is needed?
Analyze, Model and Predict Before You Test
Warning: the following is an inflammatory opening sentence:
Why is it that many developers don't want to analyze their architectures to predict behavior, but instead expect testers to tell them how the application behaves?
Imagine you want to know how many marbles will fit in a jar. It seems easy enough to buy a bunch of cheap marbles, fill the jar, and count how many it took, right? What if I tell you the jar is the size of Manhattan, the marbles range in size from softballs to basketballs, and each marble costs $50-$1,000? Now it's completely infeasible to use "testing" or "observation" to get your answer. So how do you do it? To start, you'd have to calculate the volume of the jar, the volume and diameter of each marble, the number of each size of marble, and so on. Then factor in that the round shape of each marble precludes you from filling the entire volume of the jar. Yes, it's complicated, but that's the best way to get the answer. After some analysis, you could create a model of the jar and marbles, and write a program to give you an answer for any size jar, any size marble, and any mix of marble sizes. It's kind of like asking, "How much change can I get for a dollar?" You could have 4 quarters, or 10 dimes, or 20 nickels, or 100 pennies; or 3 quarters, 2 dimes, and 1 nickel; or 2 quarters and 5 dimes; and so on. A model could spit out an answer that's as specific as you need it to be.
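The change-for-a-dollar question really can be answered by a small model rather than by enumerating coins by hand. Here's a minimal sketch in Python; the dynamic-programming approach and the function name are my own illustration, not something from the article:

```python
# Count the distinct ways to make change for a given amount (in cents)
# using US coin denominations: pennies, nickels, dimes, quarters.
# ways[a] accumulates the number of coin combinations summing to a.

def count_change(amount, denominations=(1, 5, 10, 25)):
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to make zero: use no coins
    for coin in denominations:
        for a in range(coin, amount + 1):
            ways[a] += ways[a - coin]
    return ways[amount]

print(count_change(100))  # 242 distinct ways to make change for a dollar
```

The same shape of model, with volumes and packing densities in place of coin values, is what you'd build for the jar of marbles: once the model exists, any jar size or marble mix is just another input.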
Now let's talk about software. How many concurrent users does an application support? How many transactions per minute can it process? These are difficult questions to answer because they depend on many factors and require deep knowledge of the code. What is each user doing, and when are they doing it? In other words, do you have usage profiles (do you have a typical mix of each size of marble)? How does a use case or usage profile translate into transactions? What hardware, platform, processor, memory, disk space, network latency, and bandwidth do you have (what is the size and volume of the jar)? To imagine that a tester will make up this information and then "try it to see what happens" is only feasible for the very simplest of applications. The brute force method is just not going to work here.
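To make the point concrete: one standard way to turn a usage profile into a capacity prediction is Little's Law, which says the number of users in the system equals the arrival rate times the time each user spends in it. A back-of-the-envelope sketch, where the thread count, service time, and think time are invented numbers for illustration, not measurements of any real system:

```python
# Back-of-the-envelope capacity model using Little's Law:
# concurrency = throughput * time each user spends in the system.
# All workload numbers below are illustrative assumptions.

def max_throughput(n_workers, service_time_s):
    """Upper bound on transactions/sec if each worker handles
    one transaction at a time."""
    return n_workers / service_time_s

def concurrent_users(transactions_per_sec, think_time_s, service_time_s):
    """Little's Law: users in system = arrival rate * (think + service) time."""
    return transactions_per_sec * (think_time_s + service_time_s)

# Assumed profile: 8 worker threads, 200 ms per transaction,
# and users who think for 5 seconds between requests.
tps = max_throughput(8, 0.2)              # 40 transactions/sec ceiling
users = concurrent_users(tps, 5.0, 0.2)   # about 208 supportable users
print(tps, users)
```

A model this crude won't replace testing, but it tells you whether the answer is "dozens" or "tens of thousands" before you buy a single marble.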
So how do you get the answers you need? What about analyzing the code? Can you look at how you've designed the system and predict what the code can support? Is your application fully multi-threaded? Do you lock transactions for certain functions? Have you done profiling to understand where time is spent and how memory is used? If you use a database, have you analyzed it to understand and predict performance levels? If you've ever discussed this with a developer, you'll understand my next question. Why is it so hard to convince people that modeling is easier than brute force testing and provides a better answer? Is this analysis so very difficult, or do developers just not know how to do it? Do our universities teach this in Computer Science and Software Engineering curricula? If not, can someone please start?
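Profiling is the cheapest of these analyses to start with, and it doesn't require exotic tools. A minimal sketch using Python's standard-library cProfile; the workload function is a stand-in for an application transaction, not anything from the article:

```python
# Profile a toy workload to see where time goes before trying to
# predict capacity. cProfile and pstats ship with Python.
import cProfile
import io
import pstats

def workload():
    # Stand-in for one application transaction: some CPU-bound work.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # top 5 functions by cumulative time
```

Even a profile this simple answers the questions above directly: where the time goes, and which functions would dominate under load.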
Don't get me wrong, there's plenty of work for a tester in this effort. After prediction comes sampling, testing, and validation of your hypotheses. Setting up those test cases and test environments, running the experiments, collecting and analyzing the results: all of this is clearly in the realm of the tester.