2) Making sure that the system under test is properly configured and that the results obtained can be applied (or at least projected) to the production system.
Environment and setup-related considerations can have a dramatic effect. Here are a few:
- What data are used? Are they real production data, artificially generated data, or just a few random records? Does the volume of data match the volume forecasted for production? If not, what is the difference?
- How are users defined? Do you have an account set up with the proper security rights for each virtual user, or do you plan to re-use a single administrator ID? (A minimal sketch of per-user credentials follows this list.)
- What are the differences between the production and the test environments? If your test system is just a subset of the production system, can you simulate the entire load or only a portion of it? Is the hardware the same?
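As one illustration of the virtual-user question above, a load script can draw credentials from a pool of prepared test accounts instead of hard-coding a single administrator login. The sketch below assumes a hypothetical accounts.csv file, /login endpoint, and test host; it is an illustration, not a recommendation of any particular tool.

```python
# Minimal sketch: give each virtual user its own account instead of
# re-using one administrator ID. The file name, endpoint, and host are
# hypothetical placeholders for illustration only.
import csv
import threading
import requests

HOST = "https://test.example.com"  # assumed test-environment host


def load_accounts(path="accounts.csv"):
    """Read user/password pairs prepared for the test environment."""
    with open(path, newline="") as f:
        return [(row["user"], row["password"]) for row in csv.DictReader(f)]


def virtual_user(user, password):
    """One virtual user: log in with its own credentials, then run the scenario."""
    session = requests.Session()
    resp = session.post(f"{HOST}/login", data={"user": user, "password": password})
    resp.raise_for_status()
    # ... scenario steps for this virtual user would follow here ...


if __name__ == "__main__":
    threads = [threading.Thread(target=virtual_user, args=acct)
               for acct in load_accounts()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```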
It is important to get the test environment as close as possible to the production environment, but some differences may still remain. Even if we were to execute the test in the production environment with the actual production data, it would only represent one point in time; other conditions and factors would also need to be considered. In "real life," the workload is always random, changing from moment to moment and including actions that nobody could even guess.
Performance testing isn't an exact science. It is a way to decrease the risk, not to eliminate it completely. Results are only as meaningful as the test and the environment you created. Usually performance testing has limited functional coverage, no emulation of unexpected events, and so on. Both the environment and the data are often scaled down. All these factors confound the straightforward approach to performance testing, which states that we simply test X users simulating test cases A and B. This leaves a lot of questions, for example: How many users can the system handle? What happens if we add other test cases? Do ratios of use cases matter? What if some administrative activities happen in parallel? All these questions require some investigation.
Perhaps you even need to do some investigation to understand the system before you start creating performance testing plans. Performance engineers sometimes have system insights that nobody else has, for example: internal communication between client and server (if recording is used), timing of every transaction (which may be broken down to specific requests and sets of parameters if needed), and resource consumption of a specific transaction or set of transactions.
This information is actually additional input to test design; often the original test design is based on incorrect assumptions and needs to be corrected in light of the first results.
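To make the "timing of every transaction" insight concrete, here is a minimal sketch of collecting per-transaction response times during a scenario run. The transaction names and URLs are hypothetical, and a real load testing tool would normally gather these numbers for you; the point is only that such timing data becomes additional input to test design.

```python
# Minimal sketch: record the timing of every transaction in a scenario.
# Transaction names and URLs are hypothetical placeholders; load testing
# tools typically collect these measurements automatically.
import time
import requests


def timed_transaction(name, method, url, timings, **kwargs):
    """Run one request and store its elapsed time under a transaction name."""
    start = time.perf_counter()
    response = requests.request(method, url, **kwargs)
    elapsed = time.perf_counter() - start
    timings.setdefault(name, []).append(elapsed)
    return response


if __name__ == "__main__":
    timings = {}
    timed_transaction("open_home", "GET", "https://test.example.com/", timings)
    timed_transaction("search", "GET", "https://test.example.com/search",
                      timings, params={"q": "laptop"})
    for name, samples in timings.items():
        print(f"{name}: avg {sum(samples) / len(samples):.3f}s "
              f"over {len(samples)} calls")
```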
There are very few systems today that are stateless, serve static content, and use plain HTML, the kind of system that lends itself to a simplistic record/playback approach. In most cases there are many stumbling blocks on the way to creating a proper workload, starting with the approach used to create it: the traditional record/playback approach simply doesn't work in many cases. If you are seeing the system for the first time, there is no guarantee that you can quickly record and play back scripts to create the workload, if at all.
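One common reason recorded scripts fail on playback is that dynamic values, such as session identifiers or tokens, captured at record time are no longer valid when the script is replayed. The usual remedy is correlation: extracting the value from a live response instead of replaying the recorded one. The sketch below illustrates the idea; the URLs, form fields, and token pattern are hypothetical placeholders.

```python
# Minimal sketch of "correlation": instead of replaying a token that was
# captured at record time (and has since expired), extract a fresh one from
# the live response. URLs, field names, and the token pattern are
# hypothetical placeholders.
import re
import requests

HOST = "https://test.example.com"  # assumed test-environment host

session = requests.Session()

# Step 1: request the page that issues the dynamic value.
login_page = session.get(f"{HOST}/login")

# Step 2: extract the dynamic value from the response instead of
# hard-coding the one captured during recording.
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page.text)
if match is None:
    raise RuntimeError("csrf_token not found; the correlation rule needs updating")
token = match.group(1)

# Step 3: pass the freshly extracted value in the next request, as a real
# browser would.
resp = session.post(f"{HOST}/login",
                    data={"user": "test_user1", "password": "secret",
                          "csrf_token": token})
resp.raise_for_status()
```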