automated testing tools but may have little or no knowledge of how the application under test actually works or what sets of data are necessary to construct automated scripts. This is especially true of ERP applications such as SAP R/3, where there are multiple environments to log on to, and the data that works in one environment may not work in another. The automated tester may have difficulty creating automated scripts if he/she has no access to the subject matter experts or no opportunity to discuss with them which data sets are valid or how to navigate a particular business process that needs to be automated. It is imperative that the QA manager coordinate with the other team leads to get the test engineer support from the SMEs when questions arise during the creation of automated scripts.
Testing on an unstable environment
Performance/stress/volume tests are not meant to ascertain that the functionality of the application is working properly. The system under test should have been thoroughly tested for functionality before a performance/stress/volume test begins. A performance/stress/volume test helps the test engineer discover bottlenecks, perform capacity planning, optimize the system's performance, etc., when emulated traffic is generated against the application under test. But the functionality of the application should be robust and stable before such a test is initiated.
Many test managers incorrectly assume that because a tester has experience creating automated scripts with an automated tool, the tester can also conduct a performance/stress/volume test. This assumption is erroneous: performance/stress/volume testing is an art that requires a great deal of hands-on experience. It should be conducted by an experienced tester who understands how to generate traffic within an application, understands the risks of the test (such as crashing the application), can interpret the test results, monitors the test, and coordinates the testing effort with multiple parties, since performance/stress/volume testing does not take place in a silo.
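To make the idea of "generating traffic" concrete, here is a minimal sketch of a load-generation loop in Python. It is illustrative only: `call_application` is a hypothetical stand-in (a real test would issue HTTP or RFC calls against the system under test), and the virtual-user and request counts are arbitrary placeholder values.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def call_application():
    """Hypothetical stand-in for one request to the application under test.

    In a real performance test this would be replaced with an actual call
    (e.g., an HTTP request) to the system being exercised.
    """
    time.sleep(0.01)  # simulate server processing time
    return 200

def run_load_test(virtual_users=20, requests_per_user=5):
    """Generate concurrent traffic and collect per-request latencies."""
    latencies = []  # list.append is thread-safe in CPython

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            status = call_application()
            latencies.append(time.perf_counter() - start)
            if status != 200:
                raise RuntimeError("functional failure under load")

    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(user_session) for _ in range(virtual_users)]
        for f in futures:
            f.result()  # surface any errors raised in worker threads

    return {
        "requests": len(latencies),
        "avg_s": statistics.mean(latencies),
        "p95_s": sorted(latencies)[int(0.95 * len(latencies)) - 1],
    }

if __name__ == "__main__":
    print(run_load_test())
```

Even in this toy form, the sketch shows why interpreting results takes experience: the raw latencies must be reduced to meaningful statistics (averages, percentiles) before any conclusion about bottlenecks or breaking points can be drawn.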
Not knowing what will be monitored
Many projects head into a stress test without knowing what will be monitored during the test.
Every project's application has its own nuances, customizations, and idiosyncrasies that distinguish it from other projects. While it would be very difficult to produce a generic list of all the components of an application that need to be monitored during a stress test, it is fair to say that, at a minimum, the following should be monitored: the database, the project's infrastructure (e.g., the LAN), the servers, and the application under test. I advise companies to hold meetings with the various managers, owners, and stakeholders of the application under test to discuss all potential risks and the areas that need to be monitored before conducting a stress test. Once all the areas to be monitored have been identified, create a point-of-contact list with the names of the individuals, their phone numbers, and their tasks during the stress test. Every person associated with the stress test should have his/her role and responsibility clearly defined.
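As one small illustration of server-side monitoring during a stress test, the sketch below samples CPU load and free disk space using only the Python standard library. The threshold values are hypothetical placeholders that a real project would replace with limits agreed in the stakeholder meetings described above; monitoring the database, the LAN, and the application itself would require their own dedicated tooling.

```python
import os
import shutil
import time

# Illustrative thresholds -- the actual limits for a given project should
# come from the pre-test meetings with the application's stakeholders.
MAX_LOAD_PER_CPU = 2.0
MIN_FREE_DISK_FRACTION = 0.10

def sample_host_metrics(path="/"):
    """Take one sample of basic server metrics (Unix-only: os.getloadavg)."""
    load_1min, _, _ = os.getloadavg()
    usage = shutil.disk_usage(path)
    return {
        "timestamp": time.time(),
        "load_per_cpu": load_1min / os.cpu_count(),
        "free_disk_fraction": usage.free / usage.total,
    }

def check_thresholds(sample):
    """Return a list of warnings for any metric outside its agreed limit."""
    warnings = []
    if sample["load_per_cpu"] > MAX_LOAD_PER_CPU:
        warnings.append("CPU load exceeds agreed limit")
    if sample["free_disk_fraction"] < MIN_FREE_DISK_FRACTION:
        warnings.append("Free disk space below agreed limit")
    return warnings

if __name__ == "__main__":
    sample = sample_host_metrics()
    for warning in check_thresholds(sample):
        print("WARNING:", warning)
```

In practice such a loop would run on each server for the duration of the test, with each warning routed to the responsible person on the point-of-contact list.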
No formal testing definitions
Many projects assume that a performance test is the same as a stress test, a load test, a volume test, a soak test, etc. This is a faulty assumption. Test managers should understand the scope of each test they are trying to perform; for instance, a stress test may find your application's breaking points, while a performance test may help a test engineer conduct benchmarking of