Lessons Learned in Performance, Stress, and Volume Testing

Member Submitted

automated scripts were developed.

The execution environment for the automated scripts in the aforementioned project had some configurations that differed from the development environment where the scripts were created. These environment differences caused many of the automated scripts to fail, and some scripts that did run could do so for only a few emulated end users, because all of the data in the test scripts came from the development environment and some of that data was not recognized in the execution environment.
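One inexpensive way to catch this kind of mismatch before a run is a pre-flight check that compares the data values the scripts depend on against the data actually loaded in the execution environment. The sketch below is illustrative only; the file names and the one-identifier-per-line format are assumptions, not something prescribed by the project described above.

# Hypothetical pre-flight check: flag test-script data values that the
# execution environment does not recognize before the trial runs begin.
# The input files (one identifier per line) are assumed exports, not real artifacts.

def load_ids(path):
    """Read one identifier per line, ignoring blank lines."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def missing_script_data(script_data_file, execution_env_file):
    """Return the script data values that are unknown in the execution environment."""
    return sorted(load_ids(script_data_file) - load_ids(execution_env_file))

if __name__ == "__main__":
    gaps = missing_script_data("script_data_ids.txt", "execution_env_ids.txt")
    if gaps:
        print(f"{len(gaps)} script data values are not recognized in the execution environment:")
        for value in gaps:
            print(" ", value)
    else:
        print("All script data values were found in the execution environment.")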

I have found that a more pragmatic approach, before jumping into a full-blown performance/volume/stress test, is to conduct two trial runs of the automated scripts while the various support personnel monitor the system under test: a first trial run with a load equal to 10% of the expected peak production load, and a second trial run with a load equal to 25% of that peak. With this approach the support personnel can first validate, via proof-of-concept trial runs, that they are able to monitor the various components of the system under test, including the LAN; second, the test engineer can validate that the emulated end users can play back the recorded business processes. Lastly, the trial runs also validate that the application under test receives data from interface programs that run as batch scheduled jobs and generate traffic within the application.
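A minimal sketch of sizing those two trial runs is shown below. Only the 10% and 25% ratios come from the approach described above; the expected peak user count is an assumed figure for illustration.

# Sketch of the trial-run sizing described above: two partial-load runs at
# 10% and 25% of the expected production peak before the full test.
# EXPECTED_PEAK_USERS is an illustrative assumption.

EXPECTED_PEAK_USERS = 500            # assumed peak concurrency in production
TRIAL_PHASES = (0.10, 0.25)          # trial runs at 10% and 25% of peak load

def trial_run_users(peak_users, fraction):
    """Number of emulated end users for a trial run at the given fraction of peak."""
    return max(1, round(peak_users * fraction))

if __name__ == "__main__":
    for fraction in TRIAL_PHASES:
        users = trial_run_users(EXPECTED_PEAK_USERS, fraction)
        print(f"Trial run at {fraction:.0%} of peak load -> {users} emulated end users")
    print(f"Full performance/stress/volume test -> {EXPECTED_PEAK_USERS} emulated end users")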

After the trial runs are successfully completed with a partial load of emulated end users as described above, a decision can be made to initiate the execution of the performance/stress/volume test.

Unnecessary Risks in the Live Production Environment
On one project I saw that the testing team planned to test the application in the live production environment in order to simulate real-world conditions. This practice should be avoided at all costs unless you have the luxury of bringing down an entire live production system without affecting end users or financially harming the company's bottom line.

Testing in a production environment can crash the production environment, introduce false records, corrupt the production database, or violate regulatory constraints. You may not be able to get access to the production environment a second time if the test needs to be repeated. If the production environment is shared and your test crashes it for one application, you can bring down other applications that share the same environment. Finally, you may have to create dummy user IDs to emulate end users in production, which may compromise your application's security. These risks are not all-inclusive; they are simply reasons not to conduct a performance/stress/volume test in a production environment.

A project is better off re-creating an environment that closely mimics or parallels the production environment, with the same number of servers, the same hardware, the same database size, and so on, and using this production-like environment to conduct the performance/stress/volume/load test. If the production-like environment is a virtual image of the production environment, then the tester may even have obviated the need to extrapolate performance results from one environment to the other.
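To see what that extrapolation involves when the environments do not match, consider the naive sketch below. The server counts and throughput figure are assumptions, and the linear scaling is deliberately simplistic; real systems rarely scale this cleanly, which is exactly why a production-like environment is preferable.

# Naive linear extrapolation between a scaled-down test environment and
# production, shown only to illustrate the adjustment a production-like
# environment makes unnecessary. All numbers are illustrative assumptions.

TEST_ENV_SERVERS = 2        # assumed application servers in the test environment
PROD_ENV_SERVERS = 8        # assumed application servers in production
MEASURED_TPS = 120.0        # transactions per second observed in the test environment

def extrapolated_tps(measured_tps, test_servers, prod_servers):
    """Scale measured throughput by the server ratio (an optimistic, linear assumption)."""
    return measured_tps * (prod_servers / test_servers)

if __name__ == "__main__":
    estimate = extrapolated_tps(MEASURED_TPS, TEST_ENV_SERVERS, PROD_ENV_SERVERS)
    print(f"Measured in the test environment: {MEASURED_TPS:.0f} TPS")
    print(f"Linear estimate for production:   {estimate:.0f} TPS (treat with caution)")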

Prevent Data Caching
On some projects I found that, when the performance/stress/load/volume tests were performed, the same set of records was populated and entered into the application by the same automated scripts. This approach causes the data to be cached and served from memory buffers, so the database is never fully exercised.
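One remedy, expanded on in the next paragraph, is to drive the scripts with enough unique data that repeated iterations do not keep hitting the same cached records. The sketch below generates such a data file; the field names and CSV format are hypothetical, not taken from this article.

# Hypothetical generator of unique data rows for data-driven automated scripts,
# so that each emulated user works on distinct records rather than replaying the
# same cached data. Field names (customer_id, order_ref) are assumptions.

import csv
import uuid

def generate_unique_records(count, path="unique_test_data.csv"):
    """Write `count` unique rows for the data-driven performance scripts."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["customer_id", "order_ref"])
        for i in range(count):
            # uuid4 keeps every order reference distinct across rows
            writer.writerow([f"CUST{i:06d}", uuid.uuid4().hex[:12].upper()])
    return path

if __name__ == "__main__":
    print("Wrote", generate_unique_records(1000))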

The creation of automated test scripts with enough unique data records will prevent the data-caching problem from occurring. The test engineer who has experience with the automation tool but lacks

About the author

Jose Fajardo

Jose Fajardo (PMP, M.S., and SAP certified) has worked as a test manager for various companies utilizing automated testing tools. He has written and published numerous articles on testing SAP and authored the book Testing SAP R/3: A Manager's Step by Step Guide. Throughout his career Jose has helped to create testing standards and test plans, mentor junior programmers, audit testing results, implement automated testing strategies, and manage test teams. Jose can be contacted at josefajardo@hotmail.com.
