Lessons Learned in Performance, Stress, and Volume Testing

[article]
Member Submitted
Summary:

In this article, the author shares insights from previous testing engagements, aiming to help testing professionals make informed decisions before they initiate a performance test for capacity planning, measuring an application's response times, or identifying degradation points, breaking points, and bottlenecks. Flawed testing techniques observed on several projects are pointed out, along with a mitigation path for overcoming them.

Waiting until the 11th hour–Stress/Volume/Performance test execution 4 days before deployment

I was on a project where the GUI (the client that the end user interacts with) for an ERP system was upgraded, and my client wanted to execute performance, volume, and stress tests with an automated load-generation tool four days before the GUI changes were moved to the production environment. The tests revealed that the application with the new GUI, running in a production-like environment, had unacceptable response times, degradation points, and many bottlenecks. The project wanted to repeat the same performance, volume, and stress tests after all the fixes were incorporated into the ERP system, before deploying it to production.

The problem was that it took nearly two working weeks to troubleshoot the ERP problems and introduce the fixes, which delayed the project's deployment schedule and subsequently cost the company tens of thousands of dollars in delaying the release of the new ERP software upgrade. Much time was spent among the various support personnel trying to identify, troubleshoot, and pinpoint the bottlenecks and degradation points within the application. The test engineer, DBA, infrastructure engineer, and middleware engineer had to review multiple graphs and charts from the automated testing tool and other performance-monitoring tools, in addition to troubleshooting and fixing the problems, for several days. As a result, the project director delayed the release and deployment of the software until the application's response times were adequate and in line with the service-level agreements.

My recommendation is to execute and complete any performance, volume, load, soak, or stress test three to four weeks before the actual deployment deadline or release date for migrating the application under test into its final destination, the production environment. Plan to finish the performance/stress/volume tests well in advance of the deployment date, since they may reveal that the system under test has several performance problems that force the testing and support teams to repeat the tests multiple times while troubleshooting and fixing the application under test. In addition, time will be needed to review graphs and interpret results from the various performance-monitoring tools.

Furthermore, additional tasks or support personnel may have to be scheduled for troubleshooting and fixing the system under test if tests need to be repeated. Examples of such additional tasks include:

1. Obtaining more unique data values from the subject matter experts to re-execute the tests with multiple iterations of data for processes that have unique-data constraints
2. Tuning the database
3. Rewriting programs with inefficient SQL statements
4. Upgrading the LAN
5. Upgrading the hardware

For these reasons it is unwise, if not risky, to fail to complete the performance, stress, volume, and load tests with three to four weeks left before the deployment deadline. If you cannot answer the question "What will you do if problems arise out of the performance/stress/volume tests?" because you have an impending deployment deadline or a compressed, unrealistic schedule, you will probably face a tradeoff between deploying a system into production with unacceptable response times and delaying your project's deployment deadline to properly tune your system's performance.
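The pass/fail criterion described above, response times in line with the service-level agreement, can be checked mechanically on each test repetition. The following is a minimal sketch, not any particular commercial tool: it ramps up concurrent "virtual users" with Python's standard library, times each transaction, and compares the 95th-percentile response time against an assumed SLA target. The transaction function and the 0.5-second SLA are placeholders; a real run would invoke the application under test.

```python
# Minimal load-run sketch (hypothetical names and thresholds).
# Times each simulated transaction under concurrency and reports
# whether the 95th-percentile response time meets an assumed SLA.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

SLA_SECONDS = 0.5  # assumed service-level target for response time

def transaction(user_id):
    """Stand-in for one end-to-end business transaction.

    In a real test this would drive the application (e.g., an RFC or
    HTTP call); here it just sleeps briefly so the sketch is runnable.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for real work
    return time.perf_counter() - start

def run_load(concurrent_users, iterations):
    """Run concurrent_users * iterations transactions and return the p95 time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(transaction, i)
                   for i in range(concurrent_users * iterations)]
        timings = [f.result() for f in futures]
    # statistics.quantiles with n=20 yields 19 cut points; index 18 is the p95.
    return statistics.quantiles(timings, n=20)[18]

if __name__ == "__main__":
    p95 = run_load(concurrent_users=5, iterations=4)
    print("95th percentile: %.3fs; within SLA: %s" % (p95, p95 <= SLA_SECONDS))
```

Scripting the SLA check this way means each repeated test cycle produces an objective verdict, so the team reviewing graphs from the monitoring tools can focus on diagnosing why a run failed rather than debating whether it did.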

Missing trial runs (Proof of concept runs)
In one project that I worked on, the SAP project manager wanted to jump straight into executing the performance and stress tests with a maximum load of concurrent users in an environment different from the one where the

About the author

Jose Fajardo

Jose Fajardo (PMP, M.S., and SAP certified) has worked as a test manager for various companies utilizing automated testing tools. He has written and published numerous articles on testing SAP and authored the book Testing SAP R/3: A Manager's Step by Step Guide. Throughout his career Jose has helped to create testing standards and test plans, mentor junior programmers, audit testing results, implement automated testing strategies, and manage test teams. Jose can be contacted at josefajardo@hotmail.com.
