Lessons Learned in Performance, Stress, and Volume Testing

Member Submitted

Performance, stress, and volume testing is an inexact science. In reality, many companies make educated guesses and assumptions about an application's expected performance that may easily be proven false once the test takes place. On one project I worked on, the NT servers crashed and the application was rendered inoperable at 6:00 a.m., when the first-shift workers were expected to arrive at 7:30 a.m. and would be directly affected. The problem was not resolved until ten hours later, and the company had no contingency plans for such an emergency. It is essential that test managers, test engineers, and support personnel document not only all potential risks but also the mitigation actions that will reduce those risks before starting a stress test.

Knowledge walks out the door: contractors leave the project after one test release, leaving no documentation
Projects often hire contractors for short engagements of two to three months to develop the automated testing scripts, plan the test, execute the scripts, and interpret the results. What projects fail to do is document which scripts the contractors developed, where the scripts are stored, and where the test execution results, graphs, and charts are stored. This becomes a problem when the contractors walk out the door. The idea is to create reusable automated scripts that can be rerun "as is," or with minor modifications, from one release to the next, yet test managers often let the contractors wrap up and exit the project without knowing where the automated scripts were saved. Other times, when questions arise about the test results after the system has been deployed to production, the test manager has no idea what the contractors did with the test execution results and cannot regenerate the test graphs. It is important to document and store automated scripts and results in a central repository or test management tool where the project can access them long after the contractors have left.

No consistent method for tracking expected volumes and business process throughput
Another problem I recall from previous projects is that many projects did not have detailed requirements to help the test engineer create automated test scripts that emulate the expected volumes of transactions and business processes in the production environment for a given number of concurrent users. Other projects relied on interviews with subject matter experts (SMEs) to learn what production volumes needed to be emulated, and this information was kept in emails or personal notes, or not documented at all. To overcome these problems, I suggest that the test engineer create Excel spreadsheet templates for consistently collecting from the subject matter experts, developers, and middleware engineers what the expected volumes of data and traffic will be in the production environment. The test engineer should seek to learn how many concurrent users execute a given scenario or business process, and how many business processes a user executes per hour on a typical day and on a day when production traffic is much greater than usual. For instance, on a typical day a sales entry clerk takes 10 orders for toys per hour, but in the period leading up to the holidays in December the same clerk takes 50 orders per hour.
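To make the arithmetic concrete, here is a minimal sketch (Python, with hypothetical numbers and names; none of this comes from any specific project or tool) of turning such SME estimates into system-wide throughput and per-virtual-user pacing targets:

```python
# Hypothetical sketch: convert SME volume estimates into load-test targets.
# All figures and names below are illustrative assumptions, not real data.

def pacing_seconds(processes_per_user_per_hour: float) -> float:
    """Interval at which each virtual user must start a new iteration."""
    return 3600.0 / processes_per_user_per_hour

scenarios = [
    # (business process, concurrent users, processes per user per hour)
    ("Create sales order (typical day)", 40, 10),
    ("Create sales order (holiday peak)", 40, 50),
]

for name, users, rate in scenarios:
    total_per_hour = users * rate   # system-wide throughput to emulate
    pace = pacing_seconds(rate)     # pacing (including think time) per user
    print(f"{name}: {total_per_hour} orders/hour overall, "
          f"one iteration every {pace:.0f}s per virtual user")
```

The pacing figure is what a load-testing tool typically needs: each virtual user must start a new iteration at that interval to reproduce the rate the SMEs described.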

No record of kilobytes of data transmitted, no network topology diagram
When executing a stress, volume, or performance test, some network segments or routers are at risk of being saturated and thus becoming the bottleneck that distorts the test results. Without a record of how many kilobytes of data each business process transmits, and without a network topology diagram of the test environment, it is difficult to predict which segments will saturate or to diagnose the saturation when it occurs.
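Before the test, a rough back-of-the-envelope check like the one below (a sketch with assumed figures; the per-process payload size would come from exactly the kind of records this section argues for) can flag a segment the planned load is likely to saturate:

```python
# Hypothetical sketch: estimate whether a planned test load will saturate
# a network segment. All capacities and payload sizes are assumptions.

def required_mbps(kb_per_process: float, processes_per_hour: float) -> float:
    """Average bandwidth the emulated load generates, in megabits per second."""
    kilobytes_per_second = kb_per_process * processes_per_hour / 3600.0
    return kilobytes_per_second * 8.0 / 1000.0  # KB/s -> Mb/s

link_capacity_mbps = 10.0  # e.g., a shared 10 Mb/s segment (assumption)
load = required_mbps(kb_per_process=250, processes_per_hour=15000)

utilization = load / link_capacity_mbps
print(f"Estimated load: {load:.2f} Mb/s ({utilization:.0%} of the link)")
if utilization > 0.7:  # a common rule-of-thumb threshold for saturation risk
    print("Warning: this segment is likely to become a bottleneck.")
```

With these assumed numbers the segment would run at roughly 83 percent of capacity, which is worth raising with the network engineers before the test rather than after it.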

About the author

Jose Fajardo

Jose Fajardo (PMP, M.S., and SAP certified) has worked as a test manager for various companies utilizing automated testing tools. He has written and published numerous articles on testing SAP and authored the book Testing SAP R/3: A Manager's Step-by-Step Guide. Throughout his career Jose has helped create testing standards and test plans, mentored junior programmers, audited testing results, implemented automated testing strategies, and managed test teams. Jose can be contacted at josefajardo@hotmail.com.
