Finding avoidable, show-stopping problems when performance testing late in a project is, unfortunately, not uncommon. But Scott Barber says you can save time and money on your software development projects by investigating performance early and validating performance last.
Imagine you are reaching the end of a major software development project. Functional testing is in its final phase and so far hasn't revealed any ship-stopping defects. You have planned and developed your performance tests to validate the requirements you were given, and finally the project is entering two weeks of performance requirements validation, which is anticipated to be the last activity before go-live.
Your first performance test demonstrates that at a ten-user load, the system's response time increases by two orders of magnitude: a page that returns in one second with one user on the system takes one hundred seconds with ten users on the system. The second test shows that at a fifty-user load, the system fails miserably, with Java exceptions prominently displayed on every requested page. Yet the system is intended to support 2,500 simultaneous users!
Sound familiar? That is exactly what happened to me the first time I came onto a project to do performance testing at the end of development rather than at the beginning. In that case, it took eight days to find and fix the issue causing the outright failures (not the response time issue), which left four business days to improve response time and complete the performance validation, assuming, of course, that no additional defects further delayed the validation. Not surprisingly, even after we resolved the response time issue, the system was not even close to meeting the performance requirements. In fact, we determined that the corporate network was inadequate to support the additional bandwidth this application needed. As you can imagine, the product did not go live on the advertised date.
Think about how different this performance testing effort would have been if there had been a plan to determine the actual capacity of the selected server hardware, to verify the available network bandwidth, to execute some preliminary tests on critical functionality, and to shake out configuration errors in the load balancers as each of those items first became available. With such a plan in place, the chaos on the project above would have been avoided entirely. One test, one script, one tester. Four hours, tops, at the beginning of the project, and both the debilitating software defect and the insufficient network bandwidth would have been detected, resolved, and forgotten before anyone had even published a go-live date.
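To make "one test, one script" concrete, here is a minimal sketch of the kind of early smoke test I have in mind, written in Python purely for illustration. The target URL, user count, and slowdown threshold are hypothetical placeholders; the same check could just as easily be a short script in whatever load testing tool your team already uses.

```python
# Minimal early smoke test: compare single-user response time against a small
# concurrent load and report any errors. Purely illustrative; the URL, user
# count, and pass/fail threshold below are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import median

URL = "http://test-env.example.com/critical-page"  # hypothetical critical page
USERS = 10            # small concurrent load, far below the 2,500-user target
MAX_SLOWDOWN = 3.0    # flag anything worse than a 3x slowdown under load

def timed_request(url):
    """Return (elapsed_seconds, status_or_error) for one request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()
            return time.monotonic() - start, resp.status
    except Exception as exc:  # connection errors, HTTP errors, timeouts
        return time.monotonic() - start, repr(exc)

# Baseline: one user, one request.
baseline, status = timed_request(URL)
print(f"1 user: {baseline:.2f}s (status {status})")

# Small concurrent load.
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(timed_request, [URL] * USERS))

times = [t for t, _ in results]
errors = [s for _, s in results if s != 200]
print(f"{USERS} users: median {median(times):.2f}s, max {max(times):.2f}s, "
      f"errors {len(errors)}")

if errors or median(times) > baseline * MAX_SLOWDOWN:
    print("Investigate now, not two weeks before go-live.")
```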
Sadly, testing performance late and finding avoidable, show-stopping problems without enough time to react is not uncommon. Many people have similar stories, which lead some managers to briefly consider spending money to bring the performance tester onto the project early. But there is a big step from "briefly consider spending" to "spending." To take that step, managers need more than stories, especially when they already know that it is virtually pointless to validate performance requirements on a system that is still in flux: every change to the system can cause unexpected performance changes and thus require the validation process to start over. Managers need to know what they will gain from bringing in the performance tester before the software is functionally stable. What value will it add? How will it be planned? What other activities will it impact?
This leads to two basic questions. First, how do we communicate to our managers an approach to early-project performance investigation that gives them confidence we aren't just shooting from the hip, so to speak? Second, how do we demonstrate that our approach will actually reduce the likelihood of late-project performance surprises rather than waste project time chasing shadows that turn out to be nothing more than incomplete areas of the application?