Mukesh Sharma writes that if you want to build an effective performance test strategy involving smart devices, you need to consider rendering times. It's no longer enough to simply measure the response times of desktop web applications as we did in the past. For mobile, the rendering time can make all the difference between a good and a bad user experience.
Performance testing continues to be an important component of a product's overall testing strategy, no matter the type of product or application: web, desktop, hosted, SaaS, cloud powered, and so on. Historically, we have focused on finding and defining accurate server response times to ensure good product performance for our end users. Most open source performance testing tools to date simulate virtual browsers to generate load on the application servers. Having used this approach for several years now at QA InfoTech, we understand that it yields only the response times of the web requests.
For our purposes, let's define response time in simple terms: it is the time from when a user initiates a request to the instant at which the user receives the first or last byte of the response. The response time as measured by these tools doesn't include the rendering time of the web-page response.
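To make the two response-time variants concrete, here is a minimal sketch in Python that times both the first and last byte of a request using only the standard library. The function name `measure_response_times` is my own for illustration; it is not a tool mentioned in the article.

```python
import time
import urllib.request

def measure_response_times(url):
    """Return (time_to_first_byte, time_to_last_byte) in seconds.

    Illustrative helper: the first read(1) waits for the first byte of
    the body; the second read() drains the rest of the response.
    """
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)                          # first byte received
        ttfb = time.monotonic() - start
        resp.read()                           # remaining bytes received
        ttlb = time.monotonic() - start
    return ttfb, ttlb
```

Note that both numbers end when bytes arrive over the network; neither says anything about how long the browser then takes to render the page.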
Some commercial performance testing tools can also measure rendering times along with response times. Of course, these tools support launching and using real browsers instead of virtual browsers. In my experience, however, the use of real browsers for load testing is extremely limited because generating high load with real browsers requires a large test infrastructure. Real browsers measure end-user performance, which includes rendering times, whereas virtual browsers cannot simulate or account for rendering times.
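Tools that drive real browsers typically expose the W3C Navigation Timing marks, from which the server portion and the client-side rendering portion of a page load can be separated. The sketch below assumes those marks have already been captured from a browser (the attribute names `navigationStart`, `responseStart`, `responseEnd`, and `loadEventEnd` come from the Navigation Timing specification; the numeric values are illustrative only).

```python
def split_page_load(timing):
    """Split a page load into server and client portions.

    timing: dict of Navigation Timing marks in milliseconds, with the
    keys 'navigationStart', 'responseStart', 'responseEnd', and
    'loadEventEnd', as captured from a real browser.
    """
    return {
        # time until the last byte of the response arrived
        "server_ms": timing["responseEnd"] - timing["navigationStart"],
        # time the browser then spent parsing and rendering
        "render_ms": timing["loadEventEnd"] - timing["responseEnd"],
        "total_ms": timing["loadEventEnd"] - timing["navigationStart"],
    }

# Illustrative marks: the server answered quickly, but rendering dominated.
marks = {"navigationStart": 0, "responseStart": 180,
         "responseEnd": 240, "loadEventEnd": 1900}
# split_page_load(marks)
# -> {'server_ms': 240, 'render_ms': 1660, 'total_ms': 1900}
```

A virtual-browser tool would report only something like the 240 ms server figure; the 1,660 ms the device spends rendering, which dominates here, would be invisible to it.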
We are all aware of the market penetration of smart devices and the rise in quality assurance efforts to support applications on these devices, especially over the last seven to eight years. While the expansion potential is still huge, smart-device penetration was already a little over 60 percent in the US alone as of June 2013. With such development and advancement of mobile technologies and devices, the classic performance-testing approach of measuring just the response time of your brand-new application may not be sufficient. You may ask, "Why is this the case, especially now?"
The reason is primarily because of heavy application usage on mobile devices, the processing power of such devices, and the growing popularity of rich Internet technologies in application development. Read on to understand these three areas in greater detail.
If you want to build an effective performance test strategy involving smart devices, you need to consider rendering times. Let's assume that your new application for mobile users performs well, meaning that your server-side performance statistics are reasonable. Does that guarantee that your application will perform well on an iPhone 5? If it doesn't, the chances are high that your customers will reject your newly built application. It is important to understand that while the server response time for similar requests from various devices remains the same, the rendering time on different devices and browsers may differ.
In several of our customer assignments in recent years, we have observed heavy use of rich Internet technologies in application development, in which much of the request processing is handled in client-side code. With technologies such as HTML5, it is even more important to measure rendering times.
What makes this mix so complicated is that mobile devices have far less processing capacity than their desktop counterparts. We have carried out specific tests to understand differences in the rendering engines of mobile browsers and incorporate such subtleties into our testing strategy. This is the angle that testers need to understand and measure as part of their performance-testing efforts on mobile devices; they then need to present the results to the product and business teams to determine whether additional application profiling is required.