Sometimes the features of a tool (collecting data, paths, and statistics) are available but not configured with testing in mind, and you may have to negotiate with the team that owns the tool to expand its current capabilities. Most web-analytics tools can correlate these metrics with various business and financial figures, and this detailed information is extremely useful for proactive web-application optimization and predictability. Hence, business and technical stakeholders increasingly use web-analytics systems to manage, measure, and analyze the end-user experience and the risk to the business. With this information, the test team can make truly risk-based decisions about how robust testing really needs to be, and once the data is quantified, management can make risk-based decisions of its own. Naturally, without a test-case-management tool, none of this can be accomplished effectively or efficiently.
The cold, hard facts about website functionality (that is, interaction and performance) make up the quantified data that articulates the business risk of tested and untested code in production, and every test team should know which code is which. The complex, web-based delivery model significantly increases the effort required for testing. With increasing time-to-market pressures and shrinking release schedules, a successful web-application testing strategy boils down to prioritizing effort based on quantified risk to the business.
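One way to express that quantified risk is as the share of production traffic flowing through workflows that have no test coverage. The sketch below is a minimal illustration of the idea; the path names, visit counts, and the set of tested paths are invented for the example, not taken from any real analytics export.

```python
# Hypothetical sketch: business risk expressed as the fraction of production
# traffic that exercises untested workflows. All data here is illustrative.

def untested_traffic_share(visits_by_path, tested_paths):
    """Return the fraction of total visits hitting paths with no test coverage."""
    total = sum(visits_by_path.values())
    untested = sum(count for path, count in visits_by_path.items()
                   if path not in tested_paths)
    return untested / total if total else 0.0

# Invented analytics numbers: visits per workflow over some reporting period.
visits = {"/checkout": 52000, "/search": 31000, "/account": 9000, "/help": 3000}
tested = {"/checkout", "/search"}  # workflows currently covered by tests

print(f"{untested_traffic_share(visits, tested):.1%} of traffic is untested")
```

A number like this turns "we didn't have time to test the account pages" into a concrete, defensible statement of exposure that management can weigh.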
Web Application Testing Challenges
In addition to the routine challenges associated with testing, web-application testing introduces some additional nuances, starting with a large number of application-usage workflows. The hyper-contextual nature of web applications produces a potentially large number of workflows for even simple scenarios. Testing every workflow permutation and combination can be complex and costly, so using web analytics to learn where diminishing returns set in adds real value. A diverse user base introduces further complexity into the overall test approach: not knowing the user types, their hardware and software configurations, their networks or ISPs, their connectivity speeds, or their browsers compounds the problem. The testing organization must fully understand how these unknowns affect the complexity, number, and permutations of test cases, as well as the effort and cost of attaining the level of quality that the business owner expects. The testing team must ensure that all of these dimensions are accounted for as part of the overall RBT process.
A common criticism of regression testing is that it takes too much time, and as application functionality is enhanced, the regression suite grows with it. A test team that cannot quantify the number of tests needed for adequate (risk-mitigating) coverage will struggle to justify the duration and effort. When it comes to exploratory testing, the analytical data ensures that the team is not wasting valuable time on the least-travelled paths. "Information is power" is an understatement of the value that analytics provide to the business and technology teams. For testing, this information can help validate use cases and business scenarios, test-case-development techniques, and test-case coverage and depth across exploratory, regression, and user-acceptance testing, all under the RBT umbrella.
Prioritizing business workflows and test scenarios based on analytics metrics should also be compared against the product owners' view of the high-priority requirements; there may be surprises about what users actually do most often. Testers benefit from the ability to identify, tag, and execute the most common paths and configurations, which increases effectiveness and confidence that test coverage adequately reflects users' utilization patterns. One cannot test every combination or path (the law of diminishing returns applies), but ensuring that the most important, most-used features are tested under the most common configurations should lead to greater success for the business. All of the above applies equally to testing mobile applications: if you add every combination of mobile device, model, and operating system to the test bed without some level of analytics about your user base, your costs, effort, and risks will increase exponentially.
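The prioritization described above can be sketched as a simple Pareto-style cut: rank workflows by observed traffic and keep the smallest set that covers a target share of it, treating everything below the cut as a candidate for reduced attention. The workflow names, visit counts, and the 80% target in this sketch are all assumptions chosen for illustration; a real team would pick the threshold with the product owners and compare the ranked list against their stated priorities.

```python
# Hypothetical sketch: rank workflows by analytics visit counts and keep the
# smallest set covering a target share of traffic (a diminishing-returns cut).
# Workflow names, counts, and the 80% target are invented for illustration.

def priority_paths(visits_by_workflow, coverage_target=0.80):
    """Return the highest-traffic workflows covering `coverage_target` of visits."""
    total = sum(visits_by_workflow.values())
    ranked = sorted(visits_by_workflow.items(),
                    key=lambda item: item[1], reverse=True)
    selected, covered = [], 0
    for workflow, count in ranked:
        selected.append(workflow)
        covered += count
        if covered / total >= coverage_target:
            break
    return selected

usage = {"browse-and-buy": 48000, "search-only": 27000,
         "account-update": 8000, "gift-card-redeem": 2000}
print(priority_paths(usage))
```

Comparing a list like this with the product owners' own ranking is where the surprises surface: a feature the business considers marginal may turn out to carry a large share of real traffic, or vice versa.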
A bad user experience can negatively affect revenue in both the short and the long term; missed opportunities and the loss of future revenue streams can be a deadly combination for the survival of the business. As the world continues to embrace increasingly feature-rich, web-based products, technology delivery organizations must learn how to mitigate risk in the pursuit of their goals and objectives. Testing efficiency and effectiveness lead to a positive experience for the end user, and the combination of analytics tools and RBT makes both achievable.
As a tester, make sure that the information used to demonstrate that a satisfactory level of risk has been mitigated is quantified and measurable; analytics support the RBT strategy and plan in exactly this manner. When testing costs are challenged or time constraints limit the ability to deliver a high-quality product, one can turn to web analytics and clearly quantify the risk of not testing. At the end of the day, effective analysis of web-analytics metrics provides deep insight into the overall web-application-usage landscape. Effectively leveraged, this is a gold mine of information for deducing, reducing, and prioritizing overall testing efforts from a risk-based approach.