I have always been fascinated with creating methods for efficient delivery, particularly during testing. In the 1990s, I was stretching my theories to the brink and loving the ride. The adoption of evolutionary methods brought about many solutions for better efficiency, including the idea to test smaller, more frequently, and earlier.
In today’s age of automation and complex integrated infrastructures, we often face an unresolved question: how do we achieve high-value testing within a condensed time-to-market window?
Automated frameworks and modularized scripts provide a partial solution, but they are not independently intelligent enough to provide consistently high-value or highly efficient testing. To solve this “how,” we must first examine “what” needs to be tested within each unique increment, cycle, or iteration, and select tests accordingly. Every change, whether made for improvement or remediation, presents an opportunity for the software ecosystem (applications, browsers, web services, and vendor software) to fail. This greatly increases our need to perform high-value testing.
High-value testing does not mean performing all end-to-end testing or running the full suite of tests; that can create a bottleneck and dampen velocity. Performing high-value testing properly requires a precise and often unique test response for each new change: a medley of testing types, each working in concert to meet the quality goals. This is a modern-day necessity to fully ensure the end-user experience, ecosystem stability, and product health.
The goal is for you to create an intelligent testing trove (security tests, functional tests, data accuracy tests, performance tests, usability tests, interoperability tests, etc.) that can be succinctly arranged and rearranged across varying sets of browsers, platforms, and hardware. This variety of intelligent tests is scalable to varying business goals and marries the quality categories to the unique business requirements to create test goals. The adapting tests are always targeted at the most relevant business and quality goals, which yield the most important results for the team to use for decision making.
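One way to picture such a trove is as a tagged collection that can be filtered per change. The sketch below is purely illustrative, assuming a hypothetical `Test` record and `select_tests` helper (none of these names come from a real framework): each test carries a quality category and the environments it can run on, and a selection function arranges the right subset for a given sprint's goals.

```python
# Hypothetical sketch of an "intelligent testing trove": tests tagged by
# quality category and runnable environment, filtered per change.
# All names and sample data here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Test:
    name: str
    category: str            # e.g. "security", "performance", "functional"
    environments: frozenset  # browsers/platforms the test can run on

TROVE = [
    Test("login_functional", "functional", frozenset({"chrome", "firefox"})),
    Test("login_latency", "performance", frozenset({"chrome"})),
    Test("checkout_security", "security", frozenset({"chrome", "safari"})),
]

def select_tests(goals, environment):
    """Return the tests that serve the current quality goals and
    can run in the requested environment."""
    return [t for t in TROVE
            if t.category in goals and environment in t.environments]

# A UI-focused sprint verified on Chrome: functional and performance only.
selected = select_tests({"functional", "performance"}, "chrome")
print([t.name for t in selected])
```

The same trove can then be rearranged for a different sprint simply by changing the goal set or the target environment, rather than rewriting the suite.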
One of my recent challenges involved a two-week sprint with thirty-eight backlog items (including requested system changes), most of which were small, front-end UI changes to multiple web applications. In this case, the test team executed all the tests and performed regression, and the sprint was given the green light. This was followed by an uneventful implementation.
To our chagrin, the day after implementation we received a call from an executive informing us that one of the two user profiles was redirecting to a broken page after login, and the other had severe performance issues, taking over three minutes to authenticate the user. The login page and process had not been changed by the recent implementation, and our previous regression testing had targeted the login process of only one user profile using test data. We did not test the less common user profile (which resulted in the broken page).
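A lesson from this incident can be sketched as a login check parameterized over every supported profile, so an uncommon profile is never silently skipped. This is a minimal, assumed example: the `login_as` stub, profile names, and latency threshold are all hypothetical stand-ins for a real browser-driven flow.

```python
# Hypothetical sketch: run the same login checks over every user profile,
# verifying both the landing page and the authentication time.
# login_as is a stub; names and thresholds are illustrative assumptions.
import time

PROFILES = ["standard_user", "uncommon_user"]  # include every supported profile
MAX_LOGIN_SECONDS = 5.0                        # assumed performance budget

def login_as(profile):
    """Stub for the real login flow: returns (landing_page, elapsed_seconds)."""
    start = time.perf_counter()
    landing = "/dashboard"   # a real check would drive the browser here
    return landing, time.perf_counter() - start

failures = []
for profile in PROFILES:
    page, elapsed = login_as(profile)
    if page != "/dashboard":
        failures.append(f"{profile}: redirected to {page}")
    if elapsed > MAX_LOGIN_SECONDS:
        failures.append(f"{profile}: login took {elapsed:.1f}s")

print(failures)  # an empty list means every profile landed correctly and quickly
```

Had our regression suite looped over both profiles this way, the broken page and the three-minute authentication would have surfaced before release rather than on an executive's phone call.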