I have always been fascinated with creating methods for efficient delivery, particularly during testing. In the 1990s, I was stretching my theories to the brink and loving the ride. The adoption of evolutionary methods brought about many solutions for better efficiency, including the idea of testing smaller, more frequently, and earlier.
In today’s age of automation and complex integrated infrastructures, we often encounter the unresolved issue of how to achieve high-value testing within a condensed time-to-market window.
Automated frameworks and modularized scripts provide a partial solution, but they are not independently intelligent enough to provide consistently high-value or highly efficient testing. To solve this “how,” we need a test selection approach that examines “what” needs to be tested within each unique increment, cycle, or iteration. Every change, whether made for improvement or remediation, presents an opportunity for the software ecosystem (applications, browsers, web services, and vendor software) to fail. This makes it all the more necessary for us to perform high-value testing.
High-value testing does not mean that you need to perform all end-to-end testing or run the full suite of tests; that can create a bottleneck and dampen velocity. Performing high-value testing properly requires a precise and often unique test response for each new change: a medley of testing types, each working in concert to meet the quality goals. This is a modern-day necessity for fully ensuring the end-user experience, ecosystem stability, and product health.
The goal is for you to create an intelligent testing trove (security tests, functional tests, data accuracy tests, performance tests, usability tests, interoperability tests, etc.) that can be readily arranged and rearranged across varying sets of browsers, platforms, and hardware. This variety of intelligent tests scales to varying business goals and marries the quality categories to the unique business requirements to create test goals. The adapting tests are always targeted at the most relevant business and quality goals, which yield the most important results for the team to use in decision making.
One of my recent challenges involved a two-week sprint with thirty-eight backlog items (including requested system changes), most of which were small front-end UI changes to multiple web applications. In this case, the test team executed all the tests and performed regression, and the sprint was given the green light. This was followed by an uneventful implementation.
To our chagrin, the day after implementation we received a call from an executive informing us that one of the two user profiles was redirecting to a broken page after login, and the other had severe performance issues, taking over three minutes to authenticate the user. This particular login page and process had not been changed by the recent implementation, and our previous regression testing had targeted the login process of only one user profile, using test data. We had not tested the less common user profile (which resulted in the broken page).
When constructing the initial user stories and tests, we knew that the login process was a critical path and should be included in regression. But we had designed stories for a general login with test data, since it seemed stable in the test environment. As we learned, however, 70 percent of the generated revenue was connected to this line of business, and the production environment was materially different, which could render some of our test results useless.
In the retrospective meeting, it was clear to me that a few key areas were being underserved, resulting in a growing problem. For one thing, the testing value fell short of the quality need: even our best test results did not accurately predict system behavior or confidently indicate that the business goals would be met in production.
After I digested this premise, the exact root cause of the issues became less relevant, because we didn’t have a process that would allow us to detect errors to the left or right of the established regression, which was composed of previously created user stories and tests.
The regression suite was enormously inefficient, taking two to three days to execute, so the thought of having such an ineffective, time-expensive process boggled my mind. The production problem was revealed to be a service breakdown between the content management system (production instance only) and the middleware, which would never have been caught given the established coverage gap and the lack of testing in the production environment.
From this, I concluded that there was a potential for ongoing defect migration into production, as well as unknown issues already residing in the production environment, both just waiting to be encountered by a customer. We could have used adaptive testing during production, as this method would have created a focus on quality goals, which in this case were critical process flows, usability, and content.
Using adaptive testing would also have been beneficial in another case of mine, when a financial client added a new online product to their services and rebranded the old content in an effort to create a better customer experience. During this project, two decisions were made: we would use a limited set of test data that represented only a fraction of users and functions, and we would eliminate security testing from the scope of work. The testing efforts came from local (decentralized) teams that executed system, integration, performance, and user acceptance tests on their allotted work streams. The end of the project revealed a scattering of moderate to minor defects, which were sanctioned as acceptable in production.
Upon release into production, major processing errors occurred that displayed account information to the wrong users, allowing one account holder to view and change another account holder’s information. The decisions to remove security testing and to use small-scale data had limited the testing, and they were based solely on controlling exorbitant testing costs.
If this team had employed adaptive testing, the use of smaller, more precise tests could have been shared across teams, allowing them to arrange tests per their unique project needs. This would have resulted in higher test efficiency and better test precision based on the goals of data accuracy, security, usability, and content consistency (ensuring consistent styling of visual elements).
Both of these experiences resulted from misplaced testing rigor, or a lack of intelligent test design born of low business domain knowledge. Adaptive testing would have allowed the teams to focus on the greater goal of the changes and to creatively fashion test solutions by combining and rearranging tests, types of tests, browser and OS combinations, or hardware configurations, according to the need in different environments.
For example, high-value tests for production may encompass 40 percent usability (of both functions and content), 30 percent interoperability (of critical user flows), 20 percent security (user authentication and data flows), and 10 percent performance. It all depends on what the quality goals are for that particular test run and environment. The testing value shifts with different changes and potentially with each unique need of the code promotion.
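To make this concrete, here is a minimal sketch of how a goal-weighted selection might work; the weights mirror the example above, while the inventory shape, the priority field, and the select_mix helper are hypothetical assumptions of mine, not a prescribed implementation:

```python
# A minimal sketch of goal-weighted test selection for one run.
# GOAL_WEIGHTS mirrors the production example above; the inventory
# records and their "priority" field are illustrative assumptions.
GOAL_WEIGHTS = {"usability": 0.40, "interoperability": 0.30,
                "security": 0.20, "performance": 0.10}

def select_mix(inventory, budget):
    """Fill each goal's share of the run budget, highest priority first."""
    selection = []
    for goal, weight in GOAL_WEIGHTS.items():
        quota = round(budget * weight)
        candidates = sorted((t for t in inventory if goal in t["goals"]),
                            key=lambda t: t["priority"], reverse=True)
        selection.extend(candidates[:quota])
    return selection

# With a budget of 50 tests, this yields roughly 20 usability,
# 15 interoperability, 10 security, and 5 performance tests.
```

A different change or environment would simply swap in a different set of weights; the selection mechanics stay the same.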
Here are a few ways that I have found to be successful in incorporating adaptive testing methods to gain precision:
1. Become self-adapting
You can break out of the pre-defined test scope by creating versatility and flexibility in your testing suite. You can do this by engineering a flexible framework that allows unique combinations of small executable tests and groupings of test assets based on quality goals. This provides the ability to re-integrate parts and pieces of stories (or test cases) into new, high-value runs. You can define the new executions by precise needs and execute them in combination or independently. The core principle here is the flexibility of test assets, which presents endless options for creative execution.
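As a concrete illustration, here is a minimal sketch of such a framework; the registry, the test decorator, and the sample tests are hypothetical assumptions, not a particular tool:

```python
# A minimal sketch of a flexible, goal-tagged test registry.
REGISTRY = []

def test(*goals):
    """Register a small executable test under one or more quality goals."""
    def wrap(fn):
        REGISTRY.append({"name": fn.__name__, "goals": set(goals), "run": fn})
        return fn
    return wrap

@test("security", "usability")
def login_standard_profile():
    ...  # exercise login with the common user profile

@test("security")
def login_secondary_profile():
    ...  # exercise login with the less common user profile

def compose_run(*goals):
    """Assemble a new, high-value run from any combination of goals."""
    wanted = set(goals)
    return [t for t in REGISTRY if t["goals"] & wanted]

# A security-themed run picks up both tests; a usability-themed run
# picks up only the first. The same assets recombine endlessly.
for t in compose_run("security"):
    t["run"]()
```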
2. Define the test goals
Most testing, whether agile or not, requires pre-planned executions, which are largely categorized as either new-change testing or regression testing of existing functionality. This traditional separation of test effort hinders the creative blending of testing types and methods. With adaptive methods, you can drive the testing based on the goals, regardless of whether the target is new or existing functionality. By combining meaningful tests into a logical flow of quality-based goals, you can test the new delta along with additional “regression” coverage under the theme of the test goal, as in the sketch below. Commonly used goals are usability, integration and interoperability, user and data security, data accuracy, and brand testing.
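If your team happens to use pytest, markers offer one lightweight way to express this; the marker names and test bodies below are my own illustrative assumptions (custom markers should be registered in pytest.ini to avoid warnings):

```python
# A minimal pytest sketch: tests tagged by quality goal, so new-delta
# and existing-functionality tests blend into one goal-themed run.
import pytest

@pytest.mark.usability
def test_checkout_renders_on_tablet():
    ...  # existing functionality, pulled in under the usability goal

@pytest.mark.usability
@pytest.mark.security
def test_new_login_redirect_secondary_profile():
    ...  # new delta for a changed login redirect
```

Running `pytest -m usability` then executes the goal-themed slice regardless of whether each test guards new or existing behavior, and `pytest -m "usability or security"` blends two goals into a single run.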
3. Data-driven adaptation
You should support your test selections with data and analytics: past run metrics, user analytics, and test failure analysis. This allows the team to clearly see testing needs and define test goals. For example, suppose user analytics reveal that 60 percent of your customers use a tablet device to access your site, and 40 percent of existing customers use mobile devices to post product opinions on social media. This data tells you that the tablet presentation (usability and branding) will be important to test, and that the ease of launching to social media from a mobile phone (interoperability and performance) should also be precisely targeted by combining tests that focus on these areas.
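A minimal sketch of that translation, using the numbers from the example (the behavior-to-goal mapping is an assumption of mine):

```python
# Turn observed user behavior into a ranked emphasis on quality goals.
observed_behavior = {
    "tablet_site_access": 0.60,     # share of customers browsing on tablets
    "mobile_social_posting": 0.40,  # share posting to social media via mobile
}

# Which quality goals each behavior puts under pressure (an assumption).
behavior_to_goals = {
    "tablet_site_access": ("usability", "branding"),
    "mobile_social_posting": ("interoperability", "performance"),
}

goal_weight = {}
for behavior, share in observed_behavior.items():
    for goal in behavior_to_goals[behavior]:
        goal_weight[goal] = goal_weight.get(goal, 0.0) + share

# Normalize so the emphasis across goals sums to 100 percent.
total = sum(goal_weight.values())
for goal, weight in sorted(goal_weight.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{goal}: ~{weight / total:.0%} of this run's emphasis")
```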
4. Evergreen maintenance
Continuous integration of the testing baseline is best for adaptive testing, because you can rely on your test selections being relevant rather than outdated. You don’t want several generations of automation or old test cases hanging around that can be inadvertently selected, or rendered ineffective because they are not execution ready. Ongoing fluid development, testing, and test baseline integration (of retrospective feedback, production fixes, planned change, etc.) will decrease the need for large maintenance windows and provide a foundation for continuous testing.
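One lightweight way to enforce this is to flag stale assets during every baseline integration; the record shape and the ninety-day freshness window below are illustrative assumptions:

```python
# A minimal sketch of evergreen maintenance: flag assets that are not
# execution ready or have not run within a freshness window.
from datetime import date, timedelta

FRESHNESS_WINDOW = timedelta(days=90)

def stale(asset, today):
    return (not asset["execution_ready"]
            or today - asset["last_executed"] > FRESHNESS_WINDOW)

baseline = [
    {"name": "login_standard_profile", "last_executed": date(2024, 5, 20),
     "execution_ready": True},
    {"name": "legacy_cart_flow_v1", "last_executed": date(2023, 1, 10),
     "execution_ready": False},
]

for asset in baseline:
    if stale(asset, today=date(2024, 6, 1)):
        print(f"retire or refresh before it can be selected: {asset['name']}")
```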
5. Extend testing to production and beyond
Testing based on adaptive goals is valuable across the entire life cycle and lifespan; however, the biggest benefit can be seen in production. The results of early-cycle, pre-production testing can lead to a high-performing live product. The product owners will thank you for assisting them with customer retention, and the technology staff will thank you for aiding an accelerated discovery, fix, and deploy cycle.
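Production testing should of course stay non-destructive; a minimal sketch of a read-only smoke slice might look like the following, where the base URL and the check list are placeholder assumptions:

```python
# A minimal sketch of a production-safe smoke slice: read-only
# reachability checks only, never data-changing operations.
import urllib.error
import urllib.request

PROD_BASE = "https://example.com"  # placeholder, not a real target

PROD_SAFE_CHECKS = {
    "login_page_reachable": f"{PROD_BASE}/login",
    "account_summary_reachable": f"{PROD_BASE}/account",
}

def reachable(url, timeout=10):
    """Non-destructive GET; anything but a 2xx response is a failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except urllib.error.URLError:
        return False

for name, url in PROD_SAFE_CHECKS.items():
    print(name, "ok" if reachable(url) else "FAILED")
```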
6. Monitor and measure
You should measure test velocity and precision by capturing test execution metrics and comparing them to the test goals and defect types. Production monitoring and issue resolution should be fed into the test baseline and used as a production quality metric; this can identify potential areas of risk and aid test selection. Common metrics that indicate quality and health include the number and criticality of defect hotspots, the time from defect identification to recovery, and the time from test execution to test-goal comparison.
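Two of those metrics are easy to compute once defect records carry timestamps and an area label; the records below are illustrative:

```python
# A minimal sketch: defect hotspots and mean time from identification
# to recovery, computed from illustrative defect records.
from collections import Counter
from datetime import datetime

defects = [
    {"area": "login", "identified": datetime(2024, 6, 1, 9, 0),
     "recovered": datetime(2024, 6, 1, 15, 0)},
    {"area": "login", "identified": datetime(2024, 6, 3, 10, 0),
     "recovered": datetime(2024, 6, 4, 10, 0)},
    {"area": "checkout", "identified": datetime(2024, 6, 5, 8, 0),
     "recovered": datetime(2024, 6, 5, 9, 0)},
]

# Hotspots: which areas accumulate defects, and how many.
hotspots = Counter(d["area"] for d in defects)

# Mean time to recovery, in hours, across all recorded defects.
hours = [(d["recovered"] - d["identified"]).total_seconds() / 3600
         for d in defects]
mttr = sum(hours) / len(hours)

print("defect hotspots:", hotspots.most_common())
print(f"mean time to recovery: {mttr:.1f} hours")
```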
The simple truth
Adaptive test methods create fluid and continuous testing, which in turn produces adaptive patterns and relevant results. Testing can no longer be defined by an inflexible, unchangeable, one-toned function of test execution. What were once called regression, performance, or security tests are now combined needs that can be incorporated into a standard testing process.
This method serves best when done in a lightweight and self-adapting way. Adaptive testing provides nimble test solutions that bend and shift with the changing needs of the market or the environment.