When constructing the initial user stories and tests, we knew that the login process was a critical path and should be included in regression. We had, however, designed the stories around a generic login with test data, since login seemed stable in the test environment. As we later learned, 70 percent of the generated revenue was connected to this line of business, and the production environment was markedly different, which could render some of our test results useless.
In the retrospective meeting, it was clear to me that a few key areas were being underserved, which was creating a growing problem. For one thing, the testing value fell short of the quality need: even the best test results did not accurately predict system behavior or confidently indicate that the business goals would be met in production.
After I digested this premise, the exact root cause of the issues became less relevant, because we had no process that would allow us to detect errors to the left or right of the established regression, which consisted of previously created user stories and tests.
The regression suite was enormously inefficient, taking two to three days to execute, and the thought of relying on such an ineffective, time-expensive process boggled my mind. The production problem turned out to be a service breakdown between the content management system (production instance only) and the middleware, which would never have been caught, given the established coverage gap and the lack of testing in the production environment.
From this, I concluded that there was potential for ongoing defect migration into production, as well as unknown issues already residing in the production environment, both just waiting to be encountered by a customer. We could have used adaptive testing in production, as this method would have focused the effort on quality goals, which in this case were critical process flows, usability, and content.
Using adaptive testing would also have been beneficial in another case of mine, when a financial client added a new online product to their services and rebranded the old content in an effort to create a better customer experience. During this project, two decisions were made: we would use a limited set of test data that represented only a fraction of users and functions, and we would eliminate security testing from the scope of work. The testing was performed by local (decentralized) teams that executed system, integration, performance, and user acceptance tests on their allotted work streams. The end of the project revealed a scattering of moderate to minor defects, which were sanctioned as acceptable for production.
Upon release into production, major processing errors occurred that displayed one account holder's information to another, allowing the wrong account holder to view and change someone else's information. The decisions to remove security testing and to use small-scale data limited the testing, and both were based solely on controlling exorbitant testing costs.
If this team had employed adaptive testing, smaller, more precise tests could have been shared across teams, allowing each team to arrange tests per its unique project need. This would have resulted in higher test efficiency and better test precision based on the goals of data accuracy, security, usability, and content consistency (ensuring consistent styling of visual elements).
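As a rough illustration of what such a shared library of small, goal-tagged tests might look like, the sketch below uses pytest markers; the test names, markers, and checks are hypothetical stand-ins, not the actual suite from this project.

```python
# Hypothetical shared test library: each small, focused test is tagged with
# the quality goal it serves, so any team can pull only the subset it needs.
import pytest

@pytest.mark.data_accuracy
def test_account_balance_matches_ledger():
    # Precise data-accuracy check, reusable against any environment's data set.
    ...

@pytest.mark.security
def test_session_cannot_access_another_accounts_data():
    # Would have exercised the cross-account exposure missed in the second story.
    ...

@pytest.mark.usability
@pytest.mark.content
def test_rebranded_pages_use_approved_styles():
    # Content-consistency check covering the rebranded visual elements.
    ...
```

Each team could then assemble a run matching its own goals, for example `pytest -m "security or data_accuracy"`, instead of executing the entire monolithic regression.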
Both of these experiences stemmed from misplaced testing rigor, or a lack of intelligent test design caused by low business domain knowledge. Adaptive testing would have allowed the teams to focus on the larger goal of the changes and to creatively fashion test solutions by combining and rearranging tests, test types, browser and OS combinations, or hardware configurations, according to the need in different environments.
For example, high-value tests for production may encompass 40 percent usability (of both functions and content), 30 percent interoperability (of critical user flows), 20 percent security (user authentication and data flows), and 10 percent performance. It all depends on the quality goals for that particular test run and environment. The testing value shifts with each change and potentially with each unique need of the code promotion.
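As a minimal sketch of how such a weighting could drive an actual run, the snippet below splits a fixed test budget across goal tags in proportion to the chosen percentages; the weights mirror the illustrative production numbers above, and the budget of 200 tests is an assumption for the example.

```python
# Minimal sketch: allocate a test-run budget across quality goals by weight.
from math import floor

# Illustrative weights for a production run (see the percentages above).
production_weights = {
    "usability": 0.40,         # functions and content
    "interoperability": 0.30,  # critical user flows
    "security": 0.20,          # authentication and data flows
    "performance": 0.10,
}

def allocate_budget(total_tests: int, weights: dict) -> dict:
    """Split a total test budget across goals in proportion to their weights."""
    allocation = {goal: floor(total_tests * w) for goal, w in weights.items()}
    # Hand any remainder from rounding down to the highest-weighted goal.
    remainder = total_tests - sum(allocation.values())
    top_goal = max(weights, key=weights.get)
    allocation[top_goal] += remainder
    return allocation

print(allocate_budget(200, production_weights))
# e.g. {'usability': 80, 'interoperability': 60, 'security': 40, 'performance': 20}
```

A different environment or code promotion would simply swap in a different weight table, which is the point: the composition of the run follows the quality goals rather than a fixed regression script.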