Reduce Regression Issues by Establishing a Mobile Automation Lab

Summary:
If you have a spotty test automation strategy, you may get lots of regression issues every time you have a new release for your mobile app. A mobile device lab to run regular regression tests could be the key. Here's a plan to get a mobile automation lab up and running, as well as some practices that can help reduce the number of regression issues and improve your overall app test strategy.

As I eased into my role of managing the quality program for the mobile app and API platforms at a leading e-commerce startup, I made it a top priority to reduce the number of regression issues reported against our mobile app from production.

As I analyzed this problem, I noticed that every release for our mobile app or one of its dependent backend components brought with it a series of issues that could have been caught by regression tests prior to the release.

A major part of the problem was that our test automation was very scattered and patchy. The app automation in place required days of setup and configuration just to get a few tests working, rendering it ineffective as a tool to run regular regression tests. With no proper device management in place, hunting for devices to run a set of regression tests was another problem we faced.

Keeping in mind the goal of solving these problems and making the use of automation a delightful experience, I set up a mobile device lab to run regular regression tests.

My team started by exploring some popular cloud-based services in this area and ran a few proof-of-concept exercises to gauge their effectiveness for our problem space. We quickly realized that for better control and a faster turnaround for our evolving requirements, a local lab setup would be more effective.

Here is the series of steps we followed to get our mobile automation lab up and running, as well as some practices that helped us reduce the number of regression issues and improve our overall app test strategy.

Preparing the Test Automation Environment

Before we moved into designing the mobile automation lab, we had to take care of hygiene first. We started by optimizing our automation framework to require zero manual intervention for setup.

We introduced a well-defined code branching strategy for the automation team, using Git as the tool of choice. We had been using Git before, but with no proper branching strategy in place, which resulted in chaos at test execution time; simply finding the right branch to point to for a given feature was an exercise in itself. Now this was taken care of.

We also didn’t have a build tool in place, which made compiling and packaging the test code cumbersome, with manual inclusion of dependencies and hunting for the right versions of libraries. We eliminated this by adopting Maven, setting up a POM file and a remote repository. The remote repository stored the required artifacts as well as the built automation package.

We set up a CI build for our test code itself on Jenkins to ensure no automation code check-ins were breaking the test code, assuring a stable, up-to-date test automation package at all times.

With these new practices in place, setting up the test automation environment on lab machines became a painless exercise. In parallel, we started ramping up our test automation coverage across key areas and critical customer flows.
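To give a sense of what those automated flows looked like, here is a minimal sketch of an Appium and JUnit test for a purchase-flow step. The element IDs, package name, APK path, device label, and Appium server address are illustrative assumptions, not our actual framework code.

```java
import io.appium.java_client.android.AndroidDriver;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

import static org.junit.Assert.assertTrue;

public class PurchaseFlowSmokeTest {

    private AndroidDriver driver;

    @Before
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "lab-device-01");          // assumed lab device label
        caps.setCapability("app", "/builds/app-release-5.2.apk");   // assumed APK location
        caps.setCapability("automationName", "UiAutomator2");
        driver = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }

    @Test
    public void addToCartShowsCartBadge() {
        // Walk a key discovery-to-purchase step and assert the expected end state.
        driver.findElement(By.id("com.example.shop:id/search_box")).sendKeys("headphones");
        driver.findElement(By.id("com.example.shop:id/first_result")).click();
        driver.findElement(By.id("com.example.shop:id/add_to_cart")).click();
        assertTrue(driver.findElement(By.id("com.example.shop:id/cart_badge")).isDisplayed());
    }

    @After
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```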

Designing the Mobile Automation Lab

We kept the lab design simple: Jenkins and a host of plugins formed the core execution layer, with one primary job containing the execution logic and a series of upstream jobs tailored for all possible permutations of test suites, device platforms, and versions of the code.

Apart from the regular suite of plugins for Maven, JUnit reports, Git, and Mailer, some of the other plugins we used were Multijob, Extensible Choice Parameter, Blame Upstream Committers, and Parameterized Scheduler.

It was important for us to optimize our schedule and ensure that dependent runs (those executed on the same device) ran sequentially while independent runs executed in parallel. We also had the complexity of multiple upstream jobs triggering a base primary job and passing down parameters with information about the chosen suite, platform, and branch of code. All this was made simple by the use of the Multijob and Extensible Choice Parameter plugins.

While the Parameterized Scheduler plugin helped us trigger scheduled jobs with chosen parameters, the Blame Upstream Committers plugin helped route test result emails to the code committers in the app codebase; this was especially useful for the CI runs.
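To make the parameter hand-off concrete, the sketch below shows one way the primary job's entry point could consume the suite, platform, and branch values passed down by the upstream jobs. The parameter names and suite class names are assumptions for illustration, relying only on the fact that Jenkins exposes job parameters to the build as environment variables.

```java
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class PrimaryJobRunner {

    public static void main(String[] args) throws Exception {
        // Jenkins exposes job parameters to the build as environment variables.
        String suite = System.getenv().getOrDefault("TEST_SUITE", "SmokeSuite");   // assumed parameter name
        String platform = System.getenv().getOrDefault("PLATFORM", "android");     // assumed parameter name
        String branch = System.getenv().getOrDefault("APP_BRANCH", "release");     // assumed parameter name

        System.out.printf("Running %s against %s build of branch %s%n", suite, platform, branch);

        // Resolve the chosen suite to a JUnit suite class and execute it.
        Class<?> suiteClass = Class.forName("com.example.automation.suites." + suite); // assumed package
        Result result = JUnitCore.runClasses(suiteClass);

        // A non-zero exit code marks the Jenkins build as failed.
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}
```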

We also enabled parallel execution, customized emailable report templates, and built a basic user interface and dashboard for triggering tests and viewing reports using Selenium Grid, ExtentReports, and Twitter Bootstrap.
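As a rough illustration of the parallel-execution piece, the following sketch points test sessions at a Selenium Grid hub so the grid can route each session to a registered Appium device node, which is what allows independent suites to run side by side. The hub address and capability values are assumptions.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class GridSessionFactory {

    private static final String HUB_URL = "http://grid-hub.internal:4444/wd/hub"; // assumed hub address

    /** Create a session via the grid hub; the hub matches capabilities to a registered device node. */
    public static AndroidDriver createSession(String deviceName, String appPath) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", deviceName); // matched against the node's registered device
        caps.setCapability("app", appPath);
        caps.setCapability("automationName", "UiAutomator2");
        return new AndroidDriver(new URL(HUB_URL), caps);
    }
}
```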

Establishing Nightly Test Automation Runs

As I mentioned earlier, any change to one of the dependent backend components or the aggregating API layer behind the mobile app brought with it a series of regression issues that required either patches for the released versions of the mobile app or a rollback of the component just released. This mandated regular regression tests against the customer-facing versions of the mobile app already in the market.

To optimize our regression testing against released versions, we looked at our usage analytics and noticed that around 80 percent of our active user base was on the last five versions of our app, a pattern that more or less held with each app update we released. This was the best starting point to make the lab operational, so we quickly hooked up a small cache of devices and set up nightly automation jobs on Jenkins. We started running some of our core automation suites against the last five versions of the app to catch any front-end regression issues caused by any of the backend or API releases during the day.
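One simple way to drive such version-targeted runs is to let each nightly Jenkins job pass the target app version into the suite as a system property, along the lines of this sketch; the property names, default values, and artifact paths are illustrative assumptions.

```java
public final class AppUnderTest {

    private AppUnderTest() { }

    /** Resolve the APK for this run from job-supplied parameters, with a sensible default. */
    public static String apkPath() {
        String version = System.getProperty("appVersion", "latest");        // e.g. one of the last five versions
        String basePath = System.getProperty("apkRepo", "/artifacts/apks"); // assumed artifact location
        return basePath + "/app-release-" + version + ".apk";
    }
}
```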

Another key area that was frequently overlooked, yet formed a large chunk of the issues reported from production, was the performance and responsiveness of the mobile app.

We enabled nightly runs gathering responsiveness data for each of the key screens in the discovery and purchase flows—the user flows most employed by our customers when using the mobile app.

We compared page load times against our competitors and our own last five versions of the app. We also baselined our responsiveness against internal benchmarks set using closed user group studies. We looked for any performance degradation and usability issues around the smoothness of user interaction.
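The sketch below shows the kind of screen-responsiveness probe such runs can collect: the time between triggering navigation and a key element of the next screen becoming visible. The locators, timeout, and method names are assumptions for illustration.

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

import java.time.Duration;

public class ScreenTimer {

    /** Tap the trigger element and return milliseconds until the next screen's anchor element is visible. */
    public static long timeNavigation(AndroidDriver driver, By trigger, By nextScreenAnchor) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(30));
        long start = System.currentTimeMillis();
        driver.findElement(trigger).click();
        wait.until(ExpectedConditions.visibilityOfElementLocated(nextScreenAnchor));
        return System.currentTimeMillis() - start;
    }
}
```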

We also started measuring metrics around battery drain, mobile data consumed, and CPU and memory consumption.
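These device-level metrics can be sampled alongside the functional runs by shelling out to standard Android tooling such as adb and dumpsys, roughly as in this sketch; the device serial, package name, and downstream parsing are illustrative assumptions.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class DeviceMetrics {

    /** Run an adb shell command against a specific device and capture its output lines. */
    public static List<String> adbShell(String deviceSerial, String command) throws Exception {
        Process process = new ProcessBuilder("adb", "-s", deviceSerial, "shell", command)
                .redirectErrorStream(true)
                .start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        process.waitFor();
        return lines;
    }

    public static void main(String[] args) throws Exception {
        // Example probes for one lab device; the raw output would be parsed and pushed to the results store.
        List<String> battery = adbShell("emulator-5554", "dumpsys batterystats --charged com.example.shop");
        List<String> memory = adbShell("emulator-5554", "dumpsys meminfo com.example.shop");
        System.out.println("battery lines: " + battery.size() + ", meminfo lines: " + memory.size());
    }
}
```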

Enabling CI and On-Demand Runs

While we wanted to ensure our live apps were not adversely affected by changes to our API layer and backend components, we also wanted to ensure high-quality releases for the versions under development. For this, we built a robust set of smoke and regression suites and enabled continuous integration (CI) runs against branches scheduled for release.

With the repository polled at a regular interval, any check-in to a release branch spun off a build; the app was posted to a location where our automation framework picked it up and triggered a sanity suite. Any test failures were communicated to the committers and the team.

To make the lab and automation suites useful for developers, we also enabled on-demand runs.

With many teams working in parallel on the app codebase and running different kinds of experiments, there was a need to self-validate any possible regressions in the impacted areas. This was especially true for some user growth features that required frequent and fast changes.

Developers used to need QA bandwidth for this, but now they could simply specify their branch and choose from a range of automation suites to run automated regression tests on the desired platform. The app would be built from the chosen branch, posted to the location where the automation framework picked it up, and run through the selected regression suite.

With on-demand runs enabled, executing an automation suite against a given branch became a frictionless experience, improving developer productivity and time to market for some of the app's most exciting features.

Tracking Progress with Daily Reports

With more than two thousand automated tests running every night, we needed a way to consolidate the results and present a summary of findings each morning.

We pushed all execution results over to a database and ran cron jobs that consolidated the results for each night and sent an app health report every morning to all the stakeholders.
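A minimal sketch of that publishing step, assuming a simple relational schema, might look like the following; the table, column names, connection URL, and credentials are illustrative, not our actual setup.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.time.Instant;

public class ResultPublisher {

    private static final String JDBC_URL = "jdbc:mysql://results-db.internal:3306/automation"; // assumed

    /** Insert one test result row; the nightly cron job aggregates these into the morning health report. */
    public static void publish(String suite, String appVersion, String platform,
                               String testName, boolean passed, long durationMs) throws Exception {
        String sql = "INSERT INTO test_results "
                + "(suite, app_version, platform, test_name, passed, duration_ms, executed_at) "
                + "VALUES (?, ?, ?, ?, ?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "automation", "secret");
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, suite);
            stmt.setString(2, appVersion);
            stmt.setString(3, platform);
            stmt.setString(4, testName);
            stmt.setBoolean(5, passed);
            stmt.setLong(6, durationMs);
            stmt.setTimestamp(7, Timestamp.from(Instant.now()));
            stmt.executeUpdate();
        }
    }
}
```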

We also developed a dashboarding capability for anyone to query and gather reports for chosen areas of interest against any given version of the app and platform.

Realizing Success

Once our mobile device lab was operational, we brought down the app regression issues leaking into production from an average of 2.5 per release to zero, as monitored over six releases in a month-long period. App responsiveness improved only marginally, but we now had a clear backlog of improvements to pursue, courtesy of the visibility the performance runs in the lab provided. The pace of experimental features rose from an average of less than one experiment per release to three experiments with each app update, a change made possible by developers self-validating potential regression issues.

There are many different ways to build a mobile automation lab, but the lab itself is not an end; it's a means to an end. The steps mentioned here can be taken as proven practices to put the lab to optimal use in order to reduce your app’s regression issues. An organized approach can make a real difference to your test strategy.

