Who Should Set Up Continuous Integration for Automated Tests?

Summary:
If you want to trigger long-running, end-to-end automated tests, you must integrate the test execution system with the continuous integration system. But this job falls in a fuzzy area at the nexus of feature development, test automation development, quality assurance, and build and release engineering. Here's how to decide who should be responsible for the setup.

Automated tests can be triggered manually as needed by a developer, a QA manager, or anyone else on the dev team to complement manual testing. That works, but it is much better to have automated tests running automatically, without the need to trigger them by hand. This way, you can be assured that regression testing takes place frequently, there is no mental overhead around triggering test runs, and testing can pick up issues without any human intervention.

You could choose to have tests triggered on new successful builds. This is definitely useful for unit tests, and many teams do it. But what about long-running, end-to-end tests that perform complex regression testing in place of human testers?

If your team commits and builds very frequently, triggering end-to-end tests so often may not make sense. You may still want to trigger them daily or twice a day for full regression testing. You may also find it useful to feed in build and change data from the continuous integration system. In either case, this requires someone to integrate the test execution system with the continuous integration system.
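
As a minimal sketch of what that integration can look like, here is a Python entry point a nightly CI job might invoke. The environment variable names and the `e2e_suite` module are assumptions for illustration; substitute whatever build and change data your CI system actually exposes.

```python
# Hypothetical entry point a CI scheduler (e.g., a nightly job) could invoke.
# The environment variable names and the e2e_suite module are assumptions;
# substitute whatever your CI system actually provides.
import os
import subprocess
import sys

def main() -> int:
    build_id = os.environ.get("BUILD_ID", "unknown")    # assumed CI-provided
    branch = os.environ.get("BRANCH_NAME", "main")      # assumed CI-provided
    changelist = os.environ.get("CHANGE_ID", "")        # assumed CI-provided

    # Pass the build and change data through to the test harness so results
    # can be tied back to an exact build.
    result = subprocess.run(
        [
            sys.executable, "-m", "e2e_suite",          # hypothetical harness
            "--build-id", build_id,
            "--branch", branch,
            "--change-id", changelist,
        ],
        check=False,
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```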

Who should do this? Based on my experience, it is a fuzzy area at the nexus of feature development, test automation development, quality assurance, and build and release engineering. Perhaps it falls within DevOps, but most companies using DevOps practices still have separate build engineers, devs, and quality engineering teams. On many teams, even these roles aren't present.

I have worked on teams with a dedicated build engineer focused on handling everything related to builds and deployments. After creating a test harness, I could go find this useful person and provide details about the commands required to run the automated tests. I could even suggest what should trigger them and what build information should be passed in as arguments, and they would handle everything to do with the build scripts and the CI system.
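
Those "details about the commands" are easiest to hand over as a documented command-line interface. Below is a minimal sketch of what such an interface might look like in Python; all of the flag names are hypothetical, and the point is simply to spell out what the CI must pass in.

```python
# A minimal sketch of the command-line interface a test harness might expose
# so a build engineer can wire it into CI. All flag names are hypothetical;
# the goal is to document exactly what the CI must pass in.
import argparse

def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Run the end-to-end suite.")
    parser.add_argument("--build-id", required=True, help="CI build identifier")
    parser.add_argument("--branch", required=True, help="Branch under test")
    parser.add_argument("--flavor", choices=["debug", "release"], default="debug")
    parser.add_argument("--suite", default="regression", help="Named test suite")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(f"Running {args.suite} suite against build {args.build_id} "
          f"({args.flavor}) on branch {args.branch}")
    # ...hand off to the actual test runner here...
```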

If you have a build engineer like this at your company, you are very fortunate. Hand over the test execution system and get back to building robust, reliable, useful automated tests, handling device farms, reporting, and keeping up to date with new feature work.

I have also worked on teams where there are no build engineers and no one really has full responsibility for the continuous integration system. Feature developers from various teams plug in their parts for builds and deployments, but none of them is an expert on the CI system. This is a tougher situation to be in.

Having a dedicated build engineer is ideal due to the specialization of labor, but if that option is unavailable, this work is too important to skip. Ultimately, in this scenario, it is going to be up to the automated test developer to deal with the CI system.

This will take time away from writing and maintaining test scripts, test harness infrastructure, and test devices. However, if the work is done right, it should add considerable value over the long term by making the testing process more frequent and timely. Continuous testing can find issues and bugs in the software being tested, of course, but it also helps us understand the robustness of the test automation itself. The battle with flaky tests, especially end-to-end tests, cannot even begin without high-frequency test runs.

For the CI system work, I suggest having the most experienced developer on the team do it and leaving the others to continue writing test cases and maintaining the test harness code. CI integration is often one-time investigative work, but pipelines do change over time, so the integration itself requires some maintenance. If product teams across your organization use different CI systems, this becomes more work because of the need to learn multiple CI systems. This is why having a central build engineer is often so important: they can help consolidate these systems and keep things orderly.
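
One way to contain the multiple-CI-system problem is a thin adapter that normalizes build metadata, so the test harness itself stays CI-agnostic. Here is a sketch assuming Jenkins and GitHub Actions; the environment variables shown are their documented defaults, but verify them against your own setup.

```python
# A sketch of a thin adapter that normalizes build metadata across CI systems,
# so the test harness stays CI-agnostic. The environment variables are the
# documented defaults for Jenkins and GitHub Actions; verify them against
# your own CI configuration before relying on this.
import os
from dataclasses import dataclass

@dataclass
class BuildInfo:
    ci_name: str
    build_id: str
    branch: str

def detect_build_info() -> BuildInfo:
    if "JENKINS_URL" in os.environ:
        return BuildInfo(
            ci_name="jenkins",
            build_id=os.environ.get("BUILD_NUMBER", "unknown"),
            branch=os.environ.get("GIT_BRANCH", "unknown"),
        )
    if os.environ.get("GITHUB_ACTIONS") == "true":
        return BuildInfo(
            ci_name="github-actions",
            build_id=os.environ.get("GITHUB_RUN_ID", "unknown"),
            branch=os.environ.get("GITHUB_REF_NAME", "unknown"),
        )
    # Fallback for local runs outside any CI system.
    return BuildInfo(ci_name="local", build_id="dev", branch="unknown")
```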

Unit test failures should be set up to reject check-ins if possible; gated check-ins help here. This is something the build and CI system should take care of. However, end-to-end tests (such as automated UI tests) should not block check-ins. I have come to this conclusion after trying it both ways.
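
The gating step itself can be very small. Here is a minimal sketch assuming a pytest-based unit suite: the script simply propagates the test runner's exit code, which is what most CI gates key off.

```python
# A minimal sketch of a gating step: run the unit tests and propagate a
# nonzero exit code so the CI system rejects the check-in. Assumes a pytest
# suite in tests/unit; any runner that signals failure via its exit code
# works the same way.
import subprocess
import sys

if __name__ == "__main__":
    # pytest exits nonzero on any test failure; the CI gate keys off this.
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/unit"])
    sys.exit(result.returncode)
```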

End-to-end automated tests run through the front-end UI often take a long time when not enough parallelization is available, and this can hold up and frustrate dev teams. They are probably the most important tests to run because of the wide net they cast, but their results should be reviewed after the fact to identify issues and to confirm that the build already checked in is of high quality.

If massive parallelization of test devices is available, then by all means try running them before check-in. But in most cases this is impractical, partly because the bugs these tests find are often subtle or minor and can simply be fixed with another quick commit. What is important is that end-to-end automated test runs provide detailed information about the cause of a failure.
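
One way to provide that detail is a failure handler that captures diagnostics before letting the failure propagate. In this sketch, `capture_screenshot` and `collect_device_logs` are hypothetical hooks into your own harness; the structure is the point, not the specific calls.

```python
# A sketch of a failure handler that makes end-to-end results actionable:
# on any test exception, capture diagnostics before re-raising.
# capture_screenshot() and collect_device_logs() are hypothetical hooks
# into your own harness.
import traceback
from pathlib import Path

def run_with_diagnostics(test_fn, artifact_dir: Path) -> None:
    artifact_dir.mkdir(parents=True, exist_ok=True)
    try:
        test_fn()
    except Exception:
        # Save a full traceback alongside any UI/device artifacts so a failure
        # report explains why the test failed, not just that it did.
        (artifact_dir / "traceback.txt").write_text(traceback.format_exc())
        # capture_screenshot(artifact_dir / "failure.png")   # hypothetical
        # collect_device_logs(artifact_dir / "device.log")   # hypothetical
        raise
```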

If end-to-end tests run on a schedule on the CI system, they can still be made somewhat smart: the system can check whether any commits have landed since the last run that justify running the tests again.
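
Here is a sketch of that check, assuming the code under test lives in a Git repository: compare the current HEAD to the commit recorded after the previous run, and skip the suite if nothing has changed. The state-file location is an arbitrary choice for illustration.

```python
# A sketch of the "is a run even needed?" check, assuming a Git repository:
# compare the current HEAD to the commit recorded after the previous run.
import subprocess
from pathlib import Path

STATE_FILE = Path(".last_tested_commit")  # arbitrary location for illustration

def head_commit() -> str:
    return subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

def should_run_tests() -> bool:
    current = head_commit()
    previous = STATE_FILE.read_text().strip() if STATE_FILE.exists() else ""
    return current != previous

def record_run() -> None:
    STATE_FILE.write_text(head_commit())

if __name__ == "__main__":
    if should_run_tests():
        print("New commits found; running the end-to-end suite.")
        # ...trigger the suite here, then record the tested commit:
        record_run()
    else:
        print("No new commits since the last run; skipping.")
```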

The CI system should pass a range of variables to the automated test system, including the environment, the build flavor (debug, release, etc.), and the branch being tested. These should all be reported alongside the test results. Knowing exactly what is being tested is critical.
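
One way to make that reporting reliable is to stamp every result payload with the run context the CI passed in. This sketch uses illustrative field names and a JSON payload; adapt both to your own reporting format.

```python
# A sketch of stamping every result payload with the context the CI passed
# in, so "what exactly was tested" is never ambiguous. Field names are
# illustrative; adapt them to your reporting format.
import json
from dataclasses import dataclass, asdict

@dataclass
class RunContext:
    environment: str   # e.g., "staging"
    build_flavor: str  # e.g., "debug" or "release"
    branch: str        # e.g., "release/2.4"
    build_id: str

def report_payload(context: RunContext, results: dict) -> str:
    # Every result document carries the run context alongside the outcomes.
    return json.dumps({"context": asdict(context), "results": results}, indent=2)

if __name__ == "__main__":
    ctx = RunContext("staging", "release", "release/2.4", "1042")
    print(report_payload(ctx, {"passed": 118, "failed": 2, "skipped": 5}))
```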

If running tests frequently with CI integration reveals flaky tests (which is highly common for end-to-end tests), work should begin immediately to harden them. I propose a moratorium on adding new test cases until the flaky ones are made robust and reliable. Flaky tests are highly dangerous to the overall usefulness and effectiveness of the entire system.
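
High-frequency CI runs also give you the data to find flaky tests systematically. This sketch assumes a simple history of (test name, passed) outcomes from recent runs and flags any test that both passes and fails across them; the history format is an assumption for illustration.

```python
# A sketch of using high-frequency CI runs to quantify flakiness: given a
# history of pass/fail outcomes per test, flag tests that both pass and fail
# across recent runs. The history format is an assumption for illustration.
from collections import defaultdict

def find_flaky(history: list[tuple[str, bool]], min_runs: int = 5) -> list[str]:
    """history is a list of (test_name, passed) entries from recent CI runs."""
    outcomes: dict[str, list[bool]] = defaultdict(list)
    for name, passed in history:
        outcomes[name].append(passed)
    # A test that sometimes passes and sometimes fails, with no code change
    # in between, is flaky by definition.
    return [
        name for name, runs in outcomes.items()
        if len(runs) >= min_runs and 0 < sum(runs) < len(runs)
    ]

if __name__ == "__main__":
    runs = [("test_login", True), ("test_login", False), ("test_login", True),
            ("test_login", True), ("test_login", False),
            ("test_checkout", True)] + [("test_checkout", True)] * 4
    print(find_flaky(runs))  # ['test_login']
```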

Setting up continuous integration for automated tests is too important not to do. If automated test devs have to pick up this work because no one else is around to do it, they should find time to do it.
