Automation testing is undeniably one of the key strategies for any QA manager, and for good reason: Automation promises faster regressions, higher productivity, better quality, and reduced costs. However, many QA managers fail to achieve those results. They face late deliveries, pay for expensive tools, and deal with a lot of frustration. What are the causes? In this article, I will review five of the most common practices that cause automation projects to fail.
1. We do not need an automation plan.
“Let’s start scripting ASAP,” says the QA manager.
Imagine this: You and your coworker just arrived in the US to perform a knowledge transfer. At the airport, you suggest looking for a road map, but your coworker says, “That’s not necessary. I’ve been here before and can find our way around.” You decide to put your concerns aside, get into the car, and start driving. After two hours on the road, you’re hungry, angry, and your gas tank is running on empty. You stop at a gas station to buy a map and ask a clerk for directions.
Sound familiar? I can assure you the same happens when you don’t have an automation plan—and you are not going to find one in a convenience store. An automation plan gives you a clear view of your scope, the resources you need, and the roadmap you have to follow to achieve your objectives; it also provides a way to measure how far you are from your goal, keeps the team focused, and identifies risks.
2. Let’s use the best, most powerful automation tool in the market.
“This guarantees our success,” says the QA manager.
Now, think of your dream car. For example, a Ferrari Testarossa—a high-quality piece of engineering with almost 300 horsepower and luxurious interiors that costs half a million dollars. Now, picture that fine machine parked in front of an elementary school, its 300 horsepower idle. The mom in the driver’s seat is waiting for her children, trying to put on some makeup, and singing to the baby in the back. The baby is spilling milk and wiping his sticky fingers over the luxurious seats. The sweaty boys outside in their soccer gear are scratching the car and damaging the paint job.
I do not have anything against SUVs; I just think they are called “mom-mobiles” for a good reason. They are perfect for that kind of use. The same applies to automation tools. The most complete, powerful tool is not necessarily the best tool for your project.
To select a tool, you will need to consider factors such as budget, the application’s technology, your engineers’ skills, the training curve, vendor support, and so on. You would never buy a car without a test drive, would you? The same applies to your automation tool. Once you have made your selection, perform a proof of concept (POC) to verify that the tool satisfies your expectations. If your client or QA organization already has a tool, do a POC anyway, as it will give you information about potential issues with the application technology and help you fine-tune your estimations.
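One simple way to make this comparison explicit is a weighted scoring matrix over the selection factors listed above. The sketch below is purely illustrative: the factor names, weights, tool names, and scores are all made-up assumptions, not a recommendation of any real product.

```python
# Hypothetical weighted scoring matrix for comparing automation tools.
# Factors mirror the article's list: budget, application technology,
# engineers' skills, training curve, and vendor support.

FACTORS = {  # factor -> weight (relative importance to this project)
    "budget_fit": 0.25,
    "app_technology_support": 0.30,
    "team_skills_match": 0.20,
    "training_curve": 0.10,
    "vendor_support": 0.15,
}

def score_tool(scores):
    """Weighted total; `scores` maps each factor to a 0-10 rating."""
    return sum(FACTORS[f] * scores[f] for f in FACTORS)

# Invented example tools and ratings, for illustration only.
tools = {
    "BigVendorSuite": {"budget_fit": 3, "app_technology_support": 9,
                       "team_skills_match": 5, "training_curve": 4,
                       "vendor_support": 9},
    "OpenSourceTool": {"budget_fit": 10, "app_technology_support": 7,
                       "team_skills_match": 8, "training_curve": 7,
                       "vendor_support": 5},
}

ranking = sorted(tools, key=lambda t: score_tool(tools[t]), reverse=True)
print(ranking)
```

Note that with these (invented) weights, the cheaper tool can outrank the “most powerful” one, which is exactly the article’s point. The scores feeding the matrix should come out of the POC, not out of a brochure.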
3. We do not need standards or a framework.
“That requires time that we do not have. We need to start scripting right away, and I’ve got experienced people who know what they’re doing,” says the QA manager.
The recently released movie The Avengers teaches us that having a team of superheroes in a room doesn’t guarantee anything. You may have five automation superheroes, but in the best-case scenario you are going to end up with five perfect, innovative superscripts that all do the same thing. In other words, you will have spent effort and time that probably did not get you closer to your goal.
On the other hand, let’s say that you have a smart, energetic, and proactive team with no experience in automation. How will they know the best way to do it? And how can they guarantee that their scripts have the same level of quality?
One of the biggest automation problems is not figuring out how to finish the scripts but figuring out how to maintain them. To achieve an efficient maintenance effort, your scripts should be standardized, well documented, reusable, and easy to modify. That is what a framework—even a simple one—can give you: standards that tell your junior team members what the expected result looks like, and a mechanism for focusing your senior team members on the highly complex or reusable parts of the automation scripts.
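To make this concrete, here is a minimal sketch of one common framework pattern: a keyword-driven layer in which senior engineers maintain a shared action library and junior engineers write scripts as plain data. All the names (`ActionLibrary`, the keywords, the URL) are hypothetical, and a real framework would drive an actual tool and write proper reports; the point is only the separation of responsibilities.

```python
# Minimal keyword-driven framework sketch (hypothetical names throughout).
# Seniors maintain the reusable actions; juniors compose them into scripts,
# so every script shares the same structure, documentation, and logging.

class ActionLibrary:
    """Reusable, documented building blocks shared by all scripts."""

    def __init__(self):
        self.log = []  # a real framework would write to a test report

    def open_page(self, url):
        self.log.append(f"OPEN {url}")

    def fill_field(self, name, value):
        self.log.append(f"FILL {name}={value}")

    def click(self, name):
        self.log.append(f"CLICK {name}")


def run_script(steps, actions):
    """Execute a script expressed as (keyword, *args) tuples."""
    for keyword, *args in steps:
        getattr(actions, keyword)(*args)
    return actions.log


# A junior engineer's script is just data: uniform, reviewable, maintainable.
login_script = [
    ("open_page", "https://example.test/login"),
    ("fill_field", "user", "qa_user"),
    ("fill_field", "password", "secret"),
    ("click", "submit"),
]

print(run_script(login_script, ActionLibrary()))
```

Because every script goes through the same library, a change in the application usually means fixing one action rather than five divergent superscripts.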
4. Automation and manual test engineers are independent teams.
“They have different skills and different objectives, so let’s keep them apart,” says the QA manager.
Let me tell you, from my perspective, how an offensive team is built in American football. It has two types of players: the lean, muscular, agile, and fast players who can cover a great number of yards by air or ground, and the aggressive, motivated, larger players who move only a few yards each play and protect the first group. What happens when they do not work as a team with a common strategy? The opposing team painfully crushes them.
Automation testing is not only able to cover a great deal of functionality faster and with the best quality, but it is also a tool that helps you verify that what has already been tested has not changed. Manual effort, on the other hand, tests what’s new and what has changed, what is too complex to implement using automation, or what requires human skills and experience. They are complementary tools, and you are going to get the best ROI when they work together under a global strategy. If you are making the automation team lag behind the manual team, using the automation team only to execute and debug, trying to automate everything new and old, or trying to make your automation team experts in the application domain, then you do not have a global strategy. You probably also don’t have an effective and productive team.
5. We are going to automate 100 percent of our regressions.
“We have the team and the tool, so let’s get the best out of them,” says the QA manager.
This is the most common problem in automation testing—trying to automate what is not automatable. Have you seen those lengthy cycling races, such as the Tour de France? They are planned in stages, and the overall leader after each stage wears the yellow jersey into the next one. However, leading a stage does not guarantee winning the whole race; in fact, the 2011 overall winner, Cadel Evans, won only one stage (stage 4).
Automation is also a long-term race in which you need to analyze how to use your resources to achieve your final goals and, often, in which you may need to lose a battle to win a war. Not all test cases can be automated, nor are all candidates worth automating. To automate a test case, it needs to be tool-friendly from the technology perspective, have the correct data, be free of human interaction, and be stable. A test case that is going to be automated should be expected to run at least three times, and it should be automatable within your timeframe parameters (i.e., if it takes too long to automate, it may not be a candidate). Usually, in the best of scenarios, only 75-85 percent of regressions can be automated.
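The criteria above can be turned into an explicit checklist applied to the regression backlog before anyone writes a script. The sketch below is a hypothetical illustration—the field names, the 40-hour effort ceiling, and the example cases are all assumptions—but it shows how the criteria filter candidates mechanically.

```python
# Hypothetical automation-candidate checklist based on the criteria above:
# tool-friendly technology, correct data, no human interaction, stability,
# at least three expected executions, and a bounded automation effort.

def is_automation_candidate(case):
    return (
        case.get("tool_friendly", False)        # technology the tool supports
        and case.get("has_test_data", False)    # correct data is available
        and not case.get("needs_human", True)   # free of human interaction
        and case.get("stable", False)           # feature is no longer churning
        and case.get("expected_runs", 0) >= 3   # will be executed 3+ times
        and case.get("automation_hours", 0) <= case.get("max_hours", 40)
    )

# Invented backlog entries, for illustration only.
backlog = [
    {"name": "login regression", "tool_friendly": True, "has_test_data": True,
     "needs_human": False, "stable": True, "expected_runs": 12,
     "automation_hours": 8, "max_hours": 40},
    {"name": "usability review", "tool_friendly": True, "has_test_data": True,
     "needs_human": True, "stable": True, "expected_runs": 12,
     "automation_hours": 8, "max_hours": 40},
]

candidates = [c["name"] for c in backlog if is_automation_candidate(c)]
print(candidates)
```

The usability review fails the human-interaction criterion and stays manual, which is precisely why the 100 percent target is unrealistic.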
Are those five the only causes of failure in automation projects? Absolutely not. I could give you ten more examples, and you have likely experienced even more than that. But I would like to offer one more piece of advice: If you are going to start an automation project, put your shoulders back, sit up straight, and prepare for the worst. Remember Murphy’s Law: Anything that can go wrong will go wrong.