How and When to Use Test Automation in Agile: An Interview with Melissa Tondi

[interview]
Summary:

In this interview, ProtoTest vice president Melissa Tondi talks about why teams need to use both manual and automated testing. She explains why the silver-bullet tool approach often fails companies, as well as how and when to use automation in an agile framework.

Josiah Renaudin: Today, I'm joined by Melissa Tondi, the vice president of ProtoTest. Melissa, thank you very much for joining us.

Melissa Tondi: Thank you. I'm very excited to speak with you.

Josiah Renaudin: Great. Can you tell us a bit about your experience in the industry?

Melissa Tondi: Yeah. In my fifteen-plus years in the industry, I've mainly focused on software testing, quality assurance, and software quality engineering, both as a practitioner and an individual contributor. In the last several years of my career I've focused mainly on management and leadership, concentrating on a couple of tenets: efficiency, productivity, and building a strong culture around software testing. I've had a great variety of interactions with multiple project teams, both within quality assurance and also all of the supporting teams that make up successful software development life cycle organizations.

Josiah Renaudin: Now your agile talk, “Test Automation in Agile: A Successful Implementation,” covers how automation is used in agile. Why do you think so many teams feel like they have to use either manual testing or automation, and not a combination of both?

Melissa Tondi: In my experience, the people with the domain knowledge generally came from the business side, as users. If you think back to when companies started adopting software testing and quality assurance as a mainstream team, and started investing in roles and budget to staff quality assurance, one of the first ways to staff those teams fifteen to twenty years ago was with somebody from the business. It made perfect sense to have somebody who was consuming the software be part of testing the software.

Although there have been a lot of innovations since then, specifically with roles like software engineer in test or developer in test and other highly technical skills, a lot of the people who were very good at software testing really focused mainly on how a user would use the software. Therefore the need for them to increase their technical skills was not really pressing within the industry or the teams they were supporting.

On the flip side, as we began adopting more agile ways of delivering software, such as Scrum, Kanban, Lean, or XP, the increased delivery and release cadence forced us to start looking at automated tests much more closely than we had in the previous several years. But because the people who were the best testers also had so much of the domain knowledge, I think a lot of companies felt like it was either-or: we'd either have to hire automation engineers only, or we'd have to continue with the manual testing team that knew the product so well.

I think that stemmed from a bit of a lack of education about what software testing roles should be within the software development life cycle and the agile team. Instead of taking an either-or approach, I believe there's a way for teams to assemble with an embedded automation role or task, while also emphasizing the functional and exploratory testing that a manual tester does very well.

Josiah Renaudin: You mentioned a silver-bullet tool that some teams are forced to use. Can you elaborate a bit on this type of tool and why it isn't often right for the job?

Melissa Tondi: That really came from some of the large commercial tools out there. With the advent of open source, and specifically within software testing in the last few years with Selenium and other open-source tools like it, that silver-bullet thinking has really decreased. But it persists in a lot of places, especially in companies that are regulated and are told their tools need to be commercial and proprietary. Generally speaking, those tools are expensive, and the people who sign the purchase order or SOW to buy a commercial tool are almost never the people who will actually be using it day to day.

Once there's a high dollar cost associated with a commercial tool, the tendency is to use that tool as much as possible. The metrics on the actual use of that tool get watched very closely, because of course you're investing a large amount of money and you want to see a quick return on investment, or ROI, on that purchase.

What I've found in my experience is that once a commercial tool takes hold on a team and is asked to be used for all software quality or quality assurance testing, people tend, for lack of a better adage, to put a square peg in a round hole. Although the commercial tools obviously have a ton of capability, the tendency has been that because there's a high dollar cost associated with them, the tool is forced to do things it may not be able to do efficiently.

The silver-bullet phrase really comes from that combination: a high dollar cost, somebody far from the team making the final decision to purchase the tool, and, all the way down the line, practitioners being mandated to produce metrics from that one tool when they would otherwise be more creative in how they implement and test using a variety of tools, not just that one commercial tool.

Josiah Renaudin: Absolutely. Now how do testers know what to test manually and what to automate in an agile framework? What's the easiest way to make that distinction?

Melissa Tondi: In my talk, I'll cover the best way I've seen to both introduce automation and ensure its success right off the bat. I start out in a couple of ways. I think all of us in the software testing industry know that not everything should or can be automated, although sometimes we do hear targets of 100 percent test automation, which quite honestly can never be achieved.

The best distinction to make here starts with the human-based testing activities. By that I mean the testing activities that should and could only be done by a human interacting with the software. Those tend to be the exploratory types of testing that should never, and can't ever, be automated (except after the fact, of course), and the more outlier types of tests.

The best way, the way that I prioritize, specifically in an agile world, is to focus on the tests that get executed most often. In our world, especially in agile and for teams practicing continuous integration or continuous delivery, those tend to be the smoke tests, sanity tests, or build-verification tests, which are all essentially the same thing. If you look at the suite of tests that a tester executes within a release or iteration, they're always going to be executing those smoke tests every day, if not multiple times a day.

The first part I talk about is agreeing on what a smoke test suite looks like within that team and then automating it, because at the end of the day that will be the suite that gets executed the most. That is one of the first places where automation pays back its investment.
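To make that concrete, here is a minimal sketch, not from the interview itself, of what an automated smoke suite might look like using the open-source Selenium WebDriver mentioned earlier, driven by pytest so it can run on every CI build. The URL, element IDs, and page title are hypothetical placeholders.

```python
# Minimal smoke-test sketch with pytest and Selenium WebDriver.
# The URL, element IDs, and expected title are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://staging.example.com"  # hypothetical test environment


@pytest.fixture
def driver():
    # Headless Chrome suits CI servers that have no display attached.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_home_page_loads(driver):
    # Build-verification check: the application responds at all.
    driver.get(BASE_URL)
    assert "Example App" in driver.title


def test_login_form_present(driver):
    # Smoke check: the critical login path is reachable.
    driver.get(BASE_URL + "/login")
    assert driver.find_element(By.ID, "username").is_displayed()
    assert driver.find_element(By.ID, "password").is_displayed()
```

Because a suite like this runs in minutes, a CI server can execute it on every check-in, which is exactly the multiple-times-a-day cadence described above.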

The second part: once the smoke suite is automated, running multiple times a day, and, in organizations practicing CI, tied to build check-ins, the next priority can go one of two ways. For agile teams where acceptance criteria are centralized within user stories and the teams are highly effective and highly mature in their agile development, I suggest automating the acceptance criteria within the user stories next, because if teams are practicing agile philosophies, the MVP, or minimum viable product, is what a team will grade themselves on at the eventual release.

Because the acceptance criteria being completed and accepted lets the team know they've finished a certain amount of work, I recommend that as priority number two; at the end of the day, those criteria become the achievable, demonstrable, working software the team is expected to deliver. Then those tests eventually make it into subsequent releases' regression test beds.
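As an illustration of this second priority (again, a sketch of ours rather than an example from the talk), an acceptance criterion from a hypothetical user story can often be captured directly as an automated test. The endpoint, payload, and response shape below are invented for the example.

```python
# Sketch of an acceptance-criteria test for a hypothetical user story:
# "As a registered user, I can request a password reset by email."
# The endpoint, payload, and response fields are illustrative placeholders.
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical test environment


def test_password_reset_request_is_accepted():
    resp = requests.post(
        f"{BASE_URL}/password-reset",
        json={"email": "user@example.com"},
        timeout=10,
    )
    # The story's acceptance criterion: the request is acknowledged
    # and a reset email is queued for delivery.
    assert resp.status_code == 202
    assert resp.json()["status"] == "email_sent"
```

Once a story is accepted, a test like this stops being story-specific and rolls into the regression bed for subsequent releases, which is the progression described above.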

Josiah Renaudin: Can you explain the difference between open-source and commercial tools, and when it's most appropriate to use one over the other?

Melissa Tondi: I spoke a little bit about this earlier. In a lot of highly regulated industries, such as health care and finance, where third-party independent validation is needed, the introduction of testing tools often has to go through either independent or internal certification within the company. In those cases commercial tools are the only option for automation, because they've gone through their own independent verification within the company, or the regulating body has approved their use within the regulated confines of what that company is doing.

In that case, commercial tools are certainly the only option if automation is a venture in which a company is going to invest. In other cases, where budget becomes an issue for licensing and tools, some companies will choose the cheapest option, for lack of a better term, and open source will certainly get you there. I believe that, if and when possible, a combination of two or more automation tools that can be used creatively by the individual is a much better solution. In successful enterprise-wide implementations of agile and automation, we've had great results using both commercial and open-source tools as our ultimate automation solution.

Josiah Renaudin: Is it difficult to transition from manual testing to automated testing if an individual or a company has been testing manually throughout most of their career? Is it a struggle to switch your mindset and go directly into automation?

Melissa Tondi: I think the quick answer is yes, mostly because there isn't a lot of time given, especially within working business hours, for an individual to increase their skill set. For all intents and purposes, to be a good automation engineer you really need to understand code to the point where you can write code. We use the saying, "In order to test code, you should be writing code." That's really the best and most comprehensive way to understand how to test code.

It's certainly not impossible, and in some cases there's a true investment from a company in professional development and career enhancement for an individual transitioning from traditional manual tester to automation engineer. But the reality is that it takes at least six to nine months of intense study to get somebody who may not have any coding experience to the point where they can be proficient with an automation tool.

Unfortunately, I think senior management a) doesn't have the time, money, or project budget to invest in getting somebody trained up and ready to go, and b) expects that an out-of-the-box record-and-playback tool is all somebody needs to be proficient enough to be called an automation engineer.

I think the difficulty is that the time constraints and the expectations put on a traditional manual tester to become an automation engineer almost never work out from a timing and investment standpoint. That's really where the difficulty lies: not that they can't do it or don't have the skill sets, but that there isn't time during normal business hours. It then requires that manual tester to put in much more time outside of work, and really disciplined time, to learn the craft.

Josiah Renaudin: More than anything, what's the message you want to leave with your audience in Orlando?

Melissa Tondi: I think the main message, for those people who have subscribed to the agile methodology, which is ideally everyone attending in Orlando, is to really reset expectations about what deliverables and outcomes realistically should and could be within agile teams, especially when the makeup of the testing team is more manual than technical or automation engineers. I also want to talk about ways to focus on metrics that make sense and are valuable for the agile team itself and for the individual practitioner or tester who will be accepting the tasks in the user stories.

It's really about setting the tone, talking about realistic expectations, and talking about ways organizations can be successful, and also realistic, in implementing automation within their agile teams.

Josiah Renaudin: All right. Fantastic. Thank you very much for taking the time to speak with me today, Melissa. I'm looking forward to hearing more about what you have to say about automation and agile.

Melissa Tondi: Thank you very much.

About the author

In the software test/QA/quality engineering field for more than fifteen years, Melissa Tondi focuses on organizing software testing teams around three major tenets: efficiency, innovation, and culture. Currently, Melissa is a vice president for ProtoTest and the founder of the Denver Mobile and Automation Quality Engineering community. Her previous roles include director of software quality engineering for a 150+ person organization at the world's largest education company, QA consultant for the health care, finance, and SaaS industries, and president of Colorado's Software Quality Association of Denver (SQuAD).
