The Role of Artificial Intelligence in Testing: An Interview with Jason Arbon

Summary:

In this interview, Appdiff’s Jason Arbon explains what the rise of artificial intelligence means for the world of testing. He covers how manual testers can work with AI, the role of automation, and the type of companies that testers can now start.

Josiah Renaudin: Welcome back to another TechWell interview. I’m joined by Jason Arbon, the CEO of Appdiff and a speaker at this year’s STARWEST. Jason, thanks for joining us. First, could you tell us a bit about where you worked before you started Appdiff?

Jason Arbon: Hi, Josiah, nice to chat with you again. After college, I started my career at Microsoft doing testing and automation for products like Windows and Bing. Later, while I was at Google, I worked on test automation for the Chrome browser and ran a team working on personalized web search. Finally, at Applause (formerly known as uTest), I directed product and engineering, focusing on mobile and software infrastructure.

Appdiff is the world’s first AI-powered mobile app testing solution, and we are building smart little bots that are already testing more than ten thousand apps today, and will soon test every app on the planet.

Josiah Renaudin: Something you have a lot of experience with is artificial intelligence. To kick things off, how does AI work, and what’s its current role in testing?

Jason Arbon: Artificial intelligence (AI) is a bit of a mystery and can be intimidating at first, but part of that is because AI is such a broad term. In our context, we’re referring to the ability for a machine to understand an environment, perform “intelligent” actions, and learn how to improve itself automatically.

One of the first AI problems people face is how to find patterns in data, and this has led to lots of classification algorithms, such as neural networks and support vector machines. If you’ve collected lots of examples of how a computer should behave given some inputs, you can “train” bots on this data by showing them the input and output pairs over and over again. After training, the bots are able to do the same task—even on inputs they have never seen before. It is like teaching a child by example.
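
To make that concrete, here is a minimal sketch of the train-on-pairs idea in Python using scikit-learn. The toy features and labels are invented for illustration; this is not Appdiff’s actual stack.

```python
# A tiny neural network trained on input/output pairs.
# Toy task: classify a form-field input as valid (1) or invalid (0)
# from two invented features: [length, contains_digits].
from sklearn.neural_network import MLPClassifier

X = [[5, 0], [12, 1], [0, 0], [64, 1], [3, 0], [1, 1]]  # inputs
y = [1, 1, 0, 1, 1, 0]                                   # expected outputs

# "Show it the input and output pairs over and over again":
# fit() iterates over the data until the network converges.
bot = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
bot.fit(X, y)

# After training, the bot can handle inputs it has never seen.
print(bot.predict([[7, 0]]))  # likely [1]: a short, digit-free value
```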

In Appdiff’s case, we’ve used AI to build and train software bots, which know how to tap, type, and swipe through an app—just like a real user. My first industry exposure to AI and neural networks was while working on Bing, and later at Google. Bing was largely powered by AI back then, and my team had to ensure the quality measurements for search engine results were correct. I’ve been straddling the intersection of AI and testing my whole career.
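
As a rough illustration, a bot’s inner loop might look something like this. The Device interface below is entirely hypothetical; a real bot would sit on top of a driver such as Appium, and a trained model, rather than random choice, would pick the next action.

```python
# Hypothetical sketch of a bot exploring an app like a user:
# observe the screen, pick an element and an action, perform it.
import random

def explore(device, steps=100):
    for _ in range(steps):
        elements = device.visible_elements()      # observe the environment
        if not elements:
            device.swipe("up")                    # nothing actionable: scroll
            continue
        target = random.choice(elements)          # a trained model would
        action = random.choice(["tap", "type", "swipe"])  # choose here
        if action == "tap":
            device.tap(target)
        elif action == "type" and target.accepts_text:
            device.type(target, "test@example.com")
        else:
            device.swipe(random.choice(["up", "down"]))
```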

Testing is a ripe field for applying AI because testing is fundamentally about inputs and expected outputs—the same things needed to train bots. Testing combines lots of human- and machine-generated data. Folks in testing often don't have much exposure to AI, but that will change quickly, just as the rest of the world is waking up to the power of AI.

Josiah Renaudin: We’ve heard that “testing is dead” from a few prominent people in the industry, and I feel like a lot of that has to do with automation taking center stage. As AI evolves, will it ever replace manual testing, or will there always be a place for real people actually testing the software?

Jason Arbon: Testers quietly ask that question a lot. The real value in human-powered testing is the creativity required to either identify problems that are subjective or discover bugs that some of the smartest people around (software engineers) didn’t think of or weren’t able to predict at the time of implementation.

In my experience, more than 80 percent of testing is repetitive. You’re often just checking that things work the same way they did yesterday. This work is solvable by AI and automation. The other 20 percent of a tester’s time today—the creative, questioning, reasoning part—is what people should really be doing, and that rarely happens on today’s fast-moving, agile app teams.

Working alongside AI, testers in the near future will be able to focus on the most interesting and valued aspects of software testing.

Josiah Renaudin: OK, so if AI isn’t going to fully take over, how can human testers work with it to better test modern software?

Jason Arbon: Josiah, you get it! You have apparently talked with enough folks in testing to know that it isn’t a completely solvable problem. However, human testers working alongside smart machines can certainly test modern software in a better way than is done today. Here are four implications that AI has for software testers:

  1. Leave the exhaustive testing to AI: tapping every button, entering obviously valid and invalid data into text fields, and so on.
  2. Focus on the qualitative aspects of software testing that are specific to your app and your customers.
  3. Focus on creative and business-specific test inputs and validations. Be more creative and think of email address values that a machine with access to thousands of possible email test inputs wouldn’t think to try. Verify that cultural- or domain-specific expectations are met. Think of test cases that will break the machine processing for your specific app (e.g., negative prices, disconnecting the network at the worst possible time, or simulating possible errors).
  4. Record these human decisions in a way that later helps to train the bots. Schematized records of inputs and outputs are better than English text descriptions in paragraph form (see the sketch after this list).
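
As a hypothetical example of point 4, a schematized record could look like the Python dictionary below. Every field name and value here is invented for illustration.

```python
# One schematized test record: structured fields a bot can later be
# trained on, instead of a paragraph of English prose.
import json

record = {
    "app": "com.example.shop",
    "screen": "checkout",
    "action": {"type": "type", "target": "email_field",
               "value": "first.last+tag@example.co.uk"},
    "expected": {"result": "accepted"},
    "observed": {"result": "rejected", "error": "invalid email"},
    "verdict": "bug",  # the human's judgment, now machine-readable
}
print(json.dumps(record, indent=2))
```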

The key is to let the machines do what they’re good at and let the humans leverage their creativity and judgment.

Josiah Renaudin: As we mentioned, you’re the CEO of Appdiff, a startup that is currently launching its new product. When you first started the company, how did you decide what type of product users were looking for? What made you so sure something like Appdiff would find an audience?

Jason Arbon: Now you sound like a venture capitalist! I realized that generalized AI-powered bots could address some of the major pain points in software testing today.

Problem 1—Performance: Improved app performance is the number one priority of app teams today. They can’t improve what they can’t measure, and the best solutions today measure performance in noisy production environments, depend on SDKs, and require teams to look at raw data and charts to figure out what is slow. Worse, performance regressions are often caught weeks after the offending code change.

Solution: Automated test bots could test the performance of every action in an app, many times, and catch regressions within minutes of each new build. Rather than charts, the bots could take easy-to-understand pictures of the slowest parts of your app and show them to the app team.
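
A minimal sketch of how such a regression check could work, assuming the bots log a duration for every action in every build. The data, names, and threshold are illustrative.

```python
# Flag actions whose median duration grew past threshold x the baseline.
from statistics import median

def regressions(baseline, current, threshold=1.5):
    flagged = []
    for action, times in current.items():
        base = median(baseline.get(action, times))
        if median(times) > threshold * base:
            flagged.append((action, base, median(times)))
    return flagged

baseline = {"open_cart": [0.21, 0.19, 0.22], "login": [0.40, 0.38]}
current  = {"open_cart": [0.55, 0.61, 0.58], "login": [0.41, 0.39]}
print(regressions(baseline, current))
# [('open_cart', 0.21, 0.58)] -- open_cart got nearly 3x slower
```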

Problem 2—Release Velocity: Every app team wants to move faster, driven by competition and the shift to agile, lean, and continuous-build environments. Manual testing isn’t fast enough anymore, and today’s test automation is expensive, slow, and often breaks when you need it most.

Solution: Bots can generate 100 times the test coverage of most test teams. Even better, with a little bit of AI mixed in, the bots could automatically discover new features and test new behaviors. If a change in the app is too complex for a bot to tell whether it is a bug, it simply sends a before-and-after picture to a human to make the bug-or-feature decision.
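
One plausible way to implement that before-and-after check is a simple pixel diff, sketched here with Pillow. The paths and tolerance are invented, both screenshots are assumed to be the same resolution, and a production bot would likely use something more robust than raw pixel comparison.

```python
# If two screenshots of the same screen differ beyond a small
# tolerance, escalate to a human for the bug-or-feature decision.
from PIL import Image, ImageChops

def screens_differ(before_path, after_path, tolerance=0.01):
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    diff = ImageChops.difference(before, after)   # per-pixel delta
    changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
    return changed / (diff.width * diff.height) > tolerance

if screens_differ("build_41/checkout.png", "build_42/checkout.png"):
    print("Send both screenshots to a human: bug or feature?")
```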

Problem 3—No Money, Talent, or Time: Many teams can’t afford legions of test automation engineers, or the infrastructure they need. Most teams can’t wait six to eighteen months for an automated test suite to be coded up and running. Most interestingly, demand for software test development engineers far exceeds the supply of engineers available to fill those roles.

Solution: AI-powered bots could start basic testing of an app right away. Machines are far less expensive than hiring a team to write and maintain basic test code. Machines can also provide test coverage and execute in parallel, enabling all this work to be done in just minutes.
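
A minimal sketch of that parallelism using Python’s standard thread pool; run_bot is a hypothetical entry point for a single device session.

```python
# Fan one bot session out across many devices at the same time.
from concurrent.futures import ThreadPoolExecutor

def run_bot(device_id):
    # ...drive one emulator or device through the app...
    return f"device {device_id}: passed"

device_ids = range(32)  # 32 sessions at once instead of one at a time
with ThreadPoolExecutor(max_workers=32) as pool:
    for result in pool.map(run_bot, device_ids):
        print(result)
```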

Everyone has these problems. Current solutions are painful and very expensive, but with advances in cloud compute, storage, and AI, these problems are finally tractable.

Josiah Renaudin: What were your first few steps when starting Appdiff? I feel like starting a company is one of those things that sounds exciting at the start, but has the potential to be daunting as you move beyond the planning phase.

Jason Arbon: You are right, Josiah. There are lots of myths and hype around starting a company. It is always a struggle. I’ve been lucky enough to have a wonderful wife who understands the software industry and accepts the time and focus required to grow a successful company. Without her, Appdiff wouldn’t exist, and we wouldn’t be chatting today.

I was also lucky to have worked at Microsoft and Google for a few years, so I could stash away enough money to bootstrap. Then, for nine months, I worked out of a Starbucks to see if this crazy idea of building test bots could actually work. I was too embarrassed to make such a bold claim to many folks before the bots could actually run.

Once the early baby bots were running, I demoed to a few friends, colleagues, app teams, and eventually investors. Many of the colleagues who saw the bots wanted to join the mission after a quick demo—even before there was a company. People from top mobile companies and app teams wanted to try the product and also invest personally. That momentum led to building an amazing team and working with some great investors and customers.

Josiah Renaudin: Do you think more testers decide against starting their own businesses due to the financial risk that comes along with it?

Jason Arbon: Definitely, especially testers! Testers often look for the flaws in a system and are risk averse by nature—that is much of the job description. There are not just financial risks, there are product, market, even personal risks to starting a company. I think this is why there are so few testing vendors built by testers and for testers.

Josiah Renaudin: Finally, what do you think makes Appdiff not only different from other solutions on the market, but invaluable to testers today?

Jason Arbon: Our team is the difference. The team is super experienced in testing. We’ve tested using tens of thousands of machines in parallel (the Chrome browser), tested software where the correct output might not actually be knowable (web search), and tested hundreds of mobile apps using large crowds of humans in the wild (crowd-sourced testing). Combine that breadth of knowledge with the boldness of Google, where it’s normal to wonder how we can do things bigger and better, and that is what makes us different.

For testers, Appdiff isn’t another testing tool or service. Appdiff is a chance to be a hero on their team, to focus on the most interesting aspects of their career, and to be part of the next technological wave in software.

Jason Arbon is the CEO of Appdiff, which is redefining how enterprises develop, test, and ship mobile apps with zero code and zero setup required. He was formerly the director of engineering and product at Applause.com/uTest.com, where he led product strategy to deliver crowdsourced testing via more than 250,000 community members and created the app store data analytics service. Jason previously held engineering leadership roles at Google and Microsoft, and coauthored How Google Tests Software and App Quality: Secrets for Agile App Teams.

User Comments

Keith Stobie:

More testers need to be aware of this type of technology. Jason's confluence of "tested using tens of thousands of machines in parallel (the Chrome browser), tested software where the correct output might not actually be knowable (web search), and tested hundreds of mobile apps using large crowds of humans in the wild (crowd-sourced testing)" seems to bring amazing insights.

February 2, 2018 - 1:13am
