In this TechWell interview, Facebook's Simon Stewart digs into his company's surprising approach to testing: it doesn't have a testing department. He also talks about the challenges and pressure that come with producing so many different mobile builds per year.
Josiah Renaudin: You mention in your talk that Facebook has no testing department, and that this could be a future trend for other companies. Should this statement scare testers, or do you believe they still have a place in software? Do they need to adapt and gain an extra set of skills to remain marketable?
Simon Stewart: Should it scare them? No way! It’s an amazing opportunity. The trick will be to articulate the unique value or perspective that you bring to a company, and find a way of providing that within the framework and DNA of the particular organization you’re part of.
I see the trend of ever-quicker releases continuing (you can see it in the practice of continuous deployment that some companies follow), and that means that testers will need to be creative about how they do their work. It also means that they need to be able to repeat their work at scale, and I think that leads us towards a world where all testers need to code (unless they’re happy repeating the same thing, over and over and over and over again at super-human speed with no mistakes).
In order to enable fast release cycles, feedback loops need to be as tight as possible. There’s no space in this for QA to be kept at arm’s length or brought in only at the end of the process (which is madness: “quality” isn’t something you can add as an afterthought). As such, I imagine that an effective “next-generation” QA will need to be technically competent and to have very good “soft” skills --- after all, no one likes to hear that their work is faulty. They may find that collaborating closely with a developer will allow them to move as quickly as they will need to. Or they can learn to code and avoid needing someone to translate thought into something a machine can understand.
One refrain I hear occasionally is that knowing how to program will somehow “damage” a tester, because they understand how the software and machines work. My view is the exact opposite: Knowing how something works gives better insight into potential flaws. Essentially, I think that understanding how to code widens the set of tools available to a tester, without diminishing what they can do in the slightest.
In my experience, one problem is that the “follow a script” end of the QA market has negatively colored people’s views on the role, and that means that for far too long being a tester has been seen as a poor relation to being a developer, and that’s just not true. Hopefully the rise of automation will mean that this misperception will evaporate, but it will come at a cost.
Josiah Renaudin: What kind of pressures do you feel when releasing one of your new monthly builds, knowing the magnitude of your audience?
Simon Stewart: I think that there’s a distinction to be drawn between fear and respect. Releasing a new version of the site or the app is rapidly becoming a routine process, and it’s hard to fear something that’s routine. On the other hand, we deeply respect the people who have chosen to spend their time on Facebook. One of the mantras of our release engineers is that no release should ever leave a person worse off than they were before.
That means that the pressure of maintaining quality, functionality, speed, and all the other things that people care about is constant, and I think it’s something that everyone’s aware of. The teams I’ve worked with have been very keen to adopt tools and techniques to make sure what they write is as solid as possible, and I’ve seen teams and individuals go to great lengths to make sure that people using Facebook have a smooth experience.
One of the aspects of working at Facebook that I really enjoy is that despite this pressure to deliver an amazing product, when something goes wrong the focus is always on identifying what happened and identifying the causes rather than who did the wrong thing, and the actions we take to rectify systemic issues tend to be lightweight, trusting that our staff are all people doing the best that they can. I’ve worked in other environments where the opposite happens: where blame is placed on individuals or teams, and where rectifying problems leads to heavyweight processes that ultimately lead to a moribund software development methodology.
When someone releases a new feature on the site, they are responsible for ensuring that it works when that feature goes live to the world. I think that this aspect, of owning a feature from inception to release, gives each of our engineers a good insight into what’s needed at each stage of the process, and it helps highlight the importance of automated testing.
Josiah Renaudin: Have you been a part of what you’d consider a botched release at Facebook? If so, can you describe that experience?
Simon Stewart: On campus, there are posters on many of the walls. Your question reminds me of the one that reads, “What would you do if you weren’t afraid?”
Have I been part of a botched release? I’m an engineer in the internal tools team: My releases only impact other software engineers. Having said that, yes, of course things have gone wrong. As part of my work on Buck (our build tool), I’ve accidentally caused build times to bloat, and on one memorable occasion (fortunately well before release!) increased the size of every Android app we build by several tens of megabytes.
As I mentioned in my previous answer, the experience was actually fairly positive. Once the problem had been identified and a fix put in place, we had the time to try and understand what had happened. In this case, we added some extra tests that we’d not been aware we’d needed (hindsight is a wonderful thing).
On a larger scale, when something major happens on the site or in one of our apps, we categorize the severity of the error, identify the key people who understand the area, and then work to ensure that it doesn’t happen again. The more severe the error, the harder we look at it. As I’m sure you’re aware, Facebook has made some spectacular mistakes. So far as I’ve seen, each of them has been taken as an opportunity to understand weaknesses and improve things.
Josiah Renaudin: How do you sort through user feedback, since it’s likely coming in at an unmanageable rate?
Simon Stewart: I’d love to give you a great answer to this, but I’m not sure how we do that. I know that we get some great feedback through the Android alpha and beta channels.
Josiah Renaudin: If testers need to code, what’s the easiest route for testers who have little to no experience in coding to learn this skill?
Simon Stewart: That’s a lovely question, and I think the answer depends on the environment you’re in. If you get along well with your development team and are close to them, ask if they can help write some tests with you. Developers (should) love to code, and they often like to pass on tips and tricks about the IDE and environment they’re using. That implies that you’ll be learning the language that they use, and that’s a good thing --- if you do a great job in showing the value of the tests, there’s a strong chance that the developers will help maintain them.
If you’re not close to someone who’s willing to show you the ropes, I’d pick a well-known scripting language such as Python or Ruby, and tackle a problem that’s driving you crazy with boredom. Remember Larry Wall’s traits of a great developer: aim to be lazy in the best possible sense of the word. Scripting languages don’t tend to have the same sophisticated (and intimidating!) IDEs as most modern programming languages (Java and C# in particular), so it’s really easy to get started. They’re also normally easy to install if they’re not already on your machine, and both Python and Ruby have a wide range of libraries that can handle some of the heavy lifting for you. They also have oodles of documentation in the form of books and answers you can find via a search engine.
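To make that concrete, here is a minimal sketch of the kind of "automate a boring chore" script a first-time coder might write in Python. The scenario is invented for illustration (a tester who manually eyeballs exported records for malformed email addresses); the point is how few lines it takes to do a tedious check tirelessly and without mistakes.

```python
import re

# A deliberately simple format check -- good enough to flag obvious typos,
# not a full validation of the email address grammar.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def find_bad_emails(records):
    """Return (row_number, email) pairs whose email fails the basic check."""
    bad = []
    for row_number, record in enumerate(records, start=1):
        email = record.get("email", "")
        if not EMAIL_RE.match(email):
            bad.append((row_number, email))
    return bad

if __name__ == "__main__":
    rows = [
        {"email": "alice@example.com"},
        {"email": "bob@@example"},       # malformed: double @, no dot
        {"email": "carol@example.org"},
    ]
    for row_number, email in find_bad_emails(rows):
        print(f"row {row_number}: suspicious email {email!r}")
```

A script like this is a throwaway, and that is fine: as the answer above says, you can always discard it and rewrite it with everything you have learned.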
As with all new skills, it can be hard to get started, and it takes practice to become comfortable and confident putting together code. Start small and don’t worry about things not working; after all, at the end of the day, you can always throw the code away and start again, but this time with all the knowledge and experience you’ve gained ready to throw at the problem.
I think that the key thing is to want to know how to code. Without that, there’s little point in making the effort. Go and do something you love to do instead!
Josiah Renaudin: There will be plenty of software testers at our upcoming STARWEST event. What current industry trend do you think they need to pay attention to moving forward if they hope to find success?
Simon Stewart: There are three things I’d watch out for: tighter release cycles, increased automation, and the need to be able to articulate the unique value your style of testing brings to a release in light of those two factors.
In the film Fight Club, there’s a scene where one of the characters explains how the auto industry chooses whether or not to recall a vehicle. It’s an equation that goes something like, “the cost of recall” needs to be less than the “cost of a payment if something goes wrong” multiplied by the “likelihood of a payout being needed.” Automated tests are much like that: the cost of writing and maintaining them (however you measure “cost”) needs to be lower than the cost of not writing them.
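The trade-off described above can be written down directly. This is only a sketch, and all of the numbers are invented for illustration; in practice "cost" is notoriously hard to quantify, which is part of the point being made.

```python
def worth_automating(cost_to_write_and_maintain,
                     cost_of_escaped_bug,
                     probability_of_bug):
    """The recall-style equation: automate when the expected cost of the
    bug escaping exceeds the cost of writing and maintaining the test."""
    expected_cost_of_not_testing = cost_of_escaped_bug * probability_of_bug
    return cost_to_write_and_maintain < expected_cost_of_not_testing

# A cheap test guarding an expensive, likely failure is clearly worthwhile:
print(worth_automating(500, 100_000, 0.10))   # True  (500 < 10,000)

# An expensive, brittle test guarding a rare, minor bug is not:
print(worth_automating(5_000, 2_000, 0.01))   # False (5,000 > 20)
```

Note that maintenance sits on the left-hand side of the inequality, which is why the cost growth described in the next answer can quietly flip a test from worthwhile to not.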
I mention this because one thing that I’ve seen is automated testing starting as something relatively lightweight and gradually becoming harder and harder to maintain. This is partly because there are more and more tests, partly because, for whatever reason, test code isn’t given the same level of care as production code, and partly because people aren’t aggressive enough about removing old, low-signal tests. I think that someone who can help guide an organization to keep the cost of automated tests as low as possible is going to be extremely valuable. So, one of the things you’re going to see is people trying to figure out how to meddle with this equation. One of the things that Facebook does is to only promote automated tests into their regular test runs once they’ve demonstrated stability. We’re ruthless about disabling flaky tests, and equally ruthless about deleting disabled tests. I think you’ll see other companies doing something similar.
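The promote-on-stability policy described above can be sketched in a few lines. This is a hypothetical illustration, not Facebook's actual tooling: the streak threshold and the status names are invented, and real systems would weigh run history far more carefully.

```python
PROMOTION_STREAK = 50  # invented threshold: consecutive passes before a test is trusted

def classify(test_results, streak_needed=PROMOTION_STREAK):
    """Classify a test from its run history (list of booleans, oldest first).

    'promoted'    -- stable long enough to join the regular (blocking) runs
    'flaky'       -- intermittent failures: disable it (then delete it)
    'quarantined' -- too new or too broken to trust yet
    """
    if not test_results:
        return "quarantined"
    recent = test_results[-streak_needed:]
    if len(recent) >= streak_needed and all(recent):
        return "promoted"
    if any(recent) and not all(recent):
        return "flaky"
    return "quarantined"
```

The asymmetry is deliberate: a test has to earn its way into the blocking suite with a long green streak, but a single intermittent failure is enough to mark it flaky, matching the "ruthless about disabling" stance above.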
Josiah Renaudin: Thanks again for your time!
Simon Stewart: It’s been a pleasure. I hope you found some of the answers helpful!
A software engineer at Facebook, Simon Stewart helps build the tooling for testing their mobile applications. Simon is the current lead of the Selenium project, the creator of Selenium WebDriver, co-editor of the W3C WebDriver specification, Facebook's W3C AC representative, and contributor to Selendroid, a mobile implementation of the WebDriver protocol targeting Android. As his experience and current work suggest, Simon is keen on automated testing and views it as instrumental to allowing software development with today's compressed release cycles. The way Simon sees it, testers can spend time doing things more creative than walking through tedious checklists of details. Simon tweets at @shs96c.