Have I been part of a botched release? I’m an engineer on the internal tools team, so my releases only impact other software engineers. Having said that, yes, of course things have gone wrong. As part of my work on Buck (our build tool), I’ve accidentally caused build times to bloat, and on one memorable occasion (fortunately well before release!) increased the size of every Android app we build by several tens of megabytes.
As I mentioned in my previous answer, the experience was actually fairly positive. Once the problem had been identified and a fix put in place, we had the time to try and understand what had happened. In this case, we added some extra tests that we’d not been aware we’d needed (hindsight is a wonderful thing).
On a larger scale, when something major happens on the site or in one of our apps, we categorize the severity of the error, identify the key people who understand the area, and then work to ensure that it doesn’t happen again. The more severe the error, the harder we look at it. As I’m sure you’re aware, Facebook has made some spectacular mistakes. So far as I’ve seen, each of them has been taken as an opportunity to understand weaknesses and improve things.
Josiah Renaudin: How do you sort through user feedback, since it’s likely coming in at an unmanageable rate?
Simon Stewart: I’d love to give you a great answer to this, but I’m not sure how we do that. I know that we get some great feedback through the Android alpha and beta channels.
Josiah Renaudin: If testers need to code, what’s the easiest route for testers who have little to no experience in coding to learn this skill?
Simon Stewart: That’s a lovely question, and I think the answer depends on the environment you’re in. If you get along well with your development team and are close to them, ask if they can help write some tests with you. Developers (should) love to code, and they often like to pass on tips and tricks about the IDE and environment they’re using. That implies that you’ll be learning the language that they use, and that’s a good thing --- if you do a great job in showing the value of the tests, there’s a strong chance that the developers will help maintain them.
If you’re not close to someone who’s willing to show you the ropes, I’d pick a well-known scripting language such as Python or Ruby, and tackle a problem that’s driving you crazy with boredom. Remember Larry Wall’s traits of a great developer: aim to be lazy in the best possible sense of the word. Scripting languages don’t tend to have the same sophisticated (and intimidating!) IDEs as most modern programming languages (Java and C# in particular), so it’s really easy to get started. They’re also normally easy to install if they’re not already on your machine, and both Python and Ruby have a wide range of libraries that can handle some of the heavy lifting for you. They also have oodles of documentation in the form of books and answers you can find via a search engine.
As with all new skills, it can be hard to get started, and it takes practice to become comfortable and confident putting together code. Start small and don’t worry about things not working; after all, at the end of the day, you can always throw the code away and start again, but this time with all the knowledge and experience you’ve gained ready to throw at the problem.
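To make that advice concrete, here is a hypothetical example of the kind of small, boredom-killing script a first-time coder might write in Python: finding files with duplicate contents in a directory. The task and function names are my illustration, not something from the interview.

```python
import hashlib
import os

def file_hash(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(directory):
    """Group files under `directory` by content hash; return the groups
    that contain more than one file (i.e., the duplicates)."""
    by_hash = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            by_hash.setdefault(file_hash(path), []).append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

A script like this is throwaway by design: run it, learn from it, and rewrite it better next time, exactly in the spirit of the advice above.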
I think that the key thing is to want to know how to code. Without that, there’s little point in making the effort. Go and do something you love to do instead!
Josiah Renaudin: There will be plenty of software testers at our upcoming STARWEST event. What current industry trend do you think they need to pay attention to moving forward if they hope to find success?
Simon Stewart: There are three things I’d watch out for: tighter release cycles, increased automation, and the need to be able to articulate the unique value your style of testing brings to a release in light of those two factors.
In the film Fight Club, there’s a scene where one of the characters explains how the auto industry chooses whether or not to recall a vehicle. It’s an equation that goes something like this: the “cost of a recall” needs to be less than the “cost of a payout if something goes wrong” multiplied by the “likelihood of a payout being needed.” Automated tests are much like that: the cost of writing and maintaining them (however you measure “cost”) needs to be lower than the cost of not writing them.
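That recall-style equation can be sketched in a few lines. The function name and the numbers below are purely illustrative, not any real company’s policy:

```python
def worth_automating(cost_to_automate, cost_of_failure, probability_of_failure):
    """Recall-style equation: a test is worth writing when its cost is
    lower than the expected cost of the failure it would catch."""
    expected_failure_cost = cost_of_failure * probability_of_failure
    return cost_to_automate < expected_failure_cost

# A cheap test guarding against a likely, expensive failure is worth it;
# the same test guarding against a very unlikely failure may not be.
worth_automating(5, 1000, 0.01)   # expected failure cost 10 > 5
worth_automating(5, 1000, 0.001)  # expected failure cost 1 < 5
```

The hard part in practice is not the arithmetic but estimating the inputs, which is exactly why maintenance cost matters so much.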
I mention this because one thing that I’ve seen is automated testing starting as something relatively lightweight and gradually becoming harder and harder to maintain. This is partly because there are more and more tests; partly because, for whatever reason, test code isn’t given the same level of care as production code; and partly because people aren’t aggressive enough about removing old, low-signal tests. I think that someone who can help guide an organization to keep the cost of automated tests as low as possible is going to be extremely valuable. So, one of the things you’re going to see is people trying to figure out how to tilt this equation in their favor. One of the things that Facebook does is to only promote automated tests into their regular test runs once they’ve demonstrated stability. We’re ruthless about disabling flaky tests, and equally ruthless about deleting disabled tests. I think you’ll see other companies doing something similar.
Josiah Renaudin: Thanks again for your time!
Simon Stewart: It’s been a pleasure. I hope you found some of the answers helpful!
A software engineer at Facebook, Simon Stewart helps build the tooling for testing their mobile applications. Simon is the current lead of the Selenium project, the creator of Selenium WebDriver, co-editor of the W3C WebDriver specification, Facebook's W3C AC representative, and contributor to Selendroid, a mobile implementation of the WebDriver protocol targeting Android. As his experience and current work suggest, Simon is keen on automated testing and views it as instrumental to allowing software development with today's compressed release cycles. The way Simon sees it, testers can spend time doing things more creative than walking through tedious checklists of details. Simon tweets at @shs96c.