What Pokémon Go and Overwatch Can Teach Us about Scalability: An Interview with Jonathon Wright

Summary:

In this on-site interview from STARWEST 2016, Jonathon Wright, the director of software engineering at CA Technologies and a speaker at the conference, joins Josiah Renaudin to discuss artificial intelligence, scaling for load, and virtual reality.

Josiah Renaudin: When you look at a major video game development studio like Blizzard, you see that they just released Overwatch, a multiplayer game that already has well over 10 million users. What do you think is the correct solution for anticipating and dealing with that type of load? Same with Pokémon Go: for a stretch, pretty much everyone with a phone had it. Or you release something like Minecraft and have no idea that suddenly it's going to be the most popular thing in the world. How do you anticipate and test for something like that?

Jonathon Wright: Absolutely. I've just come back from presenting across APAC in Australia, where I did a presentation on Overwatch. It was quite interesting for a number of reasons, the first being that I built it around Doom, which had been released two weeks before.

Overwatch was the second example I went through, and I was trying to relate it to how we've done things historically in testing as well. The big thing I said about Overwatch was that ten million users had played it in the first week, and that more than 2,016 years of play time, matching the year, was consumed in that same week.

I was asking, how can you possibly test something at that scale? The whole point of the talk was comparing Doom now with Doom back when it was released. It might not sound as interesting here as it was in the room, but 1994 was the first time they ran a cloud-style multiplayer platform for Doom. They actually had the servers listed, which ones you could dial into, over IP in the old days when you were using a dial-up modem.

Equally, if you remember back to those days, you still had bots: bots walking around, bots talking about their aiming ability, whether or not they had lag, which is very similar to network virtualization. Part of it was to give them a disadvantage or to take one away. There was also a whole stack of console commands, apart from the walk-through-walls mode. The point is that even ten, fifteen, twenty years ago they were using these techniques; none of it is new. We're talking about bot frameworks now with Microsoft. I've been doing work with Domino's, and they've replaced all their online support people with the Microsoft Bot Framework.

We're starting to see systems that can actually go through and exhaustively test an application, especially when you bring in video analytics and image recognition, which is what we were talking about with the Google APIs. With a simple call, a bot can understand where it is in Minecraft by looking at the screen and identifying, "I'm stuck in front of a tree, so I can't physically go through it," whether the tree is a virtual tree or a picture of a tree. Part of it now is those APIs.
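To make that concrete, here is a minimal sketch of the idea using the Google Cloud Vision client library, which is one of the Google APIs that does this kind of image labeling. The game-screenshot workflow and file name around it are illustrative assumptions, not anything from the interview:

```python
# A minimal sketch: ask an image-recognition API what the player is
# looking at in a screenshot. The Vision client calls are real; the
# Minecraft workflow around them is hypothetical.
from google.cloud import vision

def describe_scene(screenshot_path: str) -> list[str]:
    """Return the labels the Vision API assigns to a game screenshot."""
    client = vision.ImageAnnotatorClient()
    with open(screenshot_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

# Hypothetical check: is the agent blocked by a tree, virtual or otherwise?
labels = describe_scene("minecraft_frame.png")
if "Tree" in labels:
    print("Blocked: tree ahead, pick another route.")
```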

I think we're going to get more and more sophisticated, and part of what we do is on the modeling side of things: how do we possibly test all these possible routes? I sat down with Paul Gerrard about three weeks ago, in the case of Domino's, and we walked through the model of their pizza-ordering flow. I don't know if you've heard of Slack?

Josiah Renaudin: Yeah, we use it at TechWell.

Jonathon Wright: Yeah. He's created a bot in there, which is an exploratory testing bot. The exploratory testing bot sits in there and you say, "I'm going to do some exploratory testing." It says, "Where are you?" It doesn't know where you are. You have to describe where you are. In this case, for the Minecraft thing, we're saying, "I'm still in front of a tree." Then it'll turn around, it'll suggest that it thinks you are here in the model, and you can say, "Where should I go?" It'll say, "Go to the haunted house." Then if you start walking over to the haunted house, it'll say, "Actually, your friend Paul is in the haunted house at the moment testing that component. Why don't you head over there?" That way you don't get people using duplicate paths.
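A hypothetical sketch of the bookkeeping such a bot might do under the hood: the world as a graph model, plus a record of which teammate has claimed which node, so two testers don't walk duplicate paths. All the names and the model itself are illustrative:

```python
# Illustrative model of the game world as a graph of locations.
MODEL = {
    "spawn": ["forest", "haunted_house"],
    "forest": ["tree", "haunted_house"],
    "tree": [],
    "haunted_house": ["attic"],
    "attic": [],
}

claimed = {"haunted_house": "Paul"}  # nodes teammates are already testing

def suggest_next(current: str, visited: set) -> str | None:
    """Suggest an unvisited, unclaimed neighbour of the tester's position."""
    for node in MODEL.get(current, []):
        if node in visited:
            continue
        if node in claimed:
            print(f"Your friend {claimed[node]} is already testing {node}.")
            continue
        return node
    return None

print(suggest_next("spawn", visited=set()))      # -> 'forest'
print(suggest_next("forest", visited={"tree"}))  # Paul has the haunted house
```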

All it's doing is learning the session: understanding a model, doing the discovery or the exploratory testing, mapping and capturing that, and then using adapters to capture information. If you go through and there's a stack trace saying something's wrong, or something hasn't rendered properly, or you see something on the screen or in one of the logs, it can capture that information so the issue can be reproduced.

We're using AI technology even in exploratory testing. It won't be much longer before we've learned the paths and those can be fully autonomous. I've spent the last fifteen years focused on automation. The last time I spoke here, in 2012, I talked about automated testing in the cloud. Have things moved on? We use the cloud now because it has the compute power; we're able to crunch massive amounts of information and do all the stuff we're talking about here. It's not going to be much longer before we start bringing some of that capability back here. We're running this stuff continuously.

One of the new things we're doing at the moment for DevOps, for instance, is this concept of always-on testing. If it finds an issue, instead of just falling over, it can say, "Actually, I've got time; I've got no other CPU jobs. I'll run another six tests with various other data, swapping components like network virtualization components in and out, and see if I can find more variations of this issue."

Especially when you look at a release pipeline: it just pushes stuff through the pipeline, and then it stops or it breaks. Part of using fuzzy logic and all this new understanding is that it knows it's breaking at this component. What other routes can it take? What else can it be doing with the time it's got? It's always-on testing. You don't have to tell it, you can guide it, but it should be able to go and do it itself.
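A sketch of that always-on idea under assumed names: when a test fails and the pipeline has idle capacity, rerun the case with varied data and with network virtualization toggled, looking for more reproductions instead of just stopping. The runner, the failure rule, and the data sets are all hypothetical stand-ins:

```python
import itertools

def run_test(case: dict, virtualize_network: bool) -> bool:
    """Hypothetical runner; imagine this drives the real system or a stub."""
    # Stand-in logic so the sketch runs: fail on one particular combination.
    return not (case.get("qty") == 0 and not virtualize_network)

def always_on(base_case: dict, datasets: list, idle_slots: int = 6) -> list:
    """On spare capacity, explore variations of a failing case."""
    variations = itertools.product(datasets, (True, False))
    findings = []
    for data, nv in itertools.islice(variations, idle_slots):
        passed = run_test({**base_case, **data}, virtualize_network=nv)
        if not passed:
            findings.append((data, nv))  # more reproductions of the issue
    return findings

print(always_on({"item": "pizza"}, [{"qty": 0}, {"qty": 1}, {"qty": -1}]))
```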

Part of what my presentation tomorrow is about is this whole concept that everything is just a node. To me, this is quite a scary concept, because, and I use the phrase, it's world peace for the automation side of things. We've had long battles over the fact that UI testing is fragile. That goes back to the '90s, when I started in automation. I always said you've got to abstract a level above. You can't talk about the technology, whether it's Flex, Silverlight, the DOM, HTML, whatever it may be. You've got to abstract a level above.

You've got to have some kind of DSL, which is fine, nothing new. I used to talk about something called context-sensitive validation, which means it understands the context of what it's doing. It knows it's a button and it'll interact with it. It doesn't care whether it interacts using Selenium 3 or some other third-party adapter. It just knows it has to interact with the object.
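A minimal sketch of that abstraction, under assumed names: the test talks about a logical "button," and an adapter layer decides whether Selenium or something else performs the click. The adapter classes and locator map are illustrative; only the Selenium calls inside are real API:

```python
from abc import ABC, abstractmethod

class UIAdapter(ABC):
    """The test script speaks this DSL; adapters do the driving."""
    @abstractmethod
    def click(self, logical_name: str) -> None: ...

class SeleniumAdapter(UIAdapter):
    def __init__(self, driver, locators: dict):
        self.driver, self.locators = driver, locators

    def click(self, logical_name: str) -> None:
        from selenium.webdriver.common.by import By
        self.driver.find_element(
            By.CSS_SELECTOR, self.locators[logical_name]).click()

class FakeAdapter(UIAdapter):
    """Stands in when the UI isn't there; records intent instead."""
    def __init__(self):
        self.actions = []
    def click(self, logical_name: str) -> None:
        self.actions.append(("click", logical_name))

def place_order(ui: UIAdapter) -> None:
    # The test knows "checkout" is a button; it doesn't care who clicks it.
    ui.click("checkout")

ui = FakeAdapter()
place_order(ui)
print(ui.actions)  # [('click', 'checkout')]
```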

Equally, one of the scary things I've been talking about with API testing recently is that you're using an API to drive Selenium, and Selenium is just an API to access the DOM. The DOM is just a model: lots of nodes structured in a format that is readable and identifiable, with W3C WebDriver now being the standard for browsers. All those other third parties, whether it's Perfecto using Selenium's remote driver to run a browser on a mobile phone, or a tool trying to recognize a native application on iOS versus Android, it's still just a node.
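To make the "it's all just an API" point concrete, here is a sketch that drives a browser with raw W3C WebDriver protocol calls over HTTP, no Selenium client library at all. It assumes a driver such as chromedriver is already listening on localhost:9515:

```python
import requests

BASE = "http://localhost:9515"

# Opening a session is just a POST with capabilities.
session = requests.post(f"{BASE}/session", json={
    "capabilities": {"alwaysMatch": {"browserName": "chrome"}}
}).json()["value"]["sessionId"]

# Navigation is another POST.
requests.post(f"{BASE}/session/{session}/url",
              json={"url": "https://example.com"})

# Finding an element is just asking for a node in the DOM tree.
element = requests.post(f"{BASE}/session/{session}/element",
                        json={"using": "css selector", "value": "h1"}).json()
print(element)

requests.delete(f"{BASE}/session/{session}")
```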

Part of cutting out the middleman is saying that you can drive the UI through the API. You don't actually need to see the application to test it. Equally, it doesn't physically have to be there. That's the scary thing when you come to service virtualization, and I talked a lot about this at a Gartner conference: you can do all the testing without the system, without even a single line of code being written, because you understand how FX or derivatives work. It's just a flow; it's a lot of business rules. For testing APIs, whether it's a REST call or some kind of microservice, it doesn't matter.

Part of it is that you're proving the business logic. The same aspects apply here: just as with an API, where you use service virtualization to mock out the service, the same applies for the UI. You don't physically need to have the UI, because we all know how model-view-controller works. You know it's going to generate a DOM, you know it's going to have a certain type of container model, and it's going to have a certain item or ID that can be recognized.
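A sketch of that service-virtualization point: prove a business rule without the real service or the real UI being present. The pricing rule, endpoint, and discount threshold are hypothetical; the mocking mechanics use Python's standard library:

```python
from unittest import mock
import requests

def order_total(items: list) -> float:
    """Business rule under test: sum line prices, apply a bulk discount."""
    resp = requests.get("https://api.example.com/prices",  # hypothetical URL
                        params={"skus": [i["sku"] for i in items]})
    prices = resp.json()
    total = sum(prices[i["sku"]] * i["qty"] for i in items)
    return total * 0.9 if total > 100 else total

# Virtualize the service: no UI, no backend, just the contract.
with mock.patch("requests.get") as fake_get:
    fake_get.return_value.json.return_value = {"pizza": 12.0}
    assert order_total([{"sku": "pizza", "qty": 10}]) == 108.0  # 120 * 0.9
```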

Therefore, the divides are finished, really. We're talking about testing exhaustively, whether that's through the UI layer, the application layer, or the component layer; testing shims and stubs down at the unit-test level, NUnit, whatever it may be, Node.js, Node-RED. We're now getting to the point where it doesn't matter. It isn't about providing a UI tool or an API tool. It's about how we build a model to test our systems, and that model has to continuously evolve.
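One last sketch of that closing idea: the test asset is a model, and routes through it are generated rather than hand-written, with each step free to execute at whatever layer (UI, API, unit) fits. The ordering-flow model itself is illustrative:

```python
# Illustrative model of an ordering flow as a graph of states.
MODEL = {
    "start": ["menu"],
    "menu": ["customize", "checkout"],
    "customize": ["menu"],
    "checkout": ["paid"],
    "paid": [],
}

def all_paths(node="start", path=None, max_len=6):
    """Enumerate every route through the model up to a length bound."""
    path = (path or []) + [node]
    if not MODEL[node] or len(path) >= max_len:
        yield path
        return
    for nxt in MODEL[node]:
        yield from all_paths(nxt, path, max_len)

for route in all_paths():
    print(" -> ".join(route))
```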

Jonathon Wright is a strategic thought leader and distinguished technology evangelist specializing in emerging technologies, innovation, and automation, with over 15 years of international commercial experience within global organizations. He is currently director of digital assurance at CA Technologies in Oxford, UK. His practical experience and leadership in DevOps and digital assurance have made him an in-demand speaker at international conferences including Gartner, Unicom, HPE Discover, Oracle Digital Forum, STARWEST, STAREAST, and EuroSTAR, where he was awarded Innovator of the Year (2014). He is the author of several books on test automation as well as numerous online webinars, podcasts, and training courses on automation and API testing. With Jonathon's practical insights into the real-world application of the core principles and methodologies underpinning DevOps and digital assurance, his presentations are not to be missed.

