Test throughout the Entire Development Process: An Interview with Andreas Grabner

[interview]

Josiah: All right. Today, I'm joined by Andy Grabner, a technology strategist at Compuware APM. Andy, thank you very much for joining us today.

Andreas: You're welcome. I'm happy to be here.

Josiah: Great! First off, could you tell us just a bit about your experience in the industry?

Andreas: Sure. I've been working in the performance industry for the last fifteen years. I may not look that old, but I actually started my career with a company called Segue Software. Back then, we built load-testing tools like Silk Performer, which a lot of people may know.

I then switched over to other products like Silk Test, which was on the functional testing side. Seven years ago, I joined a company called dynaTrace, which is now part of Compuware APM, where I'm still working. Basically, the problem we try to solve is this: instead of breaking applications with load-testing tools, we wanted to figure out what is actually wrong inside these applications when they break.

Actually, I followed one of my colleagues who founded dynaTrace. He used to be my colleague at Segue. I followed him because he built dynaTrace and I thought it would be a perfect fit. We had been breaking applications for so long, and we had built up a lot of expertise in knowing which metrics to look at and how to do load testing right.

Then we wanted to know: what do we need to look at, and what do we need to tell our customers to look for within the application? Why it may break, and what they did wrong in their architectural decisions.

Overall, I've been in the industry for almost fifteen years now and have moved through many different roles. I started as a tester, and I've been an architect, a developer, and a product manager. Now I work as what we call a technology strategist: I try to share my knowledge with the people out there through blogging and through the conferences where I'm talking this year. We also work with customers to make them successful when it comes to performance management.

Josiah: You'll be talking at the upcoming STARWEST event in Anaheim. Your talk is called, "Checking Performance along your Build Pipeline." A lot of what you'll be talking about are the small changes along the process.

Why do you think that people tend to ignore the impact small changes can have on performance and scalability?

Andreas: Developers like to build new features. Obviously, they're under big pressure to come out with new features more frequently. I think there's still the thought, "Hey! In the end, there's somebody who is testing my software anyway. I'd rather spend my time focusing on what I'm paid for, which is basically building new features, and somebody later down the pipe will take care of load testing and then tell me if anything is wrong."

Unfortunately, the way we develop software, the longer you wait, the longer it takes until you actually get into the testing phase. More of these small changes add up to a bigger problem.

The real reason is this thinking of, "Hey, my role is developing and I'm developing new features. Your role is tester, so you're going to test it and then tell me what's wrong." That's a mentality we need to change, and we need to educate people more. It's not that there's testing at the end; testing needs to be continuous, and everyone has to keep up with it.

Josiah: What do you think are some of the benefits of catching these smaller issues earlier on in development rather than just trying to take care of them all at the very end?

Andreas: I think there are multiple benefits. First of all, for developers: suppose they find, "Hey, this code change that we just did had a significant impact." For instance, they're now executing twice as many SQL statements to get the same results. If they learn about this when changing the code, because they, for instance, used a framework in a different way, and they see that this small change has a strange impact, like executing too many queries against the database or allocating more memory than necessary, then developers will learn, while they develop new features, how to better use the frameworks and tools that they're using. They educate themselves with that.

On the other side, if you catch problems early on, the load-testing guys don't have to spend the first load test finding the low-hanging fruit of the basic problem patterns. They can really focus on the tough problems instead of, once again, breaking the application with only five users rather than the 500 users they should be simulating.

I think it's a benefit for developers, who will learn how to develop their software better, and testers can really focus their expertise on real, large-scale load testing because they don't have to deal with applications that break under very small load.

Josiah: Can you give some real-world examples that you've experienced of major problems that were caused by just small changes that could have been taken care of early, but ended up snowballing and becoming something bigger?

Andreas: Yeah, I'll give you two examples. One is actually from our own software. Internally, we use our own backend services to keep track of customers and their licenses. We recently introduced a free trial version of our product that you can download, and obviously we also have reports. I, for instance, can go in and see who downloaded the free trial after my talk at a conference.

Our developers implemented that report, I'm sure with the best intentions to make a nice report. Basically, what they did from one build to the next was switch to a new version of Hibernate, one of the very popular object-relational mappers out there. They spent no time figuring out how to best configure Hibernate for their own purposes.

Now, in order to get my report, instead of executing a single SQL statement that goes to the database and gives me the free trials that signed up after my last talk, Hibernate went off and executed 2,000 SQL statements for the same report. To the developers, that's a small change, because they just flipped to a new version of Hibernate; from a functional perspective, everything still stayed the same. But in the end, if you need to execute 2,000 times more database statements to get to the same results, it's a small change with a very big impact.
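The Hibernate story Andreas tells is the classic "N+1 query" pattern. Here is a minimal Java sketch of it — not real Hibernate; the repository, class, and method names are invented for illustration — that counts the "SQL statements" issued by a fake data layer. A lazy per-row fetch turns one logical report into 1 + N statements, while a batched fetch stays constant:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical sketch of the N+1 pattern: a fake repository that counts
// the "SQL statements" it issues instead of talking to a real database.
class TrialReport {
    static int statementsIssued = 0;

    // stands in for: SELECT id FROM free_trials  (one statement)
    static List<Integer> loadTrialIds() {
        statementsIssued++;
        return IntStream.range(0, 100).boxed().collect(Collectors.toList());
    }

    // stands in for: SELECT ... WHERE trial_id = ?  (one statement PER ROW
    // when each row's details are fetched lazily)
    static String loadCustomer(int trialId) {
        statementsIssued++;
        return "customer-" + trialId;
    }

    // stands in for: SELECT ... JOIN ... or WHERE trial_id IN (...)
    // (one statement for all rows)
    static List<String> loadAllCustomers(List<Integer> ids) {
        statementsIssued++;
        return ids.stream().map(i -> "customer-" + i).collect(Collectors.toList());
    }

    // Naive report: 1 query for the ids, then N lazy fetches -> 1 + N statements.
    static int naiveReportStatements() {
        statementsIssued = 0;
        for (int id : loadTrialIds()) loadCustomer(id);
        return statementsIssued;
    }

    // Batched report: 2 statements no matter how many rows there are.
    static int batchedReportStatements() {
        statementsIssued = 0;
        loadAllCustomers(loadTrialIds());
        return statementsIssued;
    }
}
```

In real Hibernate, the equivalent fix is usually an explicit `JOIN FETCH` in the query or a batch-fetching configuration, rather than relying on default lazy, per-row loading.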

The other example I wanted to talk about was very prominent last year in the US: the healthcare.gov example. When that website went down, we did some analysis, and the very small thing they forgot was basically some best practices on web performance optimization. The first version they released had fifty different JavaScript files on the page: many files, not merged together. These are just small things, but developers kept adding more JavaScript features to the page because they needed to download different libraries and plugins.

What they forgot to do was follow some best practices and merge those files together. Again, on their desktops everything worked fine. Later, in production, when millions of US folks tried to get to that site, instead of downloading one JavaScript file they had to download fifty JavaScript files, and that was basically not possible, because the pipes to the web servers were just clogged and not able to handle the load.
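The fix Andreas describes, merging many script files into one download, can be sketched in a few lines. This is a hypothetical, minimal bundler (real projects would use a build tool and also minify the result), shown only to make the point that concatenation turns N browser requests into one:

```java
import java.util.List;

// Hypothetical sketch: concatenate many JavaScript sources into one bundle,
// so the browser makes a single request instead of one request per file.
class ScriptBundler {
    static String bundle(List<String> sources) {
        StringBuilder out = new StringBuilder();
        for (String src : sources) {
            out.append(src);
            out.append('\n'); // keep sources separated; real bundlers also guard with ';'
        }
        return out.toString();
    }
}
```

With fifty files, the unbundled page costs fifty round trips before the first paint; the bundled page costs one, which is exactly the difference that showed up under production load.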

Josiah: It's crazy to think that something of that much importance, at that big of a scale, could have been fixed by just a few small things early on in development. Speaking of quality, I'd like to transition a little bit into mobile. Do you think the quality bar for mobile apps is currently high enough? Are apps launching on phones and tablets with enough testing to back them up, or do you think developers are kind of leaning on updates to be a kickstand for their apps?

Andreas: That's a good question. I think we are in a transition phase right now. Obviously, there are a lot of mobile developers out there building mobile web applications. We also know that the majority of apps... I don't want to give a percentage, but I would assume 90 percent of all the apps out there have very bad ratings. Nobody's using them because, in most cases, they're bad quality.

They definitely get bad ratings, and therefore all the investment that went into developing these apps is basically wasted.

I think people are learning now, with more and more competition in the mobile app market, that it is crucial to have a good-quality product. One example from this year was the FIFA World Cup, the soccer World Cup: there was the official FIFA World Cup app. We looked at this app two weeks prior to the World Cup, and it got one of the worst ratings I've ever seen for an app. Obviously, FIFA had the advantage of being the only one that could produce an official app, but I think a lot of the users still went with one of the apps that somebody else built, because it just worked. FIFA probably lost a lot of revenue they could have made with ads in there.

I think people understand this more and more now. It's a very competitive market: if you build bad-quality apps, you get bad ratings, much faster than before, and then nobody will ever look at your app; they'll just go to the competition.

I think things are changing now. People are understanding that it's not enough to push something out; when you push something out, it has to have quality. Otherwise, you'll lose your reputation, and sometimes you don't get a second chance.

Josiah: Absolutely. I don't want to give away too much of your STARWEST talk, but what one metric that you're going to share during your discussion do you think will surprise most testers and developers?

Andreas: Actually, I think I already took away a little bit of the surprise, because I already talked about it: the wrong thing that we recognize with every customer we interact with. It is the number of database statements being executed for a single action on a page. Sometimes it's just really hard for some people to understand what actually happens within the application, because as developers they don't go to the database directly.

Most developers, I would assume, never write a single SQL statement; they just use frameworks to do that work for them. They have no clue what's going on under the hood. So customers come to us and say, "Hey, we need your help. Our applications don't perform well. We know we waited too long with all the testing and all that stuff."

We look at that and analyze the software, and it's typically the sheer number of queries against the database: a number of duplicated statements fetching the same data over and over again, not using caching. That's a key metric for me. If I can chime in with another metric, more for the web developers: here we typically see pages overloaded with too many images, too many JavaScript files, too many resources on a single page that need to be downloaded in order to display the initial page content, the initial message that people want to push out to the end users coming to their website.
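One way to see the "duplicated statements, no caching" pattern he mentions is a tiny memoizing wrapper. This is a hypothetical sketch (invented class, in-memory results, not a real database driver) showing that a cache keyed by the SQL text executes each identical statement only once:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: cache query results by SQL text so the identical
// statement is not re-executed against the database over and over.
class QueryCache {
    int executions = 0; // how often we actually "hit the database"
    private final Map<String, List<String>> cache = new HashMap<>();

    List<String> query(String sql) {
        return cache.computeIfAbsent(sql, s -> {
            executions++;                   // only runs on a cache miss
            return List.of("row-for:" + s); // stand-in for real result rows
        });
    }
}
```

A real application would lean on the ORM's query cache or second-level cache and deal with invalidation; the point here is that the duplicate-statement count is the metric worth watching in the first place.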

I know we all live in a world where we want to leverage the web, and everything is bright and shiny, Web 2.0 and Ajax. But in the end, it's about the end user, who is spoiled now by Google and other great websites that work really well. People want fast performance, so overloaded pages are not good. The number of resources on a page that need to be downloaded in order to get the first visual impression is a key metric that I want to keep low.

Josiah: Fantastic. Well, I really do appreciate your time, Andy. It's been really nice talking to you and meeting you. I'm looking forward to actually meeting you in person and hearing more of what you say about all this at STARWEST in October.

Andreas: Looking forward to it, too.

Josiah: All right. You have a great day.

Andreas: Thank you, you too. Bye. 

Andreas Grabner

Andreas Grabner has more than fifteen years of experience as an architect and developer in Java, .NET, and Web 2.0 with a strong focus on application performance. At Compuware APM (formerly dynaTrace) Andi is a technology strategist, helping companies improve their applications’ performance across the development lifecycle by embracing ideas of continuous delivery and DevOps. He is a frequent speaker on software performance, testing, and architecture topics at technology conferences including Velocity, STAREAST, STARWEST, and JavaOne. Andi regularly publishes articles and blogs on apmblog.compuware.com. Before dynaTrace, Andi was an engineer and product manager for Segue Software and Borland on their Silk Product Line.
