Balancing Exploratory and Automated Testing in Agile: An Interview with Matt Attaway

Summary:

Matthew Attaway has worked as a tester, developer, researcher, designer, manager, DevOps engineer, and elephant trainer. He currently manages the open source development group at Perforce Software. In this interview, Matthew talks about automated testing and agile, as well as dealing with excessive test documentation.

JV: We are on. All right, this is Matt Attaway. Matt, thank you for joining us today.

MA: It’s my pleasure.

JV: Why don’t we start off with just having you talk a little bit about yourself, your career, and your experience.

MA: Sure. I’m Matt Attaway. I’ve been here [at Perforce] for almost fourteen years now; it’s actually been my entire career.

JV: Oh, wow.

MA: And I started as an intern, and in 2000 we didn’t have a QA department, so I was one of the first two QA engineers. We started the whole group, and after working on that for six or seven years I started getting into development. I moved into the R&D team and did all the research on how to do version control and how to do software development in general. And then, I guess it was four years ago actually, I moved back to the QA department to lead a team. Right at that same time we were doing our transition, and moving to automation was a big life change in our development organization, so I got to come in right as that was happening and kind of figure out how to make it happen. I had, at the time, just a small team of three people, but we had ten projects that cycled through at random times.

JV: Wow.

MA: Yeah, it was interesting. It was all great when everything came in on time, because we had slotted them out so that every two months or something there would be three products to work on, but every once in a while there would be a backlog and all of a sudden all eleven would just hit us at once.

JV: Oh, yeah.

MA: Wonderful. That was great.

JV: Yeah, let’s probe into that then. We know you’re getting right into the heart of the matter, so let’s talk about this testing. What went on in this project? You all of a sudden had to get agile thrown at you—explain that.

MA: Right. Well, I think it’s the same as at many software companies, where the cool new buzzword at the time … you know, that was a while ago now, but agile was really … it was already hip. A lot of people were already doing it, but we decided that was something that we needed to do to become more responsive to our customers, right? The promise of agile development. And we make a set of software that, historically, our customers only want to consume and they only want releases from us every … just a couple times a year, because it’s a big thing for them to take on and to upgrade and deploy across their organizations.

JV: For people who aren’t aware, what does Perforce make?

MA: We make version control software, generally geared toward large enterprise companies, so tens of thousands of users out there using one system. So yeah, they’re not just looking to take on change on a regular basis. So we ended up in a situation where we were trying to move to an agile development process, because we wanted that rapid iteration and that responsiveness. On the flip side, we had a customer base who expected us to kind of release in a waterfall fashion, so our release teams were all organized around waterfall releases. We were still like, “OK, every six months let’s ship the big thing, but let’s do it agilely inside of that.” The software we deliver isn’t SaaS and it’s not hosted in any way, shape, or form; it’s all delivered to the customer.

A lot of the excitement that’s come up with agile, and now continuous delivery, is targeted at people who can deploy software at a moment’s notice and upgrade everyone in one dose, or do a slow rollout like a Twitter or Facebook. It’s a very different thing when you’re effectively … we don’t actually have boxes, but we’re a boxed software company.

It was kind of an interesting transition. As engineers, we want to do this, because it seems like the right way to make software and we hear a lot of great things about agile process. Also, I think every time you do one of those process changes, it’s kind of like a fad diet. It’s not that the diet actually helps you, it just kicks you out of your bad habits.

JV: For a period of time. For six months you’re looking great and then all of a sudden all heck breaks out.

MA: Exactly. So yeah, it was good for us. It broke us out of some of our old waterfall habits, but waterfall was still there; it was kind of this layer on top of us in release management. So it was a lot to rectify, from a process standpoint: how do we deliver this software, how do we do these cycles and iterations? When you release every two weeks you feel the pressure of “Oh yeah, I really need to build and develop and test and be done in two weeks.” But when you know you really have five months until this feature goes live, you don’t have that same pressure, and it’s interesting.

JV: That’s a very interesting thought, and I’ve talked about this a lot with other people: that idea of putting, all of a sudden, that amount of pressure on your team when they may not be used to that sort of immediate deadline. What is a way you went about reassuring your team or getting them used to that sort of rapid delivery?

MA: Honestly, I don’t know if I ever entirely succeeded at it. What I would say is that, as part of moving to an agile workflow, we started having burn-down charts, so you’re really tracking the development within a sprint, or at least within one of these release cycles. For me as a QA manager, what we tried to do was get ourselves integrated into that flow, so that we could see when five features had dropped and make sure they weren’t sitting in a state waiting for us to complete the testing, so the work could keep moving and really finish out the burn-down chart.

When testing backed up and we didn’t keep on top of that cycle, we’d see these plateaus where work was out of development and in test, and so we had this nice visual indicator that we really needed to step up. On the flip side, though, you look at that sometimes and say, well, yeah, that took you two weeks to develop, but that’s really only like a day for us to test, and my team is working on this other product. I was saying that we had, like, eleven products, so sometimes it was, well, that one’s on fire, so you’re going to say to them, “Wait a minute, we’re going to put that one out and then we’ll come back.”
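(As a rough illustration of the plateau signal described above, here is a minimal sketch in Python; the snapshot format, status names, and point values are hypothetical, not anything from Perforce’s actual tooling.)

    from collections import defaultdict

    def burn_down(snapshots):
        # snapshots: (day, status, points) tuples, one row per story per day
        remaining = defaultdict(int)  # day -> story points not yet done
        in_test = defaultdict(int)    # day -> story points sitting in test
        for day, status, points in snapshots:
            if status != "done":
                remaining[day] += points
            if status == "in test":
                in_test[day] += points
        return remaining, in_test

    def plateaus(remaining, in_test):
        # days where remaining work did not drop while stories sat in test
        days = sorted(remaining)
        return [d for prev, d in zip(days, days[1:])
                if remaining[d] >= remaining[prev] and in_test[d] > 0]

Fed daily story snapshots, plateaus() returns the days where the burn-down flat-lines while work is stuck in test, which is the “step up” signal described here.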

JV: You’ve got to prioritize right.

MA: For me, a lot of it involved just a lot of communicating with the other development leads. Like, “It’s cool, we’ve got this,” and we were trying to build metrics that show trends, so that given this set of data you know we’re good for this; we’ve done this many times before and we’re going to make this work.

It was good to have that visible kind of accountability there in the burn-down chart, but it did take a lot of communicating across teams to be able to explain why there were those small plateaus sometimes. It was, I would say, always stressful for the QA team, and I think it still is even today, and we’ve been doing this for a while now. Back in the waterfall days you would get a big pile of code dropped on you; you were kind of like a pig—you just bathed in it. You’re rolling in this stuff; you get to just play with it.

JV: I like that image a lot of the pig bathing in code.

MA: Well, especially for exploratory testers, that’s kind of what … they like getting something that’s fairly complete, because then you know the work that you’re doing is meaningful. In the agile cycle, sometimes when we do exploratory testing we find that the developer is trying to get that burn-down chart to behave, and so they’re pushing out code as fast as they can. Sure, the automated tests they wrote passed, but they didn’t necessarily do their due diligence to really examine the code and play with it. So as a tester you get handed something that’s either potentially not fully tested or just in progress, like, okay, we’re doing this piece.

My favorite is always, “We’re going to do the functionality for this sprint, and next sprint we’ll handle the error cases.” It’s like, no, no, no, no. I’m a tester; my whole job is to find the error cases. You can’t give me functionality, ask me to sign off on it, and then in two weeks come along and see what the error behavior is.

JV: Yeah, you’re not used to working like that. That’s not your mind state.


About the author

Jonathan Vanian

Jonathan Vanian is an online editor who edits, writes, interviews, and helps turn the many cranks at StickyMinds, TechWell, AgileConnection, and CMCrossroads. He has worked for newspapers, websites, and a magazine, and is not as scared of the demise of the written word as others may appear to be. Software and high technology never cease to amaze him.
