Pivoting Organizations to Agile Testing: An Interview with Howard Deiner


JV: What's been the response like with organizations when you tell them to pivot toward agile testing? Is there some resistance?

HD: Yeah, to be honest with you, it seems like an easy message. I'm thinking from my developer kind of stance. But too many times, management also needs to make that pivot into leadership. And many times it takes an act of faith, a bit of courage, to jump over that chasm and say, "You know what? Let's give it a whirl." Honestly, organizations that do it never look back. Developers who have never been exposed to things like test-driven development and acceptance test-driven development, once they see it, they never look back. It's infectious.

JV: Does it matter what industry an organization is in, or do you just see it sort of across the board?

HD: Look, I've worked in a number of different industry sectors, and I've had really great results in embedded systems, real-time stuff, things that are non-deterministic even, all the way through more traditional IT development. Like I said, it's a message that most developers naturally want to hear. It's a natural thing to want to make a little bit of change and test that, then make a little bit more change and test that. Then you make it a little more concrete and say, "Well, actually, there's a process here. If we're going to be doing this little bit of testing along the way, can't we make it repeatable? Can't we make it fast?" And all these things start to intertwine.
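To give a feel for that rhythm, here is a minimal sketch in Python with pytest of the "small change, then a test" loop Deiner describes. The Cart class and its behavior are hypothetical, invented only for illustration; the point is that each tiny change is covered by a test that runs in milliseconds and can be repeated on every build.

```python
# A minimal sketch of the "small change, then a test" rhythm.
# The Cart class is hypothetical, used only to illustrate the loop.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


# Step 1: a small, fast test for the first bit of behavior.
def test_empty_cart_total_is_zero():
    assert Cart().total() == 0


# Step 2: make a little more change, and test that too.
def test_total_sums_item_prices():
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    assert cart.total() == 15.00
```

Running `pytest` after every small change keeps the feedback loop short: because the tests are fast and repeatable, there is no reason to save the checking for the end.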

JV: Looking at your website, it's interesting to see some of the presentations. You have one that outlines the traditional manufacturing model and how that doesn't really fly in today's testing environments. Can you explain that? Testing in small batches as compared to the old-school methodologies.

HD: Oh, absolutely. You know, that comes out of lean manufacturing, the production sort of work, and people forget that if you can get rid of waste inside the system, you're going to get better results out of it. I've been running a new exercise called "Death by whip."

JV: That sounds interesting.

HD: Yeah, it actually has to do with Legos, binary numbers, plastic bags—you know, the traditional sort of agile exercises. We simulate in one scenario what manufacturing would look like if we had stations, and then we pull all the things back together at the end, which is kind of what waterfall does. We send them off to the silos, get our results at the end, and then try to put them all together. Usually people don't make very much with that. Then we have another scenario where we set up an assembly line and try to push work onto it, and that gets nice and messy. I love that part of the exercise. By the time we tell people to single stream and get really small batches, they actually find out that they're a lot more efficient.

When you start to ask why small batches work so well with software development and testing in general, it comes down to getting less defect waste. There's a quasi-exponential curve where the cost of defect remediation is really, really high if you find stuff at the end. As you work your way down the curve, instead of testing at the end, if we test during integration, there's still some waste involved because by then we don't remember what we were doing. And as we keep pushing testing closer and closer to when the error actually occurs, when we actually code something, make a mistake, and say, "Oh, my god, we had a side effect"—imagine that—we find that defect waste goes down.

JV: And what exactly could constitute a defect waste? What's an example of something that you could find?

HD: Let's take something that spans an entire software stack. Let's say we have a three-tier type of architecture. We've got some back-end database kind of stuff going on. We have a middle layer with logic. And then we have a presentation-type layer. And we make a change, say adding a new column that we need to bring all the way up through the stack. We may find that making that change caused something really obscure that used to work to stop working. And the cost of doing a complete regression test is usually really high, so people tend not to do regression tests, or they do them only at the end when they're under time pressure.
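As a rough illustration of the kind of automated regression check that makes this cheap to rerun, here is a sketch in Python. The three tiers are collapsed into simple classes, and the names (CustomerRepository, CustomerService, the new loyalty_tier column) are hypothetical; the idea is that the old, "obscure" behavior stays pinned down by a test while the new column is threaded through the stack.

```python
# Hypothetical slice of a three-tier system: a repository (data layer),
# a service (logic layer), and a regression test that protects existing
# behavior while a new column is added.

class CustomerRepository:
    def __init__(self):
        # Existing rows; "loyalty_tier" is the newly added column.
        self._rows = [
            {"id": 1, "name": "Ada", "active": True, "loyalty_tier": "gold"},
            {"id": 2, "name": "Bob", "active": False, "loyalty_tier": None},
        ]

    def find_active(self):
        return [row for row in self._rows if row["active"]]


class CustomerService:
    def __init__(self, repo):
        self.repo = repo

    def active_customer_names(self):
        # Pre-existing behavior that must keep working after the
        # schema change: only active customers are listed.
        return [row["name"] for row in self.repo.find_active()]


# Regression test: cheap enough to run on every change, so it actually runs.
def test_active_customer_names_unchanged_by_new_column():
    service = CustomerService(CustomerRepository())
    assert service.active_customer_names() == ["Ada"]
```

Because the check is automated and fast, it can run on every commit rather than only at the end, which is exactly where the large regression cost Deiner mentions comes from.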

And it ties in with the inventory waste. We have a large batch of stuff that's yet to be tested. You know, we're going to forget what went on. We've got all this stuff piled up in front of us, so things are going to happen. We get all those hand-offs, in the lean type of thinking. Each of those hand-offs, those documents that say, "Here's what I did," you know, I've got this big, old list of stuff. The waste of going through there and missing stuff is a bad thing. Then there's the waiting that occurs: the time between when the developer makes that mistake, that little side effect, and the time that it's caught. In all that time, they forget what they did. Even if it's a couple of days, it's a hard thing. That's why continuous integration is such an important part of agile engineering and that whole set of things we want to do to make things better.

JV: Do you think there are some projects that lend themselves more to the use of agile techniques versus some projects that, I don't know, maybe because of the culture of the organization, are inherently going to be made this old-school way? Are there some projects that are just more adaptable to agile practices?

HD: You know, I'll be honest with you. There actually will be some.

JV: There will be some.

HD: It's not going to be the stuff that people normally deal with. It's going to be where the risk is high.

JV: Okay.

HD: If you're developing an avionics system, and you want to make sure that the plane can land itself or whatever it's supposed to do—that's where human life is at risk—you're going to have to go through a much different kind of process to actually figure out that everything is good. In fact, when you think about it, the real end game there is verification and certification, which is different than just testing. I wouldn't want to be in a rocket where only agile development was done (laughter). I'd actually want to make sure at the end that we did a slow and careful evaluation of it. The reason it doesn't come up very much is that those sorts of systems are kind of few and far between.

About the author

Jonathan Vanian

Jonathan Vanian is an online editor who edits, writes, interviews, and helps turn the many cranks at StickyMinds, TechWell, AgileConnection, and CMCrossroads. He has worked for newspapers, websites, and a magazine, and is not as scared of the demise of the written word as others may appear to be. Software and high technology never cease to amaze him.
