Overcoming the Hurdles of Continuous Delivery: An Interview with Jeff "Cheezy" Morgan

[interview]
Summary:
In this interview, Jeff Morgan, the chief technology officer and cofounder of LeanDog, explains how continuous delivery and continuous deployment have changed how software teams do business. He breaks down funding projects versus teams and validating quality as you build your product.

Jennifer Bonine: We are back. You have a morning full of interviews, so hopefully we're entertaining people enough that they stay. Cheezy, this is a huge task. You've got to be funny. So ...

Jeff Morgan: Oh, no.

Jennifer Bonine: I'm glad you're here with me again.

Jeff Morgan: Thank you.

Jennifer Bonine: I get to talk to you usually almost every conference.

Jeff Morgan: I think just about every one. I think so.

Jennifer Bonine: Yeah. You're a celebrity around here. For those that don't know you, you go by Cheezy.

Jeff Morgan: That's right.

Jennifer Bonine: Yup. So we can always find you. What have you seen, or what are you working on lately?

Jeff Morgan: Wow. Same thing I was working on last time. All of the work I'm doing is in the continuous delivery, continuous deployment field right now. My clients all want to find ways to deploy their software to production every single day.

It was really interesting. I was at another conference last week, and one of my old clients had a booth there. It was great, because I hadn't seen them for a few years. So I walked up to the booth, and the first question I asked him was, "How many times have you guys deployed to production today?" And the guy pulled out his phone and touched something. He said, "Nine."

Jennifer Bonine: Amazing.

Jeff Morgan: And this was in the morning, so I think ... Yeah, so it's a great thing.

Jennifer Bonine: And I bet that's not where they were when you first talked to them.

Jeff Morgan: Oh, no. When I first talked to them, they were deploying about every six months. So now ...

Jennifer Bonine: Right. I mean, the transformation's huge. I think that's a good, interesting thing for people to realize. Just take them as one example: they went from about every six months. Now, he looked at his phone. I'm guessing when it was every six months, he didn't even have the capability to pull it up and know exactly what happened, to have that information at his fingertips. Now he does.

Jeff Morgan: That's right.

Jennifer Bonine: What can people expect if they're out there and saying, "Yeah, we're every six months right now." How long? What are some of my hurdles?

Jeff Morgan: The hurdles are pretty profound. To summarize some of the big hurdles, the first thing is you have to start producing very high-quality software. In other words, you can't push your software to production every few minutes, or once a day, or even once a week, if you have a lot of defects. So we have to introduce the practices right away that drive the quality really high and the practices that help us react quickly whenever we do make a mistake. So that's the first category.

The second category goes into our environments that we run in. We have to have rigid consistency from the very first test environment all the way out to production, just because variability will cause us to have problems, like it does so many companies out there.

Then I think probably the third main category is rethinking how we think about our product. No longer do we have long roadmaps of product design, but instead, we have a lot of data we can collect now. So we use that data to help us make those product decisions.

Jennifer Bonine: And drive what goes first and how to categorize and prioritize items.

Jeff Morgan: That's right. Absolutely. So those are like the three large categories that change, and it takes a while.

Jennifer Bonine: Yeah. So realistically, as you're making that shift ... I've talked with someone else about this: a leadership shift in how they see stuff get delivered and how they budget, right? Budgeting differently for projects. Not saying, "Okay, for this much money, I know I'm going to get all six hundred things, and it's going to happen in the next twelve months."

Jeff Morgan: Right. What tends to happen is the whole idea of project goes away.

Jennifer Bonine: Yeah.

Jeff Morgan: There are no more projects, because projects are actually pretty wasteful, once we start thinking about it. So the idea of a project goes away, and what starts to happen is we fund teams. We're talking about budget. So we fund teams. We fund product owners. We give product owners this much money, and then they start working on hypotheses: I believe that here is a change that will help my end users. Then it's all about what sort of experiments I can run, first of all, to prove the hypothesis. Then, what sort of experiments can I run to help me make sure I come up with the best design? So it's all about pushing these experiments out simultaneously, collecting the data, and using that data to feed back into product design.

Jennifer Bonine: Right. I love that. That's a great distinction, I think. You're the first person I've heard say it so concisely: don't think about funding projects. You're funding teams and product owners.

Jeff Morgan: Product. You're funding a product.

Jennifer Bonine: Yeah. And make the right decisions.

Jeff Morgan: That's right.

Jennifer Bonine: So you have money, and you're given that money to make the best value for the funding you've been given, right?

Jeff Morgan: Exactly.

Jennifer Bonine: To move that product forward, serve the needs of the consumer, make sure you're doing the right things.

Jeff Morgan: I have a very good example of this. On a previous client's website, the product owner thought, "Well, we need to improve the way that our users navigate through our site." So he went out, got the budget, got the funding, and then had to wait for a team to become available. Then they built it. I think it took them four or five months to build it. Then at the last minute, he said, "Well, what if my users don't like it?" So they built something that allowed users to opt in to it.

Over time, I think it got up to 8 or 9 percent or so of their users who had opted in, but then people started opting out, and it went back down to 2 percent. So what we had in this case was a situation where a lot of time and effort and money was spent on something that the users really didn't need or want.

That's so often what we have whenever we have people who, in a vacuum, sit back and speculate, "What do we need?" When I went there, I said, "Let's shut it off altogether. Now, let's start with a hypothesis. The hypothesis is that we can improve the way that our users navigate. So, how do we run experiments to prove that?" So then we started pushing small little changes out and collecting data, and collecting ...

Jennifer Bonine: As you pushed the small changes.

Jeff Morgan: That's right. That's right. So we homed in on something that the users were happy with.

Jennifer Bonine: Right.

Jeff Morgan: Clearly, the first one, they weren't happy with.

Jennifer Bonine: Yup. No.

Jeff Morgan: So that's the power of pushing small, little changes very quickly, sometimes running multiple experiments in parallel and collecting the data to see which one had the best results. That's what continuous deployment, continuous delivery's really about. That's where so much of the industry's moving these days.

Jennifer Bonine: Right, well, and it makes sense, right? We talk about, we've heard the term "Fail fast and often." You want to figure it out sooner rather than later, before you invest all of that money, and then it's not, to your point, what the consumer wants and what the industry needs in terms of that product. So multiple experiments, home in on your hypothesis. What are you trying to solve? What is that hypothesis? Once you know, try these experiments in parallel. Don't be afraid to do that. I think that's great advice to the folks out there.

Now for testers out there who are saying, "Oh, gosh. I get we're shifting to this. What does it mean to me? How does my world change?"

Jeff Morgan: From the traditional world, it changes pretty substantially.

Jennifer Bonine: Yeah.

Jeff Morgan: I mean, if you think about the traditional approach that we have is we build something, and then as soon as we're finished building it, we test it. Then as we're leading up to a release, we might have a hardening phase, where we go through a really thorough regression. Then that's usually when we add more of your security and your capacity-type tests, sort of at the end. That doesn't work in the world where I work, because as soon as we develop it, it goes straight to production. So there is no time to do that.

What tends to happen is the lines between development and testing get very, very blurry. There still is a huge emphasis on testing, because like I said, we have to keep the quality high, but what tends to happen is this realization that the majority of the things we throw on the shoulders of testers really don't require testing skills. For example, going and looking to make sure something looks the same across three different browsers. You don't need to be a tester to do that. Firing up JAWS on a screen, having it read the screen back, and asking, does it describe what I see? Again, that doesn't take testing skills, either.

What tends to happen is developers and testers collaborate much more closely, and they tend to share a lot of these things. So the testers still own the exploratory testing, and they're still doing that, but they're doing it in close proximity to the developers, as tiny pieces are being completed. So it's more iterative. The automation skills, we often find that testers and developers share those, or whatever it might be. You almost never find standalone testing teams; instead, it's all integrated with development.

Jennifer Bonine: Integrated together.

Jeff Morgan: So it's about building it and validating the quality as we're building it.

When we talk about things like security or the capacity, you know, scalability-type tests, those change as well, because historically those, again, like I said, were at the end. What tends to happen is, at the end, it's like, "Okay, show me all the changes so I can update my scripts."

But what tends to happen is that those things, in today's world, get wired into the pipeline. So they run every day. So those folks have to be integrated with what's happening with the teams, so that they know on a real-time basis, those sort of changes.

Then the other thing that starts to happen as well is we want to start to find ways to push some of that highly specialized knowledge into the teams. In other words, your developers and testers in the teams start to become much more aware of what are the accessibility requirements, so that they can build them in as they're doing it, or what types of security vulnerabilities do we really need to be watching out for. So it's no longer, again, a check after the fact. But it's more of a, as I'm building it, I bake these things in.

Jennifer Bonine: Right. Integrated effort.

Jeff Morgan: So it's a highly collaborative, fast-paced, testing-as-we-go-along type of ...

Jennifer Bonine: Yeah, yup, different environment, right?

Jeff Morgan: That's right.

Jennifer Bonine: From the very traditional approach and check at the end.

Jeff Morgan: We tend to push a lot more tests into the unit test level in that world as well.

Jennifer Bonine: Yup, exactly. So, some significant changes, as you mentioned, and things to think about for those of you going through that transformation. If people want more information and said, "Okay, I get everything you told me. I want to learn more about that," where do they go to find you?

Jeff Morgan: Oh, well. Any STAR conference, you can find me first.

Jennifer Bonine: Yes. I find him there all the time.

Jeff Morgan: But you can reach me via Twitter or email. My Twitter is very easy, it's four letters, @chzy. You can't miss that. My email is [email protected].

Jennifer Bonine: Oh, cute. I like that.

Jeff Morgan: Yes. Yes.

Jennifer Bonine: I love it.

Jeff Morgan: That's right. That's an invite to you.

Jennifer Bonine: Yay.

Jeff Morgan: Let's do it.

Jennifer Bonine: That's awesome. That's awesome. I will see you coming up in Canada. I hear you're a keynote.

Jeff Morgan: I have a keynote in Poland this year.

Jennifer Bonine: You have a keynote in Poland?

Jeff Morgan: I sure do.

Jennifer Bonine: This is amazing.

Jeff Morgan: Yes. That's right. It's the agile and automation test ... Oh, I'm sure I butchered the name.

Jennifer Bonine: Yeah.

Jeff Morgan: ... so much. In Krakow.

Jennifer Bonine: Nice.

Jeff Morgan: It's in October, so October's going to be a very busy month for me.

Jennifer Bonine: That'll be a busy month for you.

Jeff Morgan: I'm in STARWEST the first week, Poland the second week, and STARCANADA the third.

Jennifer Bonine: Yes, so if you want to talk more to Cheezy, you can come to the STAR conferences in Anaheim or Canada.

Jeff Morgan: Yes.

Jennifer Bonine: And Poland. I mean, if you're ready for an international trip.

Jeff Morgan: That's right.

Jennifer Bonine: That sounds great.

Jeff Morgan: Thank you.

Jennifer Bonine: Thank you for being here today.

Jeff Morgan: Oh, you're very welcome.

Jennifer Bonine: And we'll see you soon again.

Jeff Morgan: Okay.

About the author

Chief technology officer and cofounder of LeanDog, Jeff "Cheezy" Morgan has been teaching classes and coaching teams on agile and lean techniques for twelve years. Most of his work has focused on the engineering practices used by developers and testers. For the past few years, he has experienced great success and received recognition for his work helping teams adopt acceptance test-driven development using Cucumber. Cheezy authored the book Cucumber & Cheese and several popular Ruby gems used by software testers.
