Better Test Automation, Metrics, and Measurement: An Interview with Mike Sowers

Summary:

In this interview, TechWell CIO and senior consultant Mike Sowers details key metrics that test managers employ to determine software quality, how to gauge a piece of software's readiness, and guidelines for developing a successful test measurement program.

Josiah Renaudin: Welcome back to another TechWell interview. Today I am joined by Mike Sowers, the CIO and senior consultant at TechWell. He'll be conducting two tutorials at Better Software West covering test automation, metrics, and measurement. Mike, thank you very much for joining me today.

Mike Sowers: Josiah, great to be with you as always.

Josiah Renaudin: Absolutely. First, just as a good primer, could you tell us a bit about your experience in the industry?

Mike Sowers: Sure. I've been really fortunate in my professional journey. I started as a co-op student right out of college, and my first testing job was as a hardware tester. Not to give away how old I am, but I used a paper tape program to program a pneumatic tester that pounded on a keyboard for keyboard reliability tests. People probably don't even know what paper tape is anymore.

From there I tried my hand at programming. I really wasn't very good at it, so I moved into software testing. I've had the opportunity to work with large, medium, and small companies as a tester, and also as a testing leader across financial services, transportation, software OEMs, banking, and other industries. I tried my hand as a consultant for a while and worked with a lot of great Fortune 500 companies. Probably my largest role was as a senior vice president of QA and test, so moving from co-op student to senior VP of QA and test was pretty exciting. I led an internationally distributed team of about 400 people across eight different geographies, so I learned a lot.

Now I'm with TechWell, and I've got the opportunity to speak at conferences, teach, consult, and really help testers worldwide become the best that they can be.

Josiah Renaudin: Like I mentioned, metrics are something you'll be covering heavily in your tutorials, and as you just explained, you've been around the block and seen a bit of everything. What are some key metrics that test managers employ to determine software quality?

Mike Sowers: As we start to think about projects, we've got the beginning of the project, the middle as we roll through it, and then post-project, so I think about metrics across that spectrum. The quantity and quality of user stories is a metric, as is the degree of change. There's risk, whether product or project risk, and complexity. There are time-based metrics, such as estimating the schedule: How long is it going to take to do test planning? How long is it going to take to do test analysis, test design, and test execution? How long is it going to take us to automate? How long does the automation even take to run? And there are operations and environments to consider as we try to do continuous build and continuous integration.

There are also a lot of quality metrics, such as the number of defects found at any given point in the lifecycle, by category and by severity, and how well we are containing defects: defect containment, or defect leakage from one stage to the next.
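To make that concrete, here is a minimal sketch of how defect containment and leakage might be computed; the phase names and counts are hypothetical, and real teams track this in their defect-management tooling rather than by hand:

```python
# Minimal sketch: defect containment and leakage by phase.
# Phase names and counts are hypothetical, for illustration only.

# defects_found[(phase_introduced, phase_found)] = count
defects_found = {
    ("design", "design"): 12,
    ("design", "coding"): 5,
    ("coding", "coding"): 30,
    ("coding", "system_test"): 8,
    ("coding", "production"): 2,
}

def containment(phase: str) -> float:
    """Fraction of defects introduced in `phase` that were also caught there."""
    introduced = sum(n for (src, _), n in defects_found.items() if src == phase)
    caught = defects_found.get((phase, phase), 0)
    return caught / introduced if introduced else 0.0

def leakage(phase: str) -> float:
    """Fraction of defects introduced in `phase` that escaped to later stages."""
    return 1.0 - containment(phase)

print(f"coding containment: {containment('coding'):.0%}")  # 75%
print(f"coding leakage:     {leakage('coding'):.0%}")      # 25%
```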

In the agile world, we're now talking about a team's velocity: how quickly can the team implement user stories? There's the degree of technical debt that may have accumulated, and ratios such as stories completed versus stories committed.
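Both of those agile ratios fall straight out of sprint history. A minimal sketch, using made-up sprint data:

```python
# Hypothetical sprint history: (story points committed, story points completed)
sprints = [(30, 26), (28, 28), (32, 24), (30, 29)]

# Velocity: average completed story points per sprint.
velocity = sum(done for _, done in sprints) / len(sprints)

# Commitment reliability: completed versus committed across all sprints.
commitment_ratio = sum(done for _, done in sprints) / sum(c for c, _ in sprints)

print(f"average velocity:    {velocity:.1f} points/sprint")  # 26.8
print(f"completed/committed: {commitment_ratio:.0%}")        # 89%
```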

Then there are always the metrics that come out of the retrospective. How are we doing at improving our process? How long does it take us to do our builds? Are we able to integrate our builds, testing, and deployment together in a cycle and continually refine that into a continuous integration, continuous testing, and continuous deployment process? There are lots of metrics to think about.

Josiah Renaudin: Something that's really interesting to me in software is this concept of readiness, and something I've been involved with a lot over the years is video games, for example. Way back in the day, when games shipped on cartridges, you had to make sure everything was buttoned up and ready to go, because you didn't have the opportunity to update it later; it was on a physical cartridge.

Today, because everything is so digital, you have some leeway. You can release a piece of software or a game with some issues and then update it later. When a test manager is determining a product's readiness, how much leeway do they have? For example, if a manager determines a product is ready to go out the door but soon discovers crashes or bugs that force an update while the software is live, can that significantly harm the manager's reputation, or even the future of that software, because people's first impression of it is, "Well, this is broken"?

Mike Sowers: Yeah, I think that's a fabulous question. Certainly, defects that are exposed to customers can have a direct impact on the product, and even on the company in the customer's view, if those defects are business impacting. To your point, people have an expectation these days, for better or worse, that they want software sooner, so they're willing to live with "good enough," provided we have a process and methodology in place to fix it quickly, enhance it quickly, and add features and functionality.

I think internally the manager making that decision may have some reputation exposure as well. However, I think there might be a flaw in our logic in having just one quality gate for making release decisions. I think that's risky these days. Traditionally we have had a quality team or a quality manager be that quality policeman, that quality gate if you will, and that might still be required in some companies given their regulatory environment. But the trend now, from an agile perspective, is toward team accountability: team accountability for quality, and therefore for that release decision. We get many more subject matter experts involved in making the quality decision: the tester, the developer, the product owner, maybe even a member of the operations team. In the best cases we even have the customer involved, so you get buy-in and commitment to the readiness.

Josiah Renaudin: I like the idea that everyone is responsible for quality now. That's something I've heard quite a bit. When I first started this job, I feel like that wasn't as strong an idea, but now when I ask people where quality starts, they say, "Right at the start, and throughout the process." That seems critical.

What are some guidelines for developing a test measurement program that you found to be successful and that you might talk about during your tutorials?

Mike Sowers: I think the elevator speech there for me is alignment. The key is aligning the measures to the business goals and the desired outcomes. That's something I didn't do very well early on, quite frankly. Early in my testing and quality leadership career, we were more focused on what I might call internal testing metrics: maybe the number of defects found, the number of tests available, or the number of tests linked to requirements. Those are all important metrics, but there are more important metrics from a test organization's efficiency and effectiveness perspective, as you know. We need to link those to the business outcomes and goals.

I remember being in front of my CISO once, doing this great presentation on the number of defects we'd found, the number of tests we had automated, and the amount of test coverage we had. I could tell I was getting the deer-in-the-headlights look. I paused for a moment and looked around the room; I could just tell this was not going to be a good experience. Finally one of the business managers asked me, "So, what does all this mean to me? I couldn't care less how much test coverage you have, how many tests you have, and how many are automated. Are you delivering me a product that works? What are the risks?"

The reason we measure and monitor is to determine if there's a possible problem in a product or, of course, in our methods and processes. Metrics are kind of like the early warning systems in your automobile: if a red light comes on, we need to determine what action is necessary. So I think that alignment is really important. Another aspect of developing a measurement program is collecting buy-in; the commitment of the stakeholders is crucial, so you don't want to do this in a vacuum. I also think alignment among the measures themselves is important. You can't have one measure working against another. In other words, we want metrics that build upon and complement one another.

Then finally, we want just enough measures. I worked with one organization that I won't name, but they were very proud of a quarterly book of a hundred pages of metrics and charts. That's really impressive, but what actions had been taken as a result of those measures? When I asked, I got kind of the deer-in-the-headlights look, and I was a little bit embarrassed about it for a while. They had this great metrics book with all these measures once a quarter, with something like ten people pulling it together, but they couldn't articulate what corrective actions, what improvements, what results had come from those metrics.

Josiah Renaudin: Speaking of metrics, what metrics paradigm have you found to be the most consistently successful? I think it's always important to note that there's no single magic solution for everyone. Even though agile is seen as a methodology a lot of people have found success with, sometimes it just doesn't work for certain teams. What metrics paradigm have you found works for the most people?

Mike Sowers: There's been, of course, a lot written on metrics; there are hundreds of books on metrics in general as well as on software metrics. I like Victor Basili's work. He did some great research early on, around the mid-90s I guess, on what he called the Goal, Question, Metric paradigm. All that really is, is a taxonomy that says: think about your goals first, in particular your business goals; then think about what questions you need to ask in order to achieve those goals; and then from those questions flow the actual metrics. Basili is famous for this Goal, Question, Metric paradigm, and I think it provides a good model or framework for thinking about metrics.
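A minimal sketch of what a GQM breakdown might look like when captured as plain data; the goal, questions, and metrics here are hypothetical examples, not Basili's own:

```python
# Hypothetical Goal-Question-Metric breakdown, expressed as plain data.
gqm = {
    "goal": "Reduce defects that escape to customers",
    "questions": {
        "How many defects reach production each release?": [
            "production defects per release",
            "defect leakage rate by phase",
        ],
        "Are we finding defects earlier over time?": [
            "percentage of defects found before system test",
            "trend of defect discovery phase across releases",
        ],
    },
}

# Walk the taxonomy top-down: goal, then questions, then the metrics they drive.
print(gqm["goal"])
for question, metrics in gqm["questions"].items():
    print(question)
    for metric in metrics:
        print(f"  -> {metric}")
```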

The other one, from George Doran, is the one a lot of people use called SMART. The acronym says to ensure your metrics are Specific, Measurable, Achievable, Realistic, and Time-bound. I like those two; I think they complement one another. SMART makes us think about the specificity of metrics, if you will, while Basili gives us the high-level Goal, Question, Metric approach. There are many others, but those are the ones I use most often.

Josiah Renaudin: I have two questions that I'm going to combine into one here, because they're both about automation. How important is it to have an integrated test automation plan instead of incorporating several unrelated tools? And to branch off of that, how difficult can it be to incorporate new automation tools into your team?

Mike Sowers: The analogy I use here, Josiah, is that not many of us would attempt to build a home or remodel rooms without some kind of plan, even a lightweight hand-drawn sketch or at least some type of mental model we've thought through in our heads. We want to begin with the end in mind and that desired state of where we want to be.

Just like building a structure, we need to understand how the plumbing, the wiring, the room layouts, the doors, and the windows are going to come together to offer us functionality. I think selecting and integrating tools is no different. How will our test management tool connect with our agile project management tool? How will our test execution framework integrate with our test management tool, our build tool, or our continuous testing pipeline? Which tools will connect together to track traceability among user stories or requirements, test cases, and the actual code that implements the features or functions? Most of us have existing tools, and then maybe we want to try out or add new ones.
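A minimal sketch of the kind of traceability being described, linking user stories to test cases and to the code that implements them; all identifiers and file names are hypothetical, and in practice the links would live in the integrated tools rather than in a script:

```python
# Hypothetical traceability links among stories, tests, and code modules.
traceability = [
    {"story": "US-101", "tests": ["TC-9", "TC-10"], "code": ["billing/invoice.py"]},
    {"story": "US-102", "tests": [],                "code": ["billing/tax.py"]},
]

# One simple check an integrated tool chain might run automatically:
# flag any story that has no test cases linked to it.
untested = [link["story"] for link in traceability if not link["tests"]]
print("stories lacking test coverage:", untested)  # ['US-102']
```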

What I talk about in my tutorial is that we really need an architectural diagram. We want to understand our twelve-, eighteen-, or twenty-four-month plan for how all those tools are going to fit together, offer some common workflows, talk to one another, and integrate with one another. That's how we get the gains, the efficiency and the effectiveness, out of that collective tool set.

Josiah Renaudin: I have one more question for you, just kind of another primer for your tutorials. More than anything, what central message do you want to leave with your tutorial audiences?

Mike Sowers: On the metrics side, I think it's to have just enough measures, linked to business and product goals. Those measures need to drive actions and outcomes, such as corrective action, improvement of the product, or improvement of the methodologies and processes.

On the test tool architecture side, I think the important message is to have a vision and a plan: some kind of architectural picture of how you want things to fit together. Address the most significant pain points first, and then simply evolve that plan. The plan should be dynamic and adaptive, of course, to fit your needs.

Really, the end in mind is a well-integrated set of automated testing tools with useful workflows to support getting products to market faster, better, and cheaper.

Josiah Renaudin: All right, fantastic. Thank you very much for speaking with me today, Mike, and I'm looking forward to hearing more from you at Better Software West.

Mike Sowers: Always great to speak to you, Josiah, and thanks very much for the time.

Mike Sowers, CIO and senior consultant at TechWell, has more than twenty-five years of practical experience as a global quality and test leader of internationally distributed test teams across multiple industries. Mike is skilled in working with both large and small organizations to improve their software development, testing, and delivery approaches. He has worked with companies including Fidelity Investments, PepsiCo, FedEx, Southwest Airlines, Wells Fargo, and Lockheed to improve software quality, reduce time to market, and decrease costs. With his passion for helping teams deliver software faster, better, and cheaper, Mike has mentored and coached senior software leaders, small teams, and individual contributors worldwide.
