Overcoming the Challenges of Test Strategy Creation: An Interview with Randy Rice

Summary:

In this interview, Randy Rice, an author, speaker, and consultant in software testing and software quality, explains how to navigate the nuances of detailed test strategy creation. He covers why a test strategy that seems good at first can fail once the testing begins.

Josiah Renaudin: Welcome back to another TechWell interview. Today I'm joined by Randy Rice, an author, speaker, and consultant in software testing and software quality, as well as a keynote speaker at this year's STAREAST conference. Randy, thank you so much for joining us today.

Randy Rice: Hey! It's great to be here, Josiah.

Josiah Renaudin: Before we really get into the meat of your keynote and go through all the different aspects of it, could you tell us a bit about your experience in the industry?

Randy Rice: Sure. Basically I've been around since dinosaurs roamed the Earth, I think, in terms of computing. But I don't want to bore you with all the details there. Some of us are still hanging around in the industry. I worked with punch cards and all that. The big thing, though, was that I was a developer for about a dozen years before I got into software testing. I actually got in like a lot of people do, in kind of an unplanned way.

But I go way back to the late '70s in being a coder. Then I became a test manager in 1988, like I say, in kind of an unplanned way. They chose me, they being the company I went to work for, because I had read a book on software testing. It was Boris Beizer's Software Testing Techniques. I had actually written a test plan as a developer, so they felt I was uniquely qualified. For the next couple of years I was trying to learn my way into what software testing was, what software quality assurance was. The thing about it was that there were only about four books available at the time.

Anyway, with the help of Bill Perry at QAI and others, I really learned what testing was. Then in 1990 I decided to go into that as a practice area of its own as a consultant. I've been doing business as a consultant since 1990, focusing mainly on software testing, although I get into areas that touch testing, like life cycles and requirements. I've seen a lot of change, obviously, in the field of computing since I got into it as a full-time consultant.

Josiah Renaudin: That's a good point. You have seen the industry grow and change, especially software, which is an industry so open to change, to complete sweeping differences. In your keynote you highlight how testers are facing a major challenge when designing test strategies. Is that because mobile, the Internet of Things, and other new technologies have created more opportunity for test failure? Or in your mind, since you've been doing this for a while, has test strategy creation always been this difficult?

Randy Rice: It's an interesting time that we're going through, because yes, we are seeing some really sweeping technological changes. The way I describe it is that they're expanding the scope of what we do enormously. To me, the challenge is having the experience and the depth of knowledge to create a strategy. One other thing, too, let me take a step back before I say what I was going to say: a lot of people don't understand the difference between the strategy of something and the tactics of something, of how to do it.

The strategy is the big, objective picture. If you're thinking in terms of military strategy, this is the thing that the generals create. So it takes people with some pretty good understanding and some pretty good experience to see down the road a bit, to envision the things that you might be coming up against. Any time a new technology comes about, one of the very first things that I do is write an article on the strategy of testing that thing. Right now the big thing I'm looking at is how the Internet of Things is going to be a total disrupter. Back when the web came about it was the same thing: I started looking at what we could take that worked before and either keep it, toss it, or adapt it. At the strategy level we frame the big picture. Then when we start thinking about how we're going to test it, that becomes more the planning picture, and as we get down into the details, that becomes the test design part of it.

Josiah Renaudin: In your mind, and this is something I'm really fascinated by, why do you think so many strategies and test designs actually fail once they're introduced and used in real-life situations? It's one of those things where in practice you can do things over and over and it seems like it's going to work, but then you put it out into a bigger environment, and the obvious example here is Pokémon GO, where suddenly it's out, so many people are using it, and it doesn't function. It's an app that couldn't work at that scale. Is it impossible, in your mind, to correctly simulate real-life testing even though we're trying to do it much, much more lately?

Randy Rice: It's the big challenge of where the real world meets the paper world. It's that classic thing of, wow, that really did look good on paper. It even sounded good as an idea, but then executing on the idea, as you say, is the real challenge. This really gets into the nugget of the idea for the presentation. It's based on experiences where, yeah, we thought we had a great strategy going into something, and then once we started to design the tests and even perform some of them we realized, whoa. Wait a minute. It's not scalable, it's not workable. We've got to go back and rethink the strategy.

The idea that you set a strategy in concrete, go forth with it, and everything is going to be fine and dandy, it very seldom works out that way, because we just don't know what we don't know. That famous quote. We don't know those things until we get right in the middle of them. Here's the thing. If we allow ourselves the ability to fail and not see it as a totally destructive thing, and allow ourselves in our planning the option of coming back and doing a restart, or two, or three perhaps, like I'm going to discuss in the case study, then failure really becomes a planned part of your strategy. I know it sounds weird to say, but part of our strategy is to fail. At least two or three times.

There have actually been situations like this in history. Back when people were exploring and settling Canada, there was a thing called the Hudson Bay Start: parties would head west from Hudson Bay for a day, and then their plan was to come back to pick up all the things they had forgotten before they went any farther out.

Josiah Renaudin: We mentioned before this idea of trying something in practice and then finding that the scale of it, when you put it out in a real-life situation, is different. This is something you mentioned in the abstract for your keynote. Can you explain how scalability and flexibility influence test strategies and test designs?

Randy Rice: Yes, absolutely. We're in a totally different world of testing now than we were in, let's say, even five or seven years ago. It's changed radically. One of the things that agile did back fifteen years ago—I know it sounds crazy even saying that, fifteen years sounds so old, doesn't it?—is that it put an emphasis on flexibility. If we have test designs that are not flexible, they're going to be hard to automate. They're going to be hard to maintain. They have to be designed with change in mind. That's not hard to do, actually, if you know some techniques for it.

Unfortunately, when most people learn test design, they don't learn it in a way that makes it adaptable to change. They also don't learn it in a way that can scale. You always have to be looking at whatever test design you're dealing with and asking, "It works for ten tests. How would it work for a thousand? Or ten thousand?" You have to think through that whole process. Once again, sometimes you really don't know until you get into the test lab, or into the environment you're going to be testing in, and get that feel for scalability. I like to play in the sandbox first, so to speak, and try it for a hundred or 150. I'm going to go through an example of that in my talk, one project in particular where we really learned that it just wasn't going to work, and so we had to have a restart.
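To make that concrete, here is a minimal sketch, not from the talk, of one way "designing with change in mind" can look in practice: a data-driven pytest test where scaling from ten cases to ten thousand means growing the data table (or loading it from a file), not rewriting the test logic. The function validate_discount and its cases are hypothetical.

```python
import pytest

def validate_discount(order_total, discount_pct):
    """Hypothetical function under test: applies a percentage discount."""
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct out of range")
    return order_total * (1 - discount_pct / 100)

# Test data lives apart from the test logic, so growing from ten cases
# to ten thousand means adding rows, not writing new test functions.
CASES = [
    (100.0, 0, 100.0),
    (100.0, 50, 50.0),
    (100.0, 100, 0.0),
    (80.0, 25, 60.0),
]

@pytest.mark.parametrize("total,pct,expected", CASES)
def test_validate_discount(total, pct, expected):
    assert validate_discount(total, pct) == pytest.approx(expected)
```

The design choice is the separation: the test logic stays stable while the data set flexes, which is also what makes the suite straightforward to automate and maintain.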

Josiah Renaudin: You mentioned agile before, and just think, fifteen years ago sounds crazy. But agile has now come into play and it seems like everyone's adhering to it. You look at test automation, which is now being used more than ever. Has the increased importance of automation changed the process of designing tests and putting together effective strategies? Have you had to think more and more about where you can automate, what the smart spots to automate are, and how to make sure you're not over-automating?

Randy Rice: Yeah, absolutely. Test automation has been the Holy Grail ever since I've been in testing. Ever since I got into it. Even in the late '80s we were trying to figure out how to automate stuff. It's been this elusive goal for so many people. Now it's almost like we've reached a tipping point where enough people are being successful with it that it's almost, I would say, maybe even at a dangerous point where people are thinking it's the only way to do testing. Yes, automation is good, but it's a fundamentally different way of thinking about and doing testing. So we have to figure out, just like you said there: What can be automated? What should be automated? What shouldn't be automated? And how do we carry that forth, and make sure we don't lose anything in the process?

One of my great fears is that testing will be relegated to this idea of regression testing. That's just a small subset of all the testing we do. Now am I glad that we're doing a better job in automating regression testing in some places? Yes. I'm really glad because it's been something we've been struggling with for many, many years. So I'm really glad to see that a lot of companies are getting a good grip on it. It's changing up how we think about testing.

Josiah Renaudin: Something you're going to mention in your keynote that caught my eye was the idea of regulatory laws impeding testing: how external constraints can change how you test. Can you discuss how regulatory laws impede navigation through test strategy creation?

Randy Rice: Sure. The thing that really got me thinking about this whole topic of nuances was a talk I did a couple of years ago, a keynote on principles before practice, about how important it is that we understand the principles of what we're doing before we actually start diving in and doing it. One slide in that presentation said that test design is a very nuanced thing. There are a lot of wrinkles around test design. People are used to this idea of learning a technique and then immediately going out and applying it. Then they learn that, wow, that didn't really work very well. So we have to understand all the differences around it.

One of those nuances is the regulatory aspect, or, if not that, the need to document things. Some people have that and some people don't. That could also be considered a context of your testing. But regulatory bodies such as the FDA, more often than not, are looking to see your test plans. They want to see your test cases. It's not acceptable, not sufficient, to just check off pass/fail; you have to give evidence of pass/fail.

You can imagine the burden that would be for someone who's in that environment as opposed to someone who might not be. In some cases they have to have designated people who do nothing but document things. To make matters worse, a regulator can show up at your company unannounced and want to see your documentation. You have to be able to produce it within a very short amount of time. It's not something you can go into the back office and cook up at the last minute, although I have seen some scary things before. The bottom line is that all of that happened because things like medical devices were being found to have bugs and defects in them. Not that regulatory efforts necessarily solve all of that, but they did bring to the forefront the need to at least document what you did, because of the safety criticality involved.

When you're thinking about testing in that environment, it needs to be pretty rigorous. Think about being the patient in a medical procedure that's totally controlled by software. If you've been around the industry as long as I have, that can be a scary thing, almost like self-driving cars and stuff. So you want the testing to be rigorous. The question is, does the documentation actually deliver that? In some cases it does; in some cases it doesn't.
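As a rough illustration of giving evidence of pass/fail rather than a bare checkmark, here is a minimal, hypothetical pytest sketch that appends each test's outcome to an audit log. Real regulated environments such as those the FDA oversees demand far more traceability than this; the file name and record fields are only assumptions to show the shape of the idea.

```python
# conftest.py -- a minimal, hypothetical sketch of capturing test evidence.
# Each test outcome is appended to an audit file as one JSON line, so a
# reviewer can see what ran, when, and with what result, not just a checkmark.
import json
from datetime import datetime, timezone

AUDIT_LOG = "test_evidence.jsonl"  # assumed location; adjust per environment

def pytest_runtest_logreport(report):
    # "call" is the phase in which the test body actually executes.
    if report.when == "call":
        record = {
            "test": report.nodeid,
            "outcome": report.outcome,        # "passed" or "failed"
            "duration_sec": round(report.duration, 3),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
```

Dropping this into a project's conftest.py means every pytest run leaves a timestamped trail of what executed and how it ended, which is the bare minimum of the "evidence" Randy describes; a real audit trail would also capture inputs, versions, and environment details.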

Josiah Renaudin: I've never really thought about it that way. That's interesting. One of the main points of your keynote will be the case study you're bringing up, so I don't want to spoil that for everyone listening. To wrap things up, more than anything, what central message do you want to leave with your keynote audience? Maybe based on the real-life case study, or something else. What do you want them walking away thinking was a really interesting point that they might take to their team to help their testing progress in some way?

Randy Rice: Actually, if I can, I sometimes think in dualities and things like that. One thing is the importance of the strategy, how everything flows from that: how you see the thing that you're testing, all the considerations you need to have. Then once you start flowing into your test design, there's the idea that you may not get it right the first time, you may not get it right the second time, but whatever you do, don't invest everything and then find out that you don't have it right. Play around. Experiment. Pilot things. Then always be willing to come back and do a restart if you need to. Give yourself that permission to fail until you finally have it right.

The good news is that once you get it right, at least in that particular situation, then you have something that's working, that you can grow and continue on with. I'm going to give a whole list of nuances that I've seen in projects. I'm sure that anyone in my keynote address will think of probably two, or three, or four that I don't even have on my slides. So it'll give them a chance to think through that with their team. This is something that is more than context; it's tucked inside the context, if that makes sense. These are things that, no matter what you're doing, you're going to have to pay attention to in order to make these test designs, these concepts, an actual reality and make them workable.

Josiah Renaudin: Fantastic. Thank you so much, Randy. I really appreciate the talk. I'm looking forward to actually seeing the full keynote and hearing more from you at STAREAST this year.

Randy Rice: Thanks, Josiah. I'm looking forward to being there.

About the author

Randall (Randy) Rice is a leading author, speaker, and consultant in software testing and software quality. With more than thirty-eight years of experience building and testing software projects, Randy has authored more than sixty training courses in software testing and software engineering. He is coauthor (with William E. Perry) of Surviving the Top Ten Challenges of Software Testing and Testing Dirty Systems. In 1990 Randy founded Rice Consulting Services, where he trains, mentors, and consults with testers and test managers worldwide regarding complex testing problems in critical applications. Randy is on the board of the American Software Testing Qualifications Board (ASTQB). Find more information at riceconsulting.com and on Randy's blog.
