Classic Software Testing Is Broken: An Interview with Regg Struyk


CP: Right, OK. I guess with mobile development, what role do limited resources play in that? Will that affect the waterfall method? Will it affect a more agile approach? Will it affect whether the requirements and goals are defined properly and actually carried out the way they should be?

RS: Yeah, I think there’s a mix there, certainly from a resourcing perspective. Actually, there’s a bit of a caveat. A lot of organizations think that if you attach more resources to the testing area, that will resolve the problems. I remember back when I was managing the technical product management group at Agfa HealthCare, we used traditional waterfall methodology. We had two hundred developers, so the sheer size of the development process was significant.

What we would do is attach more testing resources near the end of the development lifecycle, which, by the way, was two years long. Basically, all testing would occur at the end. There’d be the fear of not getting to market in time, a lot of defect issues, and so on. Instead of improving the methodology itself, they would just throw a bunch of testers at the end of the project. That was a recipe for disaster.

Keeping those kinds of things in mind, I think the new approach is what I would call a hybrid, because no one uses purely agile. Good organizations use some kind of hybrid, a mixture of what works for them. The way we view testing and how we do testing will certainly change the dynamics of what gets released into the marketplace.

For example, good practices that I’ve seen are things like testing early and testing fast: having testing done at the very beginning and testing continually. Not just the testers but everyone, including development, is involved in the testing process. Also, having the testing organization, or a stakeholder from testing, involved at the very beginning of the development process, helping define the goals and all those things. That’s a really good example.

Test-driven requirements as well: once the requirements are created, doing the testing at the same time, in synchronization. That really changes the complexity. I would say there is definitely sometimes a need for additional testing, but from what I’ve seen out in the field, I truly believe the reality is that we have to change the mindset of what we’re doing rather than just add more resources.
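The test-driven-requirements practice described above can be sketched in a few lines. This is a minimal illustration, not anything from the interview: the requirement, function name, and threshold are all hypothetical. The point is that the test is written in sync with the requirement, so the requirement is checkable from day one.

```python
# Illustrative sketch of test-driven requirements.
# Hypothetical requirement: "Orders of $100 or more receive a 10% discount."
# The tests below are written alongside the requirement; the implementation
# then has to satisfy them.

def apply_discount(total):
    """Return the order total after the volume discount (hypothetical rule)."""
    if total >= 100:
        return round(total * 0.9, 2)
    return total

def test_discount_applies_at_threshold():
    assert apply_discount(100) == 90.0

def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99

if __name__ == "__main__":
    test_discount_applies_at_threshold()
    test_no_discount_below_threshold()
    print("all requirement tests pass")
```

Because each test maps directly to a sentence in the requirements, a failing test points at either a defect or an ambiguous requirement, which is exactly the early feedback loop being advocated.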

CP: OK. That’s a good response. You also cover the cloud and big data in your session. What is the biggest problem for the adoption of the cloud and big data for software testing?

RS: Good question. I’d say there are two components to this. The first is obviously the cloud. The real issue there is security: companies are reluctant to use an off-site model, so to speak, to host their testing artifacts. However, Amazon has taken a good approach to this and has certainly alleviated a lot of the concerns. The second, obviously, is cultural adoption. As I mentioned, that goes back roughly three to five years, when companies were reluctant to outsource pieces of development and testing. That’s certainly a part of it.

Then with big data, we’ve written a really good white paper about some of the issues and discrepancies we’ll see with testing. What it comes down to is how we actually handle and process big data, as well as the tools we use. We’ve got this enormous amount of data constantly being pushed toward us with the Internet of Things: all these mobile devices and all the data being created. In the last five years, we have created more data than the Library of Alexandria held in its entire history.
