Random Testing and "App Compat"

The Ten Commandments of Software Testing, Part 1

My own writing and teaching is full of advice on how to select the right set of inputs. But I also counsel testers to buck up and face infinity anyway. The method for doing this: massive-scale random testing. It is a tool that should be in every tester's toolkit, and few, if any, testing projects should be without it.

Massive-scale random testing must be automated. Although it isn't easy to do the first time, it gets progressively easier with each project and eventually becomes rote. It may not find a large number of bugs, but it is an excellent sanity check on the rest of your testing: If you are outperformed by random testing, you may have a problem on your hands. And I am always pleased with the high-quality (albeit few in number) bugs that random testing manages to find.

Another reason to apply massive-scale random testing is that setting up such tests requires a healthy knowledge about the input domain of the application under test. Testers must really get to know their inputs and the relationships that exist among the inputs. I almost always find bugs and get good testing ideas just from the act of planning massive-scale random testing.
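To make the idea concrete, here is a minimal sketch of such a harness in Python. Everything in it is a hypothetical stand-in: parse_record() represents whatever entry point your application actually exposes, and random_record() is a deliberately crude model of its input domain that you would replace with one built from your own knowledge of the inputs. The fixed seed is what makes every failure reproducible.

    import random
    import string
    import traceback

    def parse_record(record: str) -> list:
        """Stand-in for the real system under test; replace with your own entry point."""
        return record.split(",")

    def random_record(rng: random.Random) -> str:
        """Generate one random input from a crude model of the input domain."""
        field_count = rng.randint(0, 10)
        fields = [
            "".join(rng.choices(string.printable, k=rng.randint(0, 50)))
            for _ in range(field_count)
        ]
        return ",".join(fields)

    def run_campaign(iterations: int = 100_000, seed: int = 42) -> None:
        rng = random.Random(seed)      # a fixed seed makes every failure reproducible
        failures = 0
        for i in range(iterations):
            record = random_record(rng)
            try:
                parse_record(record)   # exercise the system under test
            except Exception:
                failures += 1
                print(f"iteration {i}: failing input {record!r}")
                traceback.print_exc()
        print(f"{iterations} inputs generated, {failures} failures")

    if __name__ == "__main__":
        run_campaign()

The loop itself is the easy part; the value comes from the input model, which is exactly why planning this kind of testing forces you to learn your input domain.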

2. Thou Shalt Covet Thy Neighbor's Apps
This commandment sounds a bit perverted, but I can assure you it has a "G" rating. The idea I am trying to get across here is not to test your application in isolation. Otherwise, you might run into the nightmare scenario of "application compatibility," or more specifically, the lack thereof. An application compatibility problem, or "app compat" bug as it is widely abbreviated, means that one application does something to break another one. In case you are not sure, that is a bad thing.

One way to combat this problem is to keep a cache of apps (old ones, new ones, borrowed ones, blue ones; the more diverse the better) and make sure that you run your app while each of these is running. Of course, we want to do this with operating systems as well. A user should not have to tell you that your application won't run with a specific service pack installed; that is something you should find out in your own testing.
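If your neighbor apps can be launched from the command line, even this check can be automated. The sketch below shows one hypothetical way to do it in Python: the executable paths in NEIGHBOR_APPS and the run_my_test_suite() function are placeholders for whatever applications and test harness your environment actually uses.

    import subprocess
    import time

    NEIGHBOR_APPS = [                      # hypothetical cache of diverse applications
        ["notepad.exe"],
        ["C:\\OldApps\\legacy_editor.exe"],
        ["C:\\Program Files\\SomeVendor\\media_player.exe"],
    ]

    def run_my_test_suite() -> bool:
        """Stand-in for your real test run; return True if your app behaved."""
        return True

    def main() -> None:
        neighbors = []
        for cmd in NEIGHBOR_APPS:
            try:
                neighbors.append(subprocess.Popen(cmd))   # start each neighbor app
            except OSError as exc:
                print(f"could not start {cmd}: {exc}")
        time.sleep(5)                                     # give the neighbors time to settle
        passed = run_my_test_suite()                      # exercise your app while they run
        # An app compat problem can show up as a neighbor that died while your app ran.
        for proc in neighbors:
            if proc.poll() is not None:
                print(f"neighbor {proc.args} exited early with code {proc.returncode}")
            else:
                proc.terminate()
        print("my tests passed" if passed else "my tests failed")

    if __name__ == "__main__":
        main()

The same idea extends to operating systems and service packs: run the harness on each configuration you care about rather than waiting for a user to report the combination that breaks.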

So covet those apps and those service packs: The more the merrier!

About the author

James Whittaker

James A. Whittaker is a technology executive with a career that spans academia, start-ups, and industry. He was an early thought leader in model-based testing, where his Ph.D. dissertation became a standard reference on the subject. While a professor at the Florida Institute of Technology, James founded the world's largest academic software testing research center and helped make testing a degree track for undergraduates. He wrote How to Break Software, How to Break Software Security (with Hugh Thompson), and How to Break Web Software (with Mike Andrews). While at Microsoft, James transformed many of his testing ideas into tools and techniques for developers and testers and wrote the book Exploratory Software Testing. For the past three years he worked for Google as an engineering director, where he co-wrote How Google Tests Software (with Jason Arbon and Jeff Carollo). He is currently a development manager at Microsoft, where he is busy re-inventing the web.
