The Value of Positive Testing

[article]
Member Submitted

First, I'd want the thing to run; then I'd worry about how much gas it used.

I view positive tests in much the same way. Show me it works like it's supposed to, and then we'll worry about what happens when it doesn't or how well it performs (or doesn't).

I always write and run positive tests first. Once the system can pass the positive tests, the fun starts. Now I get to be creative and break it (insert evil laugh here). Equivalence class testing, boundary value testing, and the like are all great test techniques, but they are effectively useless if the system isn't functioning correctly to begin with. If the system is failing with valid data, it doesn't make sense to test with invalid data yet. Unless, of course, the system accepts the invalid data; that would be bad. But that's why we need to test both positive and negative scenarios. Test the positive first to make sure the system responds correctly to good data, correct sequences of operation, correct user actions, and so on. Then, and only then, can we validate what happens when we enter invalid data, incorrect sequences, or incorrect user actions.
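To make the distinction concrete, here is a minimal sketch in Python with pytest. The discount function, its rules, and the test names are hypothetical, invented purely for illustration; the point is that the first group exercises valid data and the second group exercises invalid data.

```python
# Hypothetical example: a simple discount calculator used only to illustrate
# positive vs. negative tests; the function and its rules are assumptions.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Positive tests: valid data, expected behavior. These run first.
def test_typical_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

# Negative tests: invalid data. Only worth running once the positive tests pass.
def test_negative_price_rejected():
    with pytest.raises(ValueError):
        apply_discount(-5.0, 10)

def test_discount_over_100_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```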

My preferred sequence of tests: run all the positive tests first, and if they're successful, jump into the negative tests. If the positive tests don't pass, I halt all testing until they do.
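One lightweight way to enforce that ordering is a small runner script. This is just a sketch; it assumes the tests are tagged with hypothetical pytest markers named "positive" and "negative", which are not part of the article.

```python
# Hypothetical runner sketch: run the positive suite first and only move on to
# the negative suite if it passes. Assumes tests carry @pytest.mark.positive
# and @pytest.mark.negative markers (an assumption for illustration).
import subprocess
import sys

def run(marker: str) -> int:
    """Run only the tests carrying the given pytest marker; return the exit code."""
    return subprocess.call(["pytest", "-m", marker])

if __name__ == "__main__":
    if run("positive") != 0:
        print("Positive tests failed; halting before the negative tests.")
        sys.exit(1)
    sys.exit(run("negative"))
```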

An additional benefit of positive testing: smoke tests! When you receive a new code drop or build, what better way to validate the core system functionality than to run through your suite of positive tests? Positive tests are my first automation candidates; they are typically quick and easy to run. My smoke tests will usually consist of the entire library of positive tests, or a large subset of them (the critical ones at least). I like to target no more than 30 minutes to run a complete, end-to-end smoke test. With a good test automation tool you can achieve a lot of testing in 30 minutes. I like to run an automated smoke test with every new build, in every environment. If we're doing daily builds, I run a daily smoke test. When the smoke test passes, I can be reasonably sure I have a good system to begin more in-depth testing; I can accept the build and start my test clock. If it fails, I can kick it back.
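Here is one way a build pipeline might wire that up, again as a sketch rather than a prescription. It assumes the positive tests double as the smoke suite (via the same hypothetical "positive" marker as above) and uses a non-zero exit code to tell the pipeline to reject the build; the 30-minute budget check simply flags a suite that has grown too slow.

```python
# Hypothetical smoke-test wrapper sketch: run the positive suite as the smoke
# test for a new build and warn if it blows the ~30-minute budget. The marker
# name and the budget handling are assumptions for illustration.
import subprocess
import sys
import time

SMOKE_BUDGET_SECONDS = 30 * 60  # target: no more than ~30 minutes end to end

def run_smoke_suite() -> int:
    start = time.monotonic()
    # The positive tests double as the smoke suite.
    result = subprocess.call(["pytest", "-m", "positive"])
    elapsed = time.monotonic() - start
    if elapsed > SMOKE_BUDGET_SECONDS:
        print(f"Smoke suite took {elapsed / 60:.1f} min; consider trimming it "
              "to the critical subset.")
    return result

if __name__ == "__main__":
    # A non-zero exit code tells the build pipeline to kick the build back.
    sys.exit(run_smoke_suite())
```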

For a bit of extra incentive, consider the doughnut factor: if the smoke test passes, I buy doughnuts for the team. If it fails, the development team buys the doughnuts. I hear bagels work too.

About the author

TechWell Contributor


