I've got my product ready on CD. It's better than the previous release from both quality and functionality perspectives. Does that mean it's ready to be released as a production product? How do I know it's really ready?
This challenge faces every release of every product. Ever hear of a rocket launch failing because of a software error? What about telephone service being interrupted by software problems? Yes on both counts, but to be fair, the success rate in these two sectors has generally been good.
There are a number of factors to consider. For one: How critical is the application? If it's a manned space mission launch, the application is pretty important. Almost perfect is not necessarily good enough. You'll want to be sure that, when you release the software, every conceivable path of execution has been exercised. If not, production release, and hence the mission, simply has to be delayed.
But a critical application could also provide a reason for releasing sooner. If I've cut a new telephone switch into operation, only to find that it's resetting twice a week, I've got some pretty big liabilities. If someone tells me that there's a better release available that has come through its verification tests in good shape with respect to the high priority problems, but still needs work on some of the medium priority stuff, then if I'm the telco, I'm going to say ship it. I can't afford to keep booting my customers off their phones every third day or so. I'd rather they find out that Call Display is occasionally not working, and run into other annoyances that aren't as likely to result in a lawsuit. Although this might backfire if a higher quality competitive product is ready to roll.
So there are a number of factors that come into play. There's no simple answer. It depends on the application. It depends on the state of the current production version.
So how can you make the best judgement for your products in your company? I recommend that you start by getting a handle on your process and the underlying metrics. Let's start with this line of questioning:
If you fix 100 more problems, how many of those fixes are going to fail to fix the problems? How many failures will you detect before release? Well, hopefully your verification process will help you to catch a very high percentage of the fix failures before they go out the door. But let's continue.
How many are going to break something else - that is, fix the problem but break another piece of functionality? How many of those failures will you detect before release? Not quite as high a percentage, I would imagine. Now let's continue this line of questioning.
How many of the failures that slip through the verification net are going to result in problems of higher priority than any of the 100 that are being fixed? Well, I suppose that depends largely on the priority of the problems being fixed. So let's say we can estimate that 2 new high priority problems will likely fall through the cracks - for our particular process and verification capabilities, based on previous metrics. If you're putting together initial builds of the product, many of the 100 problems are likely to be high priority already - so you're likely to come out way ahead. However, if you're near the end of a release cycle with a very stable product, pretty much ready for production with no outstanding high priority problems, the last thing you'll want to do is risk adding high priority problems to the release. It's not always beneficial to fix 100 more problems.
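The chain of estimates above can be sketched as a back-of-envelope calculation. Everything here is illustrative - the function name, the rates, and the priority split are assumptions standing in for your own historical metrics, not numbers from any real process:

```python
# Back-of-envelope model of the "fix 100 more problems" question.
# All rates below are assumptions for illustration; substitute the
# figures from your own process metrics.

def expected_escapes(num_fixes, fail_rate, regress_rate,
                     catch_fix_fail, catch_regression, high_pri_fraction):
    """Estimate new problems that slip past verification into the release."""
    # Fixes that fail to fix the problem, missed by verification
    escaped_bad_fixes = num_fixes * fail_rate * (1 - catch_fix_fail)
    # Fixes that break something else, missed by verification
    escaped_regressions = num_fixes * regress_rate * (1 - catch_regression)
    escaped = escaped_bad_fixes + escaped_regressions
    # Portion of the escapes expected to be high priority
    return escaped, escaped * high_pri_fraction

escaped, high_pri = expected_escapes(
    num_fixes=100,
    fail_rate=0.05,         # 5% of fixes fail to fix the problem
    regress_rate=0.10,      # 10% break another piece of functionality
    catch_fix_fail=0.95,    # verification catches 95% of failed fixes...
    catch_regression=0.70,  # ...but only 70% of regressions
    high_pri_fraction=0.6,  # 60% of escapes turn out to be high priority
)
print(f"~{escaped:.1f} new problems escape, ~{high_pri:.1f} high priority")
```

With these made-up rates the model lands near the 2 new high priority problems estimated above. The point of the exercise isn't the arithmetic - it's that you can only plug in the rates if you've been measuring fix failures and regressions across previous releases.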
So what do we uncover from this line of questioning?
First of all, we need to