ready to release the product? You need to be tracking the builds that are sent to verification, tracking the problems that result from verification, and validating the set of tests being run by verification.
Test cases need to be linked to the features/requirements they are addressing.
Your ALM tools should be able to tell you in a single click which requirements lack test case coverage, or which requirements a given set of test cases covers. You should be able to select a particular build, ask which verification sessions have been run against it, and ask for the results: How many problems were raised? How many test cases failed? What percentage of test cases were run? Your ALM tool should be able to provide this picture for each verification build, so that you can see the progress from build to build. If you compare this progress
curve from one release to another, you'll notice similarities, with the greatest variance due to changes in your process and methods. Based on this curve, you'll be able to predict when the current release will reach the required quality.
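To make the "single click" queries above concrete, here is a minimal sketch of the two views an ALM tool would compute: requirements with no test case coverage, and the per-build verification metrics that feed the progress curve. The record shapes (`TestCase`, `Build`) and field names are assumptions for illustration, not a real ALM API.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    requirement_ids: list  # requirements this test case is linked to

@dataclass
class Build:
    label: str
    # test case name -> "pass" | "fail"; cases absent here were not run
    results: dict = field(default_factory=dict)

def uncovered_requirements(all_requirements, test_cases):
    """Requirements with no linked test case -- the coverage gap."""
    covered = {r for tc in test_cases for r in tc.requirement_ids}
    return sorted(set(all_requirements) - covered)

def build_metrics(build, test_cases):
    """Failures and percentage of cases run for one verification build."""
    run = list(build.results.values())
    failed = sum(1 for r in run if r == "fail")
    total = len(test_cases)
    return {
        "build": build.label,
        "failed": failed,
        "percent_run": 100.0 * len(run) / total if total else 0.0,
    }
```

Calling `build_metrics` for each verification build in sequence yields the build-to-build progress curve described above.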
Generic results such as these are helpful. But there are two more things you need to do before you can release your product, and your ALM tools must be front and center here once again. First of all, you need your CRB to be analyzing incoming problem reports and identifying which problems must be fixed prior to release.
One approach is to try to fix them all. And that's OK, as long as you realize that fixing them all is going to introduce additional problems which may take you a few months to uncover. The "fix them all" approach is best done at the beginning of a release cycle so that the side effects can be discovered before you release. Closer to the release date, you need to be very specific about which problems you fix. I've seen some very, very trivial-looking issues cause great problems after being fixed incorrectly - even when the fix was reviewed and appeared simple. I recommend you get into the habit of planning for a "service pack" release following your initial release, and placing all non-critical problems into that service pack, rather than trying to address everything prior to release. (We're talking software here - the same does not apply to hardware, or at least the trade-offs are different.) Often, the must-fix problems are referred to as the "gating" problems (i.e., they gate the release).
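The CRB triage described above can be sketched as a simple split: every incoming problem report either lands on the gating list or is deferred to the planned service pack. The severity values and report fields below are assumptions for illustration; a real CRB applies judgment, not just a severity threshold.

```python
# Severities assumed, for illustration, to gate the release.
GATING_SEVERITIES = {"critical", "blocker"}

def triage(problem_reports):
    """Split problem reports into the gating list and the service-pack list."""
    gating, service_pack = [], []
    for report in problem_reports:
        if report["severity"] in GATING_SEVERITIES:
            gating.append(report)
        else:
            service_pack.append(report)
    return gating, service_pack
```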
The second thing you need to do is get the product into your customers' hands.
Plan alpha and beta releases. Give away the software if you have to, but make sure plain, ordinary, everyday users are going to exercise the product. You will never be able to test all scenarios. If you think you can, consider why NASA, with all of its tight development, review and verification processes, still hits problems in flight or on the launch pad. It's not because of process problems. It's because they have a finite window and budget to complete a task, just like every development project. And it's also because the test environment differs from the specific user environments. Getting the product into users' hands is the real way to evaluate the readiness of a release. It's a key part of release management. Track the issues found specifically against field trials. You'll likely find that users rarely hit the problems your test cases are there to catch. It's usually some more obscure case that never made it in as a test scenario, at least not in the same run-time environment.
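Tracking field-trial issues separately, as suggested above, amounts to tagging each issue with where it was found and then asking which field-trial issues no existing test case would have caught. A minimal sketch, with the issue fields (`source`, `matching_test_case`) assumed for illustration:

```python
from collections import Counter

def issues_by_source(issues):
    """Count issues per discovery source, e.g. 'field_trial' vs 'verification'."""
    return Counter(issue["source"] for issue in issues)

def field_trial_only(issues, test_case_names):
    """Field-trial issues with no matching test case -- the obscure scenarios
    that never made it into the test suite."""
    return [
        i for i in issues
        if i["source"] == "field_trial"
        and i.get("matching_test_case") not in test_case_names
    ]
```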