they can't supply an estimate (which is often the case), I'll propose one, say seven bugs per developer per week. The development plan should tell me how many developers will be available to fix bugs, say five developers.
Now Week One looks like this.
We go into week two with almost 1,000 test cases still to execute and pass and twenty open bugs.
If developers aren't added to fix bugs, the number of open bugs continues to rise. Week two ends with forty open bugs; week three (when we are executing more test cases and finding more bugs) ends with seventy-nine. And so on.
By the end of week four, we will have run through all the test cases once and will be finding fewer new bugs than during the first couple of weeks, at which point the emphasis shifts almost entirely to bug fixing and retesting. When that is complete, so is the endgame.
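The simple model above amounts to a week-by-week tally: bugs found minus bugs fixed, with fix capacity capped by the developers available. Only the fix capacity (five developers at seven bugs per week) comes from the text; the weekly find rates below are hypothetical reconstructions, back-solved so the running open-bug counts match the twenty, forty, and seventy-nine quoted above.

```python
# Minimal sketch of the simple endgame model.
# Fix capacity is from the text; the weekly find rates are
# hypothetical values chosen to reproduce the quoted open-bug counts.

FIX_RATE_PER_DEV = 7   # bugs fixed per developer per week (from the text)
DEVELOPERS = 5         # developers available to fix bugs (from the text)
WEEKLY_FIX_CAPACITY = FIX_RATE_PER_DEV * DEVELOPERS  # 35 fixes per week

bugs_found_per_week = [55, 55, 74, 40]  # hypothetical find rates

open_bugs = 0
for week, found in enumerate(bugs_found_per_week, start=1):
    # Developers can fix at most their weekly capacity, and never
    # more bugs than are actually open.
    fixed = min(open_bugs + found, WEEKLY_FIX_CAPACITY)
    open_bugs = open_bugs + found - fixed
    print(f"Week {week}: found {found}, fixed {fixed}, open {open_bugs}")
```

With these assumed find rates the open-bug count climbs to twenty, forty, and seventy-nine over the first three weeks, mirroring the narrative: capacity stays flat while discovery outpaces it.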
That's the simple model, which provides a rough idea of the end date. You can refine the model by adding more variables reflecting expected reality.
For example, some of the bugs fixed, say one in ten, will fail retest and go back into fix. When planning, I include a few other assumptions, such as a number of reported non-problems that take everybody's time, and a number of environment problems, especially in the first couple of weeks.
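Those refinements slot into the same tally. The one-in-ten retest failure rate is from the text; the flat deduction for time lost to non-problems and environment issues is a hypothetical assumption, as are the weekly find rates.

```python
# Refined sketch: some fixes fail retest and return to the fix queue
# (one in ten, from the text), and some fix capacity is lost to
# reported non-problems and environment issues (hypothetical figure).

RETEST_FAIL_RATE = 0.10   # fraction of fixes that bounce back (from the text)
WEEKLY_FIX_CAPACITY = 35  # 5 developers * 7 fixes/week (from the text)
OVERHEAD_FIXES = 3        # hypothetical: capacity lost to distractions

def week_step(open_bugs, found):
    """Advance the model one week; return the new open-bug count."""
    capacity = WEEKLY_FIX_CAPACITY - OVERHEAD_FIXES
    fixed = min(open_bugs + found, capacity)
    bounced = round(fixed * RETEST_FAIL_RATE)  # fixes that fail retest
    return open_bugs + found - fixed + bounced

open_bugs = 0
for week, found in enumerate([55, 55, 74, 40], start=1):  # hypothetical find rates
    open_bugs = week_step(open_bugs, found)
    print(f"Week {week}: open bugs = {open_bugs}")
```

Even small refinements compound: with the same find rates as before, the reduced effective capacity and the retest bounce-backs leave noticeably more bugs open each week, which is exactly why the backlog, and hence the end date, is so sensitive to these assumptions.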
Like all models, this one has limitations. For one, the calculations tend to be linear while projects aren't. But on several projects I have found that my models weren't far from reality. I once used the model in a phone conversation to convince a project manager and the client's CIO (whom I'd never met) that we needed three more months to test than they had planned for. Impressed by the detail in the model, they bought my plan. In the end the model was short by three weeks, but I didn't think I'd done too badly on a four-month plan, given that the development team decided to do an infrastructure upgrade in the last month that delayed everything.
When we use a planning model like this, based on assumptions built from estimates, it's important for everyone to understand that it is a heuristic device. Estimating endgame activities, including testing, is about as accurate as estimating development or any other project work. The more we know, the closer we can get to something that might work out in reality. But our numbers will always be estimates rather than exact predictions.
Merriam-Webster's Online Dictionary has this definition for "heuristic":
"Of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance."
This definition is a useful reminder that once we have modeled a plan, it's essential to track actuals so we can continually check every assumption as we go through the endgame and adjust the plan accordingly.
Modeling the endgame helps test managers and project managers estimate an end date that the project has a reasonable chance of achieving. That's the date when we should have passed all the test cases and fixed and retested all the bugs that must be fixed. Most importantly, because the model provides an explicit set of assumptions, it's hard for project managers and others to argue that we need less time to test, and it's easier for us to argue for more time if the assumptions turn out to be wrong.