finished. In my book, a module is finished when all the test cases pass, period. However, it is generally accepted wisdom to say that a program is ready for beta testing when approximately 85 percent of the test cases pass. And, although you theoretically need a 100 percent success rate for a module to be production-ready, our client will generally accept going to production with a small number of non-critical issues which can be fixed at a later date. So we also defined a "pre-production" state for modules with at least a 95-percent success rate and no critical issues.
Finally, I find it's motivating for the troops to distinguish modules on which coding has begun from truly new modules.
We distinguish five states representing five development stages, which are objectively measured by the number (or percentage) of test cases which pass:
- Planned: coding hasn't started yet.
- In progress: coding has started.
- Beta: 85 percent of the test cases pass.
- Pre-production: 95 percent of the test cases pass, and there are no critical open issues.
- Production-ready: 100 percent of the test cases pass.
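The five-state classification above is mechanical enough to automate. Here is a minimal sketch in Python, assuming the 85-percent and 95-percent thresholds from the text; the function name and its inputs are illustrative, not taken from any real tool:

```python
# Hypothetical sketch: deriving the five development stages described
# above from test results. The 85% and 95% thresholds follow the text.

def module_state(total_tests, passing_tests, critical_open_issues, coding_started):
    """Return one of the five development stages for a module."""
    if not coding_started:
        return "Planned"
    if total_tests > 0:
        pass_rate = passing_tests / total_tests
        if pass_rate == 1.0:
            return "Production-ready"
        if pass_rate >= 0.95 and critical_open_issues == 0:
            return "Pre-production"
        if pass_rate >= 0.85:
            return "Beta"
    return "In progress"
```

Note that a module at a 96-percent pass rate drops back from "Pre-production" to "Beta" as soon as a critical issue is opened, which matches the definition above.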
Once you have the percentage of passing test cases, you get a pretty good idea of module progress and stability. We present this data in weekly progress reports using graphs such as the one in Figure 2.
Figure 2: A typical project progress report showing progress per module based on test results
An arbitrary 10-percent violet bar (e.g., the eighth bar down in Figure 2) is used to indicate that work has started on a module. This is primarily to encourage developers and to give sponsors a clearer idea of work in progress.
The progress of each module can be followed at a glance, using a color-coded schema:
Figure 3: The work progress color coding
Test-based progress overview
We get a high-level overview of project progress by representing, in terms of number of test cases, the relative weight of modules in each state (see Figure 4). This graph is easy to understand for an outsider and particularly useful for an executive summary chapter in a progress report.
Figure 4: Test-based progress overview
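The overview is a simple aggregation: for each state, sum the test cases of the modules in that state. A minimal illustration (the module names and figures are hypothetical):

```python
from collections import defaultdict

# Hypothetical module data: (name, state, number of test cases).
modules = [
    ("Login",   "Production-ready", 40),
    ("Billing", "Beta",             120),
    ("Reports", "Beta",             60),
    ("Search",  "In progress",      30),
]

# Relative weight of each state, measured in test cases.
weight = defaultdict(int)
for name, state, test_cases in modules:
    weight[state] += test_cases

total = sum(weight.values())
for state, n in weight.items():
    print(f"{state}: {n} test cases ({100 * n / total:.0f}%)")
```

Measuring in test cases rather than module counts keeps a large, heavily tested module from carrying the same weight as a small one.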
The iterative development cycle we use provides a convenient basis for tracking defect data. We try to target client-deliverable versions every one to two weeks, and an internal version weekly or sometimes every few days. The regularity of new versions is more important than the number of new modules or bug fixes in each version. However, QA personnel do like to test the same version for a reasonable length of time before receiving a new one. Delivery target dates are decided together. Before each delivery target date, we decide whether a delivery is feasible (based on the presence of critical issues) and what new modules and bug fixes can be announced to the client.
To do this, we use defect data taken from the defect database to measure product quality and reliability. Overall-defect-status graphs show the number of defects for each defect status (open, to-be-deployed, pending validation, etc.).
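Producing an overall-defect-status graph amounts to counting records per status in the defect database. A small sketch, assuming a hypothetical export format (the field names and statuses are illustrative):

```python
from collections import Counter

# Hypothetical defect records exported from a defect database.
defects = [
    {"id": 1, "module": "Billing", "status": "open"},
    {"id": 2, "module": "Billing", "status": "pending validation"},
    {"id": 3, "module": "Login",   "status": "to-be-deployed"},
    {"id": 4, "module": "Search",  "status": "open"},
]

# Number of defects per status, as plotted in an overall-defect-status graph.
by_status = Counter(d["status"] for d in defects)
```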
We also measure defect status for each deliverable, recording the number of open issues and the total number of issues. This is important for delivery scheduling:
- The number of open issues gives an idea of the current stability of a given module. Is a module presentable to the client in the next iteration?
- The total number of issues gives an idea of how many defects have been found in a given module over its lifetime. I generally find that modules with a history of many issues are more likely to cause problems in the future.
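The two per-module measures above can be tallied from the same defect records. A minimal sketch with hypothetical data; treating "no open issues" as presentable is an illustration of the scheduling question, not a fixed rule:

```python
from collections import defaultdict

# Hypothetical per-module defect records: (module, status).
defects = [
    ("Billing", "open"), ("Billing", "closed"), ("Billing", "closed"),
    ("Billing", "open"), ("Login", "closed"), ("Search", "open"),
]

open_issues = defaultdict(int)   # current stability indicator
total_issues = defaultdict(int)  # lifetime defect count

for module, status in defects:
    total_issues[module] += 1
    if status == "open":
        open_issues[module] += 1

# Modules with no open issues are candidates for the next iteration's delivery.
presentable = [m for m in total_issues if open_issues[m] == 0]
```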