Making Business Sense of CMMI Level 4

Two fundamental characteristics of the inspection process that can be evaluated with control charts are the preparation rate (number of pages per person per hour of effort spent in individual review) and defect density (the number of major defects per size inspected). Inspection process results usually are better when preparation rate is under control, and defect density is one measure of the value of inspections.
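To make the two metrics concrete, here is a minimal sketch of how they are calculated. The function names and the sample numbers are my own illustration, not the organization's actual tooling:

```python
# Illustrative calculations for the two inspection metrics.
# All names and numbers are invented for this example.

def preparation_rate(pages_reviewed, person_hours):
    """Pages examined per person per hour of individual review."""
    return pages_reviewed / person_hours

def defect_density(major_defects, size_in_pages):
    """Major defects found per unit of inspected size."""
    return major_defects / size_in_pages

# Example: 20 pages prepared in 8 person-hours; 6 major defects found.
rate = preparation_rate(20, 8)      # 2.5 pages per person-hour
density = defect_density(6, 20)     # 0.3 major defects per page
```

Note that both metrics divide by a size measure, which is exactly why the choice of size measure matters so much in what follows.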

I recently saw an analysis of inspection data where an organization used control charts to show that the process was under control, but a fundamental oversight compromised the analysis. Both preparation rate and defect density require a size measure. In the case of functional specifications, a measure often used is pages. In this case, the product was a legacy development where the new functions were incorporated into the existing functional specification and noted by change bars.

Size can be counted two ways: by the number of pages with change bars (easy to do) or by a more exact count of just the new and changed lines (somewhat time-consuming and potentially inaccurate for this organization, since the counting was manual and no tool existed in its environment). The company counted size both ways, and both methods produced preparation rate control charts showing a "controlled" process. Defect density told a different story: counting full pages showed a controlled process, but counting changed lines revealed many out-of-control data points.

Whether the process was truly stable depended on the real relationship between size, effort, and defects. Understanding that stability mattered, because inspection results showed large variation in defect density when measured against the new and changed functional specification material. Were the low defect densities the result of poor process execution or of good functional specifications? Were the high defect densities the result of good process execution or of poor-quality specifications? The organization could not decide, because the gathered data was so confounded.

These questions can be evaluated by looking at the process execution. Returning to the preparation rate control charts, it was clear that they differed significantly depending on which counting method was used. When full pages were the size measure, the control limits were at least three times the mean value, an indication that the process was not very capable and that the apparent stability was a byproduct of the counting method, not a true evaluation of the process.
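The pattern described here can be reproduced with a standard individuals (XmR) chart calculation. The data below are invented to illustrate the symptom: preparation rates that vary widely still fall inside the computed limits, because the average moving range inflates the limits far beyond the mean:

```python
# Sketch of individuals (XmR) control chart limits; the data are invented.
def xmr_limits(values):
    """Return (LCL, mean, UCL) for an individuals chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl = mean + 2.66 * mr_bar            # standard XmR constant
    lcl = max(0.0, mean - 2.66 * mr_bar)  # a rate cannot be negative
    return lcl, mean, ucl

# Widely scattered preparation rates (pages/person-hour):
rates = [1.0, 6.0, 2.0, 9.0, 1.5, 7.0, 3.0, 8.0]
lcl, mean, ucl = xmr_limits(rates)

# Every point lies inside the limits, so the chart looks "in control",
# yet the upper limit is several times the mean value.
```

With this data the upper control limit exceeds three times the mean, so a chart that shows no out-of-control points is telling you more about the scatter in the data than about a capable, repeatable process.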

What was overlooked in the initial definition, which was chosen to "make it easier to count," was the fundamental relationship that the calculation of pages per unit of effort is supposed to express. The counting method equated the effort spent on a one-line change with the effort spent on a full page of new material. While the control chart showed no obvious out-of-control indications, the relationship of effort to size was not valid.
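The distortion is easy to quantify. In this sketch (all numbers invented, including the assumed page layout), the same two hours of review effort is sized both ways, and the page-based count inflates the measured size of a one-line change by a factor of fifty:

```python
# Illustration of the sizing distortion; all numbers are invented.
LINES_PER_PAGE = 50  # assumed page layout

# A reviewer spends 2 person-hours on a page containing one changed line.
effort_hours = 2.0
size_as_pages = 1.0                          # page-with-change-bar count
size_as_changed_lines = 1 / LINES_PER_PAGE   # pages of genuinely new material

rate_pages = size_as_pages / effort_hours          # 0.5 pages/hour
rate_lines = size_as_changed_lines / effort_hours  # fifty times smaller
```

Under the page-based count, reviewing one changed line appears to cover as much material as reviewing a full page of new text, which is why the resulting preparation rate chart cannot say anything meaningful about the process.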

About the author

Ed Weller

Ed Weller is an SEI certified High Maturity Appraiser for CMMI® appraisals, with nearly forty years of experience in hardware and software engineering. Ed is the principal of Integrated Productivity Solutions, a consulting firm that is focused on providing solutions to companies seeking to improve their development productivity. Ed is a regular columnist on StickyMinds.com and can be contacted at edwardfwelleriii@msn.com.
