as high as 30 to 40 percent. I have seen defect rates in inspections of one-line fixes vary widely, from 10 percent up to 30 or even 50 percent. Testing usually runs in the 20 to 40 percent range, but this number is highly dependent on the quality of the inspections that precede testing.
I ran across a case where the data in a conference presentation looked funny. When I questioned the author, he admitted mixing major and minor defects on one chart. His reason was that "it made a better story," though he acknowledged that mixing the data was incorrect.
Let the Reader Beware
So what is the poor reader supposed to do? We are faced with incorrect paraphrasing, incomplete explanations, and, on occasion, deliberate misrepresentation. I use the following methods to help me determine whether information is of high quality.
First, look for articles that are peer-reviewed or posted on moderated Web sites. Peer-reviewed journals typically have fewer technical errors. Check the author's acknowledgements to see if reviewers are thanked; this increases the chance that errors have been caught and removed. For Web sites, look for publications that encourage feedback or host discussion forums (StickyMinds.com is an example). Attend conferences where presentations are reviewed for accuracy and whose sponsors, review committees, or program chairs are respected for program content.
Second, beware of hype and exaggerated or out-of-context claims. If a case study or experience report is generalized into a larger context, be skeptical. If the reported cost savings are equal to or greater than the original project cost, something is amiss! Watch for claims of savings or ROI that apply only to one part of the development or test cycle but are quietly generalized over the full project. One study of the schedule improvement attributed to software process improvement claimed a 95 percent improvement. (A twenty-week task would take one week.) I find this hard to believe if applied to the total project schedule. On the other hand, if applied to a single task within the project, I can see how a system build or final test-stage activity could be improved this much by eliminating rework due to defects.
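A quick back-of-the-envelope check helps here (the arithmetic below is mine, not the study's): a claimed improvement of p percent means the remaining duration is the original multiplied by (1 - p/100). At 95 percent, a twenty-week task drops to 20 × 0.05 = 1 week; even a very aggressive 50 percent improvement would still leave 20 × 0.50 = 10 weeks. If a published number fails this simple test at the project level, suspect that it really describes a single task.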
Third, try to find multiple sources of information so you can compare and contrast numbers and results. If there are wide disparities in the results, proceed with caution. When an article paraphrases a source and provides a reference, check the original to see whether your interpretation matches the author's.
Fourth, look to the underlying principles being discussed. This is where the real value lies in most of what we read. Identify the method or process being used, and see if it applies to your situation. Consider running a pilot project to verify that similar results are possible in your organization.
There is a lot of information available; unfortunately, not all of it is accurate. We must evaluate the information we gather objectively. With a little practice, we'll soon be able to separate the good apples from the bad.