a definition of quality to some documentation metrics, we can set goals. These goals are expressed as a level of quality that must be met for a project. Metrics can also be used to measure the quality of the finished document. (JoAnn Hackos describes this method in chapter eight of Managing Your Documentation Projects, John Wiley & Sons, 1994.)
A mechanical metric is one based on something that can be measured mechanically. The number of words on a page, the number of pages in a book, the number of commas in a document—all these are mechanical metrics. The more sophisticated mechanical metrics use computer programs to do various counts and analyses of a text (I discuss some of these below). But the extra sophistication doesn't change the fact that mechanical metrics don't tell us what we need to know. Often, they cause more harm than good.
If a metric isn't a predictor of quality, then it's very likely a mechanical measure, with little or no application in this context, as the table in the sidebar illustrates.
Metrics of Questionable Value
Mechanical metrics are based on concrete measurements, which makes them trendy. The computer industry thrives on statistics and forecasts, and that appetite finds its way into other areas of study.
The problem with mechanical metrics is that to arrive at a viable metrics "matrix," we would need an n-dimensional chart or graph accounting for every factor and variable that influences the writer's productivity. Although we can benefit from measuring our processes by assigning values in a comparative schema, we have to be careful about approaching user documents this way. We aren't in the best position to make that determination: the customer is.
Following are some common mechanical metrics and the problems I see with them.
Page Count

The page-count metric is used to estimate the scope of a project. It is an internal measure, useful for internal planning purposes. By comparing completed projects of similar scope, we can determine the level of effort most likely required to complete the project at hand. We can decide how to allocate resources and schedule production and then use that information to plan for larger projects.
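The planning arithmetic behind this comparison is simple: derive an hours-per-page rate from completed projects of similar scope, then apply it to the expected size of the new project. A minimal sketch, with hypothetical figures and function names of my own (nothing here comes from Hackos's chapter):

```python
def estimate_effort(expected_pages, past_projects):
    """Estimate hours for a new project from the average
    hours-per-page of completed projects of similar scope.

    past_projects is a list of (pages, hours) tuples."""
    total_hours = sum(hours for pages, hours in past_projects)
    total_pages = sum(pages for pages, hours in past_projects)
    return expected_pages * (total_hours / total_pages)

# Hypothetical history: (pages, hours) for three finished manuals.
history = [(120, 480), (200, 760), (80, 360)]
print(estimate_effort(150, history))  # hours, at the historical rate
```

Note that this estimates effort, not quality: the rate says how long a 150-page manual will likely take, not whether 150 pages is the right length.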
However, the number of pages has no intrinsic value. You cannot draw a correlation between the length of a document, whether 500 pages or 50, and its inherent communication or knowledge-transfer quality.
Readability Scores

The three major readability metrics used in communication today are the Gunning Fog Index (expressed in grade-level units), the Flesch-Kincaid Grade Level (also grade-level units), and the Flesch Reading Ease score (with 100 representing the greatest ease of readability and 0 representing the least). All three are generated computationally: a computer program examines the copy and assigns a score.
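These scores are simple formulas over word, sentence, and syllable counts. As an illustration, here is a minimal sketch of the Flesch Reading Ease computation; the vowel-group syllable counter is a naive stand-in (production tools use pronunciation dictionaries), and the sample sentences are my own:

```python
import re

def count_syllables(word):
    """Naive syllable count: number of vowel groups, with a rough
    silent-e adjustment. Real tools use dictionary lookups."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat."))  # high: easy copy
print(flesch_reading_ease(
    "Architectural specification documentation necessitates "
    "comprehensive organizational evaluation."))       # low: dense copy
```

The formula rewards short sentences and short words, which is exactly why it can mislead: a dense specification written for experts scores "difficult" no matter how well it serves its readers.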
The problem with readability metrics in technical documentation is validity: the technical background of the target audience determines the baseline of our measurement. For example, I helped write a software architectural specification overview for one of our product lines, and I edited the entire document. The Flesch Reading Ease score for the specification was 17, indicating a difficult-to-read document; however, the executive management team and the board of directors praised the document as well written and easy to understand.
In this example, the executive management team and the board of directors constituted the audience. It was their assessment that mattered, not the arbitrary numbers on the reading scale.
Pages per Day
This metric is also sometimes called the "documentation rate." Management may like it because it's straightforward, but what real value does it have?