of a "directory" - Large directory fan-outs are hard to look at. They require real estate and/or scrolling. It's harder to classify their contents. You may want to set some reasonable guidelines here, especially considering that a suggested maximum of only 20 is enough to give you 8,000 files at the third level of fanning out. It's a lot easier to navigate 8,000 files with a fanout of 20 than it is when you have a couple of directories with hundreds of files. I'm sure that with some CM tools, this could make a performance difference as well.
Number of Files per Subsystem, per Product - How big and complex are the subsystems and the product overall? More important, perhaps, is how this is changing over time. You might want to relate this to the lines-of-code metrics to get some meaningful trends.
Lines of Code (by file, by subsystem, by file type, etc.) - Why is this such an interesting metric? Certainly it's useful for computing coding/design error rates, but usually it's just used to compare the size of projects. Unfortunately, I've often seen the same, or even better, functionality produced with just a tenth the amount of code - so perhaps it's a better way to measure the effectiveness of your designers.
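A raw line count by file type is trivial to gather. Here is a minimal sketch (the extension list is purely illustrative - adjust it to your codebase, and note this counts physical lines, not logical statements):

```python
import os
from collections import Counter

def lines_of_code(root, extensions=(".c", ".h", ".py")):
    """Count physical lines per file extension under root."""
    totals = Counter()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1]
            if ext in extensions:
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as f:
                    totals[ext] += sum(1 for _ in f)
    return totals
```

Running this per subsystem directory and recording the totals over time gives you the trend data mentioned above.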
Revisions of a File by Stream - This is not much different from the "most frequently changed files" metric above. However, breaking it down by stream allows you to look at maintenance and support as opposed to the initial introduction of the file and the related functionality.
Branches per File - This is interesting in a stream-based development environment (where only one branch is created per stream) to get a good idea of what percentage of the files are modified for each release. In a more arbitrary branching environment, it can reflect any number of characteristics, depending on the branching strategy.
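The "percentage of files modified per release" calculation is simple once your CM tool can report the full file list and the set of files branched or changed in a stream. A minimal sketch, assuming you can extract both lists from your tool (the function name and arguments are my own):

```python
def percent_modified(all_files, changed_files):
    """Percentage of the repository's files touched in a release.
    Both arguments are collections of file paths, e.g. taken from
    your CM tool's file inventory and its branch/change report."""
    all_files = set(all_files)
    changed = set(changed_files) & all_files
    return 100.0 * len(changed) / len(all_files) if all_files else 0.0
```

For example, `percent_modified(["a", "b", "c", "d"], ["a", "b"])` reports that half the files were touched in the release.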
Delta compression level per file type - This metric, which gives the compression ratio for each file type, is useful for estimating the disk space requirements of your repository.
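If your CM tool doesn't report this directly, you can get a first-order estimate with ordinary whole-file compression. This is only an approximation - true delta storage compresses differences between revisions, not whole files - but it ranks file types similarly:

```python
import os
import zlib
from collections import defaultdict

def compression_ratio_by_type(root):
    """Estimate compressed/original size per file extension
    using zlib whole-file compression. A ratio near 1.0 means
    the type (e.g. already-compressed images) gains little."""
    raw = defaultdict(int)
    packed = defaultdict(int)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1] or "(none)"
            with open(os.path.join(dirpath, name), "rb") as f:
                data = f.read()
            raw[ext] += len(data)
            packed[ext] += len(zlib.compress(data))
    return {ext: packed[ext] / raw[ext] for ext in raw if raw[ext]}
```

Multiplying each type's ratio by its expected revision volume gives a rough repository sizing figure.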
Other Functional Areas
There are plenty of other areas I haven't covered, each one important. As you go through the list below, think about what other metrics are important to you and how they can help you improve your processes. You may want to consider, as part of your process improvement effort, putting together a list of metrics for each process area - those you currently use, and those you still need.
Test Suite Management
- Test case coverage (by Problem, by Feature, by Requirement)
- Test case failure rates
- Project checkpoint completion rates (S-Curve and predictions)
- Effort per Activity/Feature
- Actual vs Planned Effort per Activity/Feature (by Dept)
- New Documents Per Week
- Changed Documents Per Week
- Average Time to Review
- Average Approval Time
- Customer Requests Raised/Implemented per Release
- Number of Communications per Customer
Metrics on Metrics - How Well are They Working
When I want to know how well metrics are working, I look at a number of factors. These include some other metrics which I would tend to measure more on a quarterly basis.
Changes per Process Area per Quarter - More specifically, how many process changes are occurring as a result of other metrics?
Missing data fields per record [field values per 100 records] per Quarter - When data fields are not being set, metrics based on them are suspect, and processes need to change to ensure they are set. This metric tracks whether or not proper attention is being paid to data entry forms. As this number goes down, my data is