20,000 new files to be distributed to all sites.
License checkout usage - This allows us to know when we will need more licenses. It also helps determine our ratio of floating licenses to users and average tool usage times.
CM Repository growth rate - This will affect both performance and disk space. Track it to verify the vendor's performance claims and to ensure our overall data growth has been adequately planned out.
Application data growth rate - This is a finer measure of growth rate on a per application basis. If the overall growth rate is too fast, this will help pinpoint and react to the areas of concern.
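A growth-rate metric like this can be computed from periodic size snapshots. The sketch below, with hypothetical per-application figures, shows one way to turn monthly repository sizes into an average month-over-month growth rate so the fast growers stand out:

```python
def monthly_growth_rate(sizes_mb):
    """Average month-over-month growth rate (as a fraction) from a
    chronological list of repository size snapshots, in MB."""
    if len(sizes_mb) < 2:
        return 0.0
    rates = [(b - a) / a for a, b in zip(sizes_mb, sizes_mb[1:])]
    return sum(rates) / len(rates)

# Hypothetical per-application snapshots, one per month.
app_sizes = {
    "flight_sw": [1200, 1260, 1330, 1410],
    "ground_sw": [800, 808, 816, 824],
}
for app, sizes in app_sizes.items():
    print(f"{app}: {monthly_growth_rate(sizes):.1%} per month")
```

If one application is growing at 5% a month while the rest sit near 1%, that's where to start asking questions.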
Administrator Effort to Support CM Tool Operation - A CM tool has many cost components. If Administration is a key component, track the costs. This will give you a cost-savings objective for the next time you need to upgrade your CM tool suite. (There are very good CM tools out there that require almost no administration.)
Build Preparation Time for a Single Build - This metric is a useful one to help identify how well your CM tool is working. Once product development work is done, how much effort does it take you to prepare for a build: promotion, merging, defining the build contents, producing build notes, retrieving source code, launching the build processes? If your effort is significant, you may want to split this into different metrics, and that's good. But I would also recommend you keep the overall number. You want to drive this cost down to near zero.
Changes per Designer - You might want to break this one out further. If a designer has a lot of changes, is it because a lot of rework is required, or because you have Super-Designer on your team?
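If your CM tool can export a change log, this tally is a one-liner. A minimal sketch, using hypothetical (designer, change-id) records:

```python
from collections import Counter

def changes_per_designer(change_log):
    """Count submitted changes per designer from (designer, change_id) records."""
    return Counter(designer for designer, _ in change_log)

# Hypothetical change records exported from the CM tool.
log = [("ann", "C101"), ("bob", "C102"), ("ann", "C103"), ("ann", "C104")]
print(changes_per_designer(log).most_common())
```

A high count alone doesn't tell you which case you're in; it tells you whose changes to sample for rework.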
Files per Change (Bug fixes, Features) - Here's a good one to look at. How many files change for the average bug fix? I'll bet it's very close to one. But if it's the same for feature implementation, you can probably reduce file contention by breaking these changes into a series of smaller changes.
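Splitting the average by change type is what makes this metric actionable. A sketch, assuming each change record carries a type tag and its file list (the tags and file names here are hypothetical):

```python
from statistics import mean

def files_per_change(changes):
    """Average file count per change, grouped by change type."""
    by_type = {}
    for ctype, files in changes:
        by_type.setdefault(ctype, []).append(len(files))
    return {ctype: mean(counts) for ctype, counts in by_type.items()}

# Hypothetical change records: (type, files touched).
changes = [
    ("bugfix", ["parser.c"]),
    ("bugfix", ["lexer.c"]),
    ("feature", ["parser.c", "parser.h", "main.c", "notes.txt"]),
]
print(files_per_change(changes))
```

Bug fixes hovering near one file with features far above it is the healthy pattern; features also near one may mean they were already well decomposed.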
Lines of Code per Change - How many lines of code are added/modified/deleted per change. Check this out over time. If it's growing, then your modularity plan may not be working.
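To "check this out over time," bucket changes by period and compare averages. A minimal sketch with hypothetical lines-touched samples, one list per quarter:

```python
def loc_per_change_trend(quarterly_changes):
    """Average lines touched per change for each period, in order.
    A steadily rising series suggests the modularity plan is slipping."""
    return [sum(q) / len(q) for q in quarterly_changes]

# Hypothetical lines-added/modified/deleted per change, grouped by quarter.
quarters = [[12, 8, 20], [15, 25, 14], [40, 35, 33]]
trend = loc_per_change_trend(quarters)
print(trend)
if all(a < b for a, b in zip(trend, trend[1:])):
    print("LOC per change is growing quarter over quarter")
```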
Header File Changes per Build - You'll want to look at this across an entire stream. Ideally, this is high at the start of a stream, low afterwards, and almost non-existent once verification testing has started. This is a good measure of your product stability.
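Tracking this across a stream just means counting header files in each build's changed-file list. A sketch, assuming you can pull those lists from your CM tool (build contents here are hypothetical):

```python
def header_changes_per_build(builds):
    """For each build, count changed files that look like interface/header files."""
    return [sum(1 for f in files if f.endswith((".h", ".hpp"))) for files in builds]

# Hypothetical changed-file lists for successive builds in one stream.
builds = [
    ["api.h", "core.c", "util.h"],   # early in the stream: interfaces settling
    ["core.c", "util.h"],
    ["core.c"],                      # verification phase: no header churn
]
print(header_changes_per_build(builds))
```

The profile you want is exactly the one described above: the series starts high and decays toward zero as the stream stabilizes.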
Most frequently Changed Files - How many times do your files change? I looked at this today and thought I had some interesting results. But when I zoomed in, it was pretty much what I should have expected. Still, I was able to identify files which are candidates for architectural re-engineering. If a file is changing a lot, there will likely be contention on it. So the high runners are a good place to look and ask why. You may find that there are architectural solutions which can simplify your product design considerably.
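Finding the high runners is a simple tally across your change records. A sketch with hypothetical file lists:

```python
from collections import Counter

def hottest_files(changes, top=5):
    """Rank files by how many changes touch them; the high runners are
    candidates for architectural review."""
    counts = Counter(f for files in changes for f in set(files))
    return counts.most_common(top)

# Hypothetical changed-file lists, one per change.
changes = [
    ["router.c", "config.h"],
    ["router.c"],
    ["router.c", "log.c"],
    ["config.h"],
]
print(hottest_files(changes))
```

The same tally, filtered to header files, gives you the next metric's high-runner list as well.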
Most frequently Changed Interface/Header Files - When interface files are changing frequently, there's a problem in their initial design. The high runners should be the target of a design review. It may be simply that there's a symbolic range that keeps growing (e.g. for each new message type, or each new command or GUI button). That's not a serious problem, but you still might want to consider a dynamic allocation of these range elements.
Files and Directories