Actual vs Planned Effort per Activity/Feature (by Dept) per Quarter - This metric will show whether my effort estimations are improving over time. The first few times someone makes an estimate, it's not unusual to see a difference of a factor of 2 or 3. But if the estimator is not learning from his/her mistakes, it's time to review the process improvement agenda.
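A minimal sketch of how this metric could be computed, assuming effort records are available as (quarter, department, activity, planned hours, actual hours) tuples - the sample records and field layout here are purely hypothetical:

```python
from collections import defaultdict

# Hypothetical effort records: (quarter, dept, activity, planned_hours, actual_hours)
records = [
    ("2024Q1", "Dev", "login feature",  40, 95),
    ("2024Q1", "QA",  "login tests",    20, 35),
    ("2024Q2", "Dev", "search feature", 60, 80),
    ("2024Q3", "Dev", "export feature", 50, 55),
]

# Average actual/planned ratio per (quarter, dept). A ratio drifting
# toward 1.0 over successive quarters means the estimates are improving.
totals = defaultdict(lambda: [0.0, 0])
for quarter, dept, _activity, planned, actual in records:
    totals[(quarter, dept)][0] += actual / planned
    totals[(quarter, dept)][1] += 1

for (quarter, dept), (ratio_sum, n) in sorted(totals.items()):
    print(f"{quarter} {dept}: avg actual/planned = {ratio_sum / n:.2f}")
```

Plotting that ratio per department per quarter makes the learning trend (or lack of one) obvious at a glance.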
CM Tool Performance Queries - Is the CM tool working?
While we're at it, how about some metrics to assess how well your CM tool is working? I might challenge the CM vendors out there to post their own results for these metrics. These are common use cases for CM users and managers. Assume a client platform of about 2.5 GHz, with whatever configuration server you recommend to your clients.
Basic CM Tool Metrics
- Time to query full file history (all change meta data) [sec. per 100 revisions]
- Time to query full file delta history (all code changes) [sec. per 100 revisions]
- Time to retrieve files [sec. per 1000 files - (avg. file size 50K)]
- Time to perform build comparison (code changes) [sec. per 100 revisions]
- Time to search problem reports (by title only, by full description only) [sec. per 1000 problem reports]
- Time to bulk load files [sec. per 1000 files]
- Time to change context view [sec]
- Time to generate sorted Report for Problems - single line, full descriptions [sec. per 1000 problems]
- Time to start CM Tool - i.e. when client has control (command line client, GUI client) [sec]
- Time to create a baseline (based on a set of marked files/changes) [sec. per 1000 members]
- Time to freeze a baseline [sec]
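Most of these measurements share the same shape: time one operation, then normalize to the stated unit. A small harness like the following sketch can wrap any of them - the sleep below is just a stand-in for a real CM tool query, which you would supply as the callable:

```python
import time

def seconds_per_unit(operation, unit_count, per=100):
    """Time a single CM operation and normalize the result,
    e.g. to seconds per 100 revisions or per 1000 files."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return per * elapsed / max(unit_count, 1)

# Stand-in for a full-history query over a file with 250 revisions;
# replace the lambda with a call into your own CM tool.
rate = seconds_per_unit(lambda: time.sleep(0.05), unit_count=250, per=100)
print(f"{rate:.3f} sec. per 100 revisions")
```

Running each probe a few times and taking the median guards against a cold cache skewing the first measurement.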
Advanced CM Tool Metrics
- Time to perform build traceability (all meta data - problems fixed, features, change descriptions, etc.) [sec. per build]
- Time to produce traceable source file (each line traced to file revision) [sec. per 1000 lines]
- Time to identify which revisions of a file contain a Function name [sec. per 100 revisions, avg. 1000 lines per file]
- Time to automatically identify and check-in changes to a file tree [sec. per 1000 files]
- Time to generate Requirements Tree document [sec. per 100 requirements]
- Time to generate an Activity WBS (Work Breakdown Structure) document [sec. per 100 requirements]
- Time to distribute a basic data change to all sites (MultiSite normal operation) [sec.]
- Time to distribute file change to all sites (MultiSite normal operation) [sec. per 100KB file]
- Time to generate MakeFile [sec. per 100 dependents]
- Time to identify Dependencies on a header file [sec. per 1000 files in scope]
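To make the last item concrete, here is a naive, tool-agnostic sketch of the "dependencies on a header file" probe: scan a source tree for #include lines and normalize the elapsed time to seconds per 1000 files in scope. A real CM tool would answer this from its database rather than by scanning text, so treat this as a baseline to beat; the demo tree at the bottom is hypothetical:

```python
import os
import re
import tempfile
import time

def header_dependents(root, header_name):
    """Scan a source tree for files that #include the given header,
    returning (list of dependent files, sec. per 1000 files scanned)."""
    pattern = re.compile(r'#\s*include\s*[<"].*%s[>"]' % re.escape(header_name))
    hits, scanned = [], 0
    start = time.perf_counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".c", ".cc", ".cpp", ".h", ".hpp")):
                continue
            scanned += 1
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    if any(pattern.search(line) for line in f):
                        hits.append(path)
            except OSError:
                pass  # unreadable file; skip it
    elapsed = time.perf_counter() - start
    return hits, 1000.0 * elapsed / max(scanned, 1)

# Demo on a tiny throwaway tree: one file includes the header, one doesn't.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "uses.c"), "w") as f:
    f.write('#include "common.h"\nint main(void) { return 0; }\n')
with open(os.path.join(demo, "other.c"), "w") as f:
    f.write("int x;\n")

hits, rate = header_dependents(demo, "common.h")
print(len(hits), "dependent file(s);", f"{rate:.4f} sec. per 1000 files")
```

A CM tool that stores dependency metadata should answer this query orders of magnitude faster than a raw filesystem scan.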
Metrics vs Limits for a CM Tool
Another related area for CM tools, as for any system, is the set of quantifiable limitations on the tool. These are really a different form of metric - one that will change much more slowly over time. How about the following easy ones? I would hope they're all sufficiently high:
- Maximum number of directories/files supported
- Maximum file size
- Maximum revisions per file
- Maximum files per change
- Maximum number of problems
- Maximum number of changes
- Maximum number of branches
How easy is it for you to make a new metric visible?
You'll need a tool that makes it easy to track metrics. If it isn't easy, you'll find you're not using metrics as much as you should be. Ideally, your CM tool suite has suitable metrics functionality. There are a number of things to consider here.
First, you want it to be easy to obtain your metrics. Maybe you have an overnight job generating them. Or perhaps you can just go into the tool and