switch back and forth between checkpoint contexts and do queries between checkpoints. They're different from baselines. Perhaps a baseline is to CM as a checkpoint is to data. But having the checkpoint capability does take away the reasons that we might otherwise have for applying CM to all of our data.
Configuration and Data Management
So let's return to our original discussion.
There are objects that we perhaps will agree don't need Configuration Management - problem reports, build records, change records, activities, users. I'm sure I could make a case for CM for some of these things, but I won't until I really see a need. If the CIA needs to track revisions of a user so that it knows what permissions and brain implants the user had on a specific date, we'll adapt our user management to permit that. In fact, you might find that version control of users is not so far out when you start to consider staff relationships. But it's a capability that has to be addressed as needed, and in such a way as to keep things as simple as possible.
Application Life-Cycle Management depends not only on CM capabilities, but on other generic data as well. At times, it's hard to separate CM from DM (Data Management). A new revision of a requirement is really just another requirement record that is tied to its predecessor requirement revision record, isn't it? We're back to our yes-and-no response. Databases generally don't understand concepts like revisions, history and baselines. Revisions of a requirement form a history. They can be collected into baselines. And although it's ultimately the database that represents these relationships, it's CM that understands them. Without the CM, we need a data interpreter.
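A minimal sketch of that idea, with hypothetical names: each requirement revision is just another record pointing at its predecessor, the predecessor chain is the history, and a baseline is nothing more than a chosen set of specific revisions. The database sees only records and links; the CM layer is the part that walks them.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record shape - a revision is just another record
# tied to its predecessor revision record.
@dataclass(frozen=True)
class RequirementRev:
    req_id: str                                   # identity of the requirement
    rev: int                                      # revision number
    text: str
    predecessor: Optional["RequirementRev"] = None

def history(r: Optional[RequirementRev]) -> list:
    """The CM layer's job: walk predecessor links to recover the history."""
    out = []
    while r is not None:
        out.append(r)
        r = r.predecessor
    return out

r1 = RequirementRev("REQ-1", 1, "User shall log in")
r2 = RequirementRev("REQ-1", 2, "User shall log in with two factors", predecessor=r1)

# A baseline is just a selection of specific revisions, one per requirement.
baseline = {rev.req_id: rev for rev in [r2]}

print([rev.rev for rev in history(r2)])  # [2, 1]
print(baseline["REQ-1"].rev)             # 2
```

Without the walking logic, the predecessor links are just foreign keys; that function is the "data interpreter" the text refers to.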
Advanced DM tools (or Hybrid Databases, as I might refer to them) will let us express all sorts of data, data dependencies and data relationships. Good CM tools will let us look at data from a specific configuration viewpoint. They will let us specify revisions and branches, and so forth. An integrated CM and DM tool will do so much more. Try storing the "include" relationships of each file for each revision. Any database will let you do that with the appropriate schema. It's just that when you get half a million file revisions, you realize that the 50 million include relationships are causing a bit of a performance issue.
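The naive schema is easy to picture. This is an illustrative sketch, not any real tool's schema: one row per (file revision, included file) pair, so every revision repeats its full include list even when nothing changed.

```python
import sqlite3

# Illustrative schema: every file revision stores its complete
# include list, one row per included file.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE includes (
        file_rev TEXT,   -- e.g. 'main.c@7'
        included TEXT    -- e.g. 'util.h'
    )
""")
conn.executemany(
    "INSERT INTO includes VALUES (?, ?)",
    [
        ("main.c@7", "util.h"),
        ("main.c@7", "io.h"),
        ("main.c@8", "util.h"),  # include list mostly repeats revision 7's
        ("main.c@8", "io.h"),
    ],
)
rows = conn.execute("SELECT COUNT(*) FROM includes").fetchone()[0]
print(rows)  # 4
```

At small scale this works fine; at half a million file revisions averaging a hundred includes each, the table holds on the order of 50 million rows, most of them duplicating their predecessor's list.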
Take an integrated CM/DM tool that will automatically difference include lists and store a new one only when it changes. Or take an integrated CM/DM tool that will let you compute the "affected" source files, not for a specific configuration, but for an arbitrary rule-based configuration that changes over time, even when the include lists which dictate the "affects" relationship don't. This integrated entity understands data and CM.
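The first idea can be sketched in a few lines. This is an assumed delta-storage scheme, not a real tool's implementation: a revision records its include list only when it differs from its predecessor's; every other revision inherits the most recent recorded list.

```python
# Hypothetical delta storage: store is a dict keyed by (file_id, rev),
# holding an include list only at revisions where it changed.

def lookup_includes(store, file_id, rev):
    """Walk back to the most recent revision that recorded a list."""
    while rev > 0:
        if (file_id, rev) in store:
            return store[(file_id, rev)]
        rev -= 1
    return []

def record_includes(store, file_id, rev, includes):
    """Difference against the predecessor; store only on change."""
    if lookup_includes(store, file_id, rev - 1) != list(includes):
        store[(file_id, rev)] = list(includes)

store = {}
record_includes(store, "main.c", 1, ["util.h", "io.h"])
record_includes(store, "main.c", 2, ["util.h", "io.h"])  # unchanged: nothing stored
record_includes(store, "main.c", 3, ["util.h"])          # changed: new list stored

print(len(store))                            # 2
print(lookup_includes(store, "main.c", 2))   # ['util.h', 'io.h']
```

Three revisions cost two stored lists instead of three; across half a million revisions where include lists rarely change, the savings dominate.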
Show Me the Power
Integrated CM/DM will be more prevalent in next generation CM systems. These tools will allow complex data queries on CM data. Here are a few of my favorites.
- Problem Fixes Missing From a Release Stream. Take a set of files and two development streams. Go back through the history of each file in each development stream and look at the change records used to produce each revision. Now identify the problem reports that were addressed in one stream (i.e., referenced by the changes) but not in the other. Now I know which problems were fixed in one stream, for a set of files (say, my whole product source tree), that potentially need to be addressed in the other stream.
- Build Comparisons. Take two build record definitions