Configuration Management Planning: What To Do Before You Start


This is more attractive because there are well-known sets of procedures, tools, and expertise that can be harvested with a more or less predictable payback and a known set of problems. Time constraints often push a project into the follow-the-leader approach, but in a sufficiently large project it's really worth the effort to push the state of the art, or at least to set that as a goal. You may find that there are advanced tools, processes, and technologies that are ready for the mainstream and that will give you a competitive edge.

Some Examples
Many of the big strides we've taken in CM were made in the CM groups of large telecom companies. In the telecom companies where I headed up the CM group, I was always aggressive. In the late 1970s, when CM was really just version control and some build assistance, we took the time to analyze the situation. Two big results seemed to stare us in the face:

(1) Changes, not file revisions, have to drive the software CM process. That's how everyone always worked: they set about implementing a change but then saved the change as a bunch of file revisions. The key information was lost. As a result, we were held hostage to the practice of dumping file revisions into the repository, trying to build, and then fixing the problems until we had a stable base to go to the next round. After a couple of iterations on a change-based CM theme, we settled on the principle that the change had to drive the downstream effort. Changes, not file revisions, were promoted. Baselines and non-frozen configurations were automatically computed based on change states. Context views were based on baselines supplemented by changes. This was a huge success, but not without the other key result.
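The core idea can be sketched in a few lines. This is a hypothetical illustration, not the actual tool described above: the `Change` class, its states, and `compute_config` are invented names, but they show how a configuration can be computed from change states rather than from ad hoc file labels.

```python
# Sketch of change-based CM: file revisions are grouped into a change,
# changes (not individual files) are promoted, and a configuration is
# computed from change states. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Change:
    """A logical change: a set of file revisions treated as one unit."""
    change_id: str
    revisions: dict          # file path -> revision id
    state: str = "open"      # open -> promoted

    def promote(self):
        self.state = "promoted"

def compute_config(changes):
    """Compute a configuration from promoted changes only.

    Later promoted changes override earlier revisions of the same file,
    so the baseline is derived automatically from change states.
    """
    config = {}
    for change in changes:
        if change.state == "promoted":
            config.update(change.revisions)
    return config

c1 = Change("c1", {"a.c": "a.c@2", "b.c": "b.c@5"})
c2 = Change("c2", {"a.c": "a.c@3"})
c1.promote()
baseline = compute_config([c1, c2])   # c2 is still open, so it's excluded
# baseline == {"a.c": "a.c@2", "b.c": "b.c@5"}
```

Once `c2` is promoted, recomputing the configuration picks up `a.c@3` with no manual labeling step, which is the point of driving everything from change states.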

(2) In the '70s and throughout CM history, including today, many, if not most (read "all" in the '70s), software shops believed that branching was best if it were flexible enough that you could multiply branches upon branches upon branches. Of course, there was a cost for merging the branches, eventually back into a main branch, but that was simply an accepted cost. In 1978, we identified that a branch was required when making a change only if the previous code needed continued support apart from the change. We attacked the question of what that meant and in the end evolved stream-based persistent branches instead of a single main trunk. We pushed further to identify what was required to minimize parallel checkouts and addressed those issues one by one. In the end, we built a stream-based CM tool that would grow to support 5,000 users on a small number of mainframes (no real network existed back then).
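The stream model above can also be sketched. This is an illustrative model, not the actual tool: the stream names and `apply_change` helper are invented, but they show the pattern of one persistent branch per release stream, with changes propagating from older streams to newer ones instead of through ad hoc parallel branches.

```python
# Sketch of stream-based branching: each release stream is a persistent
# branch, oldest first, and a change applied to an older stream flows
# forward to every newer stream. Names are illustrative.

streams = {"r1": [], "r2": [], "dev": []}   # persistent branches
order = ["r1", "r2", "dev"]                 # oldest stream first

def apply_change(change_id, first_stream):
    """Apply a change to a stream and propagate it to all newer streams.

    This models "one branch per stream": no per-task branches, and
    merging is stream-to-stream in a fixed direction.
    """
    start = order.index(first_stream)
    for stream in order[start:]:
        streams[stream].append(change_id)

apply_change("fix-123", "r1")   # critical fix: lands in r1, r2, and dev
apply_change("feat-9", "dev")   # new feature: dev only, no new branch
# streams == {"r1": ["fix-123"], "r2": ["fix-123"],
#             "dev": ["fix-123", "feat-9"]}
```

A new stream is created only when an older release must keep supporting its pre-change code, which is exactly the 1978 criterion for when a branch is actually required.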

The results were astounding: simple two-dimensional branching, with one branch per stream, in one of the most successful telecom projects of all time (at Nortel). Very little training was required (less than a day) to educate users on a command-line-based tool (GUIs didn't exist yet). There was no need for complex branching strategies, labeling, or even, for the most part, parallel checkouts and merging. Ninety-five percent of the merges were from one development stream to another, not between parallel branches. It was a simple, scalable solution still in use to this day (I think there's a GUI now, though). Quite frankly, we didn't know how good a job we'd done until a decade later (in the late '80s), when we started looking at the industry as a whole and where it was.

AgileConnection is a TechWell community.