to drive the Software CM Process. That's how everyone always worked - they set about implementing a change but then saved the change as a bunch of file revisions. The key information was lost. As a result we were held hostage to the practice of dumping file revisions into the repository, trying to build, and then fixing the problems until we had a stable base for the next round. After a couple of iterations on a change-based CM theme, we settled on the fact that it was the change that had to drive the downstream effort. Changes, not file revisions, were promoted. Baselines and non-frozen configurations were automatically computed from change states. Context views were based on baselines supplemented by changes. This was a huge success, but it did not come about without the other key result.
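To make the model concrete, here is a minimal sketch in Python, with invented file names, revision numbers, and promotion states rather than the actual tool's data model: the change, not the file revision, carries the state, and a context view is just the baseline supplemented by every change at or above a chosen promotion level.

    # Hypothetical sketch of change-based configuration computation
    # (invented names and states; not the original tool's data model).

    BASELINE = {            # file -> revision captured in the last baseline
        "alloc.c": 3,
        "sched.c": 7,
    }

    CHANGES = [             # each change carries its file revisions and one state
        {"id": "C101", "state": "prod", "revs": {"sched.c": 8}},
        {"id": "C102", "state": "test", "revs": {"alloc.c": 4, "io.c": 1}},
        {"id": "C103", "state": "open", "revs": {"alloc.c": 5}},
    ]

    LEVELS = ["open", "test", "prod"]   # promotion levels, lowest to highest

    def context_view(min_state):
        """Baseline supplemented by every change at or above min_state."""
        view = dict(BASELINE)
        floor = LEVELS.index(min_state)
        for change in CHANGES:
            if LEVELS.index(change["state"]) >= floor:
                view.update(change["revs"])   # the change is promoted as a unit
        return view

    print(context_view("test"))   # {'alloc.c': 4, 'sched.c': 8, 'io.c': 1}
    print(context_view("prod"))   # {'alloc.c': 3, 'sched.c': 8}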
(2) In the '70s and throughout CM history, including today, many, if not most (read "all" in the '70s) software shops believed that branching was best if it were flexible enough so that you could multiply branches upon branches upon branches. Of course there was a cost for merging the branches, eventually back into a main branch, but that was simply
an accepted cost. In 1978 we identified that a branch was required for a change only if the previous code needed continued support apart from that change. We attacked the question of what that meant and eventually evolved stream-based persistent branches instead of a single main trunk. We pushed further to identify what was required to minimize parallel checkouts and addressed those issues, one by one. In the end, we built a stream-based CM tool that would grow to support 5000 users on a small number of mainframes (no real "network" existed back then).
The results were astounding. Simple two-dimensional branching, with one branch per stream, in one of the most successful telecom projects of all time (at Nortel). Very little training (less than a day) was required to bring users up to speed on a command-line based tool (GUIs didn't exist yet). There was no need for complex branching strategies, labelling, or even, for the most part, parallel checkouts and merging. 95% of the merges were from one development stream to another, not between parallel branches. It was a simple, scalable solution still in use to this day (I think there's a GUI now, though). Quite frankly, we didn't know how good a job we'd done until a decade later (late '80s) when we started looking at the industry as a whole and where it was.
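As a rough illustration of the stream model, again with invented names rather than anything from the actual tool: each supported release line is one persistent branch, a change is checked in against exactly one stream, and most merges carry a fix from an older stream forward into a newer one.

    # Hypothetical sketch of stream-based (two-dimensional) branching:
    # one persistent branch per release stream, no per-change feature branches.

    streams = {                      # stream -> {file: latest revision in that stream}
        "R1": {"sched.c": "R1.4"},   # older release, still supported
        "R2": {"sched.c": "R2.1"},   # current development stream
    }

    def check_in(stream, path, new_rev):
        """Record a change directly in its target stream."""
        streams[stream][path] = new_rev

    def propagate(from_stream, to_stream, path):
        """Carry a change from one stream to another (the dominant kind of merge)."""
        return "merge %s %s -> %s" % (path, streams[from_stream][path], to_stream)

    check_in("R1", "sched.c", "R1.5")          # fix made in the supported release
    print(propagate("R1", "R2", "sched.c"))    # merge sched.c R1.5 -> R2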
The point is that some analysis, and a resolve to do things right, resulted in a highly successful solution.
Another goal we set for ourselves was to automate. This started at Nortel in the late '70s, where our nightly build process would automatically test-compile, notify developers of problems before they left for the day, and produce the builds required each day at the various promotion levels. In the '80s at Mitel, we took this one step further so that we could even download the executables (over an RS232 link) onto the test targets and run predefined test suites against them with virtually a single push of a button.
In both cases we would automatically compute what needed to be compiled, based on change status and "include/uses" dependencies, so that we would not have to compile the world every night. (A 1 MIPS machine [a VAX 780] was a powerful computer back then, and could still support dozens of users, but could not take the load of performing several thousand compiles in just a few hours.) So we focused on automating, and then on optimizing the automation to use as few resources as possible.
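A minimal sketch of that computation, assuming an invented "include/uses" table rather than the original mainframe tooling: start from the files touched by the changes selected for the night's build and expand through the reverse dependency graph, so that only the affected sources get recompiled.

    # Hypothetical sketch: compute what to recompile from changed files plus
    # "include/uses" dependencies (invented file names and graph).

    from collections import deque

    USES = {                          # source -> headers/modules it includes or uses
        "io.c":    ["io.h"],
        "sched.c": ["io.h", "sched.h"],
        "alloc.c": ["alloc.h"],
    }

    def impacted(changed):
        """Changed files plus everything that transitively includes/uses them."""
        used_by = {}                  # invert the graph: item -> files that use it
        for src, deps in USES.items():
            for dep in deps:
                used_by.setdefault(dep, []).append(src)

        queue, result = deque(changed), set(changed)
        while queue:
            item = queue.popleft()
            for dependent in used_by.get(item, []):
                if dependent not in result:
                    result.add(dependent)
                    queue.append(dependent)
        return result

    # Files touched by the changes at the chosen promotion status:
    print(sorted(impacted({"io.h"})))      # ['io.c', 'io.h', 'sched.c']
    print(sorted(impacted({"alloc.c"})))   # ['alloc.c']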
The focus on automation was highly successful.
In developing CM+ at Neuma, a couple of focus points were "near-zero administration" and "easy customization", to