Last month I spent a bit of time describing how a CM tool could support the creation and comparison of builds, the building of multiple variants from a single baseline, and so on. This month, I will focus on how the CM tool can simplify the build process, moving that process out of "Make" files while supporting the creation of reusable, layered software.
Make (as in Makefile) has been a workhorse on the build scene for as long as I can remember. OK, not quite. Before Make, we were forced either to "compile the world" every time, or to build a CM system which could tell us what to compile. When I first used Make, I was surprised at how easy it was to compile only what I needed. At the same time, I was amazed at how complex it could be. Since then, there have been some improvements on the build scene, Ant, Jam and OpenMake among them.
But some of the lessons from the earlier years need to be kept. Who has not been burnt by the "date" (timestamp) mechanism of Make, for example, after restoring an older version of a file which Make then considered up to date? Or by the complexity Make permits?
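To see why the timestamp test alone is not enough, here is a minimal sketch in Python of the rule Make applies: rebuild only when a prerequisite is newer than the target. The file times are invented; only the comparison matters.

```python
def needs_rebuild(target_mtime, prereq_mtimes):
    """Make's rule: rebuild the target only if some prerequisite
    is *newer* (has a later modification time) than the target."""
    return any(m > target_mtime for m in prereq_mtimes)

# Normal case: a source file edited after the last build -> rebuild.
print(needs_rebuild(100, [150]))  # True

# The classic burn: an *older* revision of a source file is restored
# from the CM repository. Its timestamp (50) predates the stale object
# file (100), so Make sees nothing to do, and the build quietly keeps
# the code compiled from the newer, now-unwanted revision.
print(needs_rebuild(100, [50]))   # False
```

A CM tool that tracks which revision each object was built from, rather than comparing dates, does not fall into this trap.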
The CM environment is the centre of the build universe. It understands (hopefully!) which files exist in that "universe". Many CM systems understand the various file classes: C, C++, Java, Frame, shell scripts, etc. It is one thing to deploy a build environment and use a make-like (or more modern) facility to tell you what needs compiling. But a CM tool should be able to tell you the impact of a change before you make it. It should permit you to explore multiple views and assess the impact of a change in each. It should allow you to take a set of changes (e.g. those checked in for the nightly compile) and tell you which files will need to be compiled, and which changes should be omitted if you wish to avoid a large compile (e.g. because a common header file was changed).
In the late 1970s, our mainframe CM system was able to tell us what needed to be compiled. Even though there were many thousands of files containing millions of lines of code, the answer took but a few seconds. The reason was that the CM system itself tracked the dependencies between files. We could ask: if I change "timer.h" and "files.h", what has to be recompiled? This was an important capability, not only to determine what the impact of a change might be, but to allow the designer to explore which other ways of implementing the change would have less impact.
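That query amounts to a transitive closure over the reverse of the recorded dependency graph. The sketch below is mine, not the mainframe system's; "timer.h" and "files.h" come from the example above, and the rest of the graph is invented for illustration.

```python
from collections import deque

# Hypothetical dependency records: file -> files it depends on.
DEPENDS_ON = {
    "timer.c": ["timer.h"],
    "files.c": ["files.h", "timer.h"],
    "sched.h": ["timer.h"],
    "sched.c": ["sched.h"],
    "disk.c":  ["files.h", "sched.h"],
}

def impact(changed):
    """All files that must be recompiled if the given files change:
    the transitive closure over the *reverse* dependency edges."""
    # Invert the graph: header -> files that depend on it.
    rdeps = {}
    for f, deps in DEPENDS_ON.items():
        for d in deps:
            rdeps.setdefault(d, []).append(f)
    todo, hit = deque(changed), set()
    while todo:
        f = todo.popleft()
        for user in rdeps.get(f, []):
            if user not in hit:
                hit.add(user)
                todo.append(user)
    return sorted(hit)

print(impact({"timer.h", "files.h"}))
# -> ['disk.c', 'files.c', 'sched.c', 'sched.h', 'timer.c']
```

Note that "sched.c" is caught even though it mentions neither changed header: it depends on "sched.h", which depends on "timer.h". That ripple effect is exactly what the designer wants to see before committing to a change.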
A key feature of this capability was its support for building re-usable layers of software. By ensuring that no file in one layer depended on any file in the layers above it, the code was kept well layered - the lower layers from one project could be used to start development on a parallel product, perhaps not even related to the first. The CM tool could enforce this layering. Without such enforcement, it was difficult to produce reusable code, as "include" (or "uses" or "depends on", etc.) relationships crossed layering boundaries freely, creating dependency loops. In fact, it's common practice these days to put conditional compilation instructions in place to break dependency loops (e.g. #ifndef ABC_HEADER / #define ABC_HEADER / ... / #endif). This seems like a fine solution, and it is, as long as you don't want to reuse only a portion (e.g. a layer) of the software.
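Once the CM tool knows which layer each file belongs to, the enforcement itself is a simple check: flag any "include" relationship that points upward. A minimal sketch, with an invented layer map and include list:

```python
# Hypothetical layer assignment: lower number = lower (more reusable) layer.
LAYER = {"os.h": 0, "os.c": 0, "io.h": 1, "io.c": 1, "ui.h": 2, "ui.c": 2}

# Hypothetical include relationships: file -> files it includes.
INCLUDES = {
    "ui.c": ["ui.h", "io.h"],  # fine: layer 2 using layers 2 and 1
    "io.c": ["io.h", "os.h"],  # fine: layer 1 using layers 1 and 0
    "os.c": ["os.h", "io.h"],  # violation: layer 0 reaching *up* to layer 1
}

def layering_violations(includes, layer):
    """Report every include that crosses a layer boundary upward."""
    return [(f, inc) for f, deps in includes.items()
            for inc in deps if layer[inc] > layer[f]]

print(layering_violations(INCLUDES, LAYER))  # -> [('os.c', 'io.h')]
```

Rejecting "os.c" at check-in time, before the upward dependency takes root, is what keeps layer 0 liftable into the next product.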
Some languages, such as Modula-2, Ada and Nortel's Protel, even had compiled headers which had to be compiled in a specific order to avoid compile failures. This was actually a good thing, as long as you had an environment which would tell you the correct compile order every time you had to do a compilation. These languages really did help to produce reusable software.
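Given the recorded dependencies, computing that compile order is a topological sort of the interface graph. A sketch in Python with invented module names (the standard library's graphlib, available from Python 3.9, does the sorting and raises an error on a dependency loop):

```python
from graphlib import TopologicalSorter

# Hypothetical compiled-interface units: module -> interfaces it imports.
# Each module's imports must be compiled before the module itself.
IMPORTS = {
    "Timer": [],
    "Files": ["Timer"],
    "Sched": ["Timer"],
    "Disk":  ["Files", "Sched"],
}

# TopologicalSorter takes a node -> predecessors mapping, so the
# import table maps onto it directly.
order = list(TopologicalSorter(IMPORTS).static_order())
print(order)  # e.g. ['Timer', 'Files', 'Sched', 'Disk']
```

An environment that recomputes this order on every build is exactly the one that made compiled headers pleasant rather than painful.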