Moving Dependency Tracking Into The CM Process

Last month I spent a bit of time describing how a CM tool could support the creation and comparison of builds, the building of multiple variants, and so on, all from a single baseline. This month, I will focus on how the CM tool can simplify the build process itself, moving the process out of "Make" files while supporting the creation of reusable, layered software.

Make (as in Makefile) has been a workhorse on the build scene for as long as I can remember. OK, not quite. Before Make, we were forced either to "compile the world" every time, or to build a CM system which could tell us what to compile. When I first used Make, I was surprised at how easy it was to compile only what I needed. At the same time, though, I was also amazed at how complex it could be. Since then, there have been improvements on the build scene, Ant, Jam, and OpenMake among many others.

But some of the lessons from the earlier years need to be kept. Who has not been burned by the date mechanism of Make, for example, after restoring an older version of a file when the change to it didn't suffice? The restored file's timestamp predates the object file's, so Make sees nothing to rebuild. Or who hasn't been burned by the complexity Make permits?
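
The pitfall is easy to demonstrate. Here is a minimal sketch, in Python, of Make-style timestamp comparison; the file names and timestamps are hypothetical, chosen only to illustrate the failure:

```python
# Sketch of Make's rebuild rule: rebuild a target only if a source it
# depends on is newer. Restoring an older revision defeats this check.
import os
import tempfile

def needs_rebuild(source, target):
    """Make's rule of thumb: rebuild if source is newer than target."""
    return (not os.path.exists(target)
            or os.path.getmtime(source) > os.path.getmtime(target))

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "timer.c")
    obj = os.path.join(d, "timer.o")

    open(src, "w").write("new revision")
    os.utime(src, (1000, 1000))      # source modified at t=1000
    open(obj, "w").write("object")
    os.utime(obj, (2000, 2000))      # compiled at t=2000

    # "Restore" an older revision: its timestamp predates the object file.
    open(src, "w").write("old revision")
    os.utime(src, (500, 500))

    print(needs_rebuild(src, obj))   # False: Make would skip the rebuild
```

A CM system that tracks which revision was actually compiled, rather than comparing dates, does not fall into this trap.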

Impact Analysis
The CM environment is the center of the build universe, and ideally it understands the files that exist in that universe. Many CM systems understand the various file classes: C, C++, Java, FrameMaker, shell scripts, etc. It is one thing to deploy a build environment and use a Make-like (or more modern) facility to tell you what needs compiling. But a CM tool should be able to tell you the impact of a change before you make it. It should permit you to explore multiple views and assess the impact of a change in each. It should allow you to take a set of changes (e.g., those checked in for the nightly compile) and tell you which files will need to be compiled, and which changes should be omitted if you wish to avoid a large compile (for instance, because a common header file was changed).

In the late 1970s, our mainframe CM system was able to tell us what needed to be compiled. Even though there were many thousands of files comprising millions of lines of code, the query took only a few seconds. The reason was that the CM system itself tracked the dependencies between files. We could ask: if I change "timer.h" and "files.h", what has to be recompiled? This was an important tool, not only for determining what the impact of a change might be, but for allowing the designer to explore which other ways of implementing the change would have less impact.
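
Once the CM tool records the dependencies, that query is just a reverse-dependency closure. The sketch below uses invented file names and dependency data to illustrate both uses described above: answering "what must be recompiled?" and spotting the change that would trigger a large compile:

```python
# Sketch of CM-side impact analysis over a recorded dependency graph.
# File names and dependencies are hypothetical.
from collections import defaultdict

# depends_on[f] = files that f includes/uses
depends_on = {
    "main.c":   {"timer.h", "files.h"},
    "timer.c":  {"timer.h"},
    "files.c":  {"files.h", "common.h"},
    "timer.h":  {"common.h"},
    "files.h":  {"common.h"},
    "common.h": set(),
}

# Invert the graph: who depends on me?
dependents = defaultdict(set)
for f, deps in depends_on.items():
    for d in deps:
        dependents[d].add(f)

def rebuild_set(changed):
    """Transitive closure of dependents: everything needing recompilation."""
    todo, seen = list(changed), set(changed)
    while todo:
        for user in dependents[todo.pop()]:
            if user not in seen:
                seen.add(user)
                todo.append(user)
    return seen

# "If I change timer.h and files.h, what has to be recompiled?"
print(sorted(rebuild_set(["timer.h", "files.h"])))

# Which change in tonight's changeset would force the largest compile?
for change in ["common.h", "timer.c"]:
    print(change, "affects", len(rebuild_set([change])), "files")
```

Here the tool would report that deferring the `common.h` change (which touches every file in this toy graph) keeps the nightly compile small, exactly the kind of judgment call the designer can make before checking in.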

Software Layering
A key feature of this capability was its support for building reusable layers of software. By ensuring that no file in one layer depended on any file in the layers above it, the code was kept well layered: the lower layers from one project could be used to start development on a parallel product, perhaps not even related to the first. The CM tool could provide this layering enforcement. Without the enforcement of layering, it was difficult to produce reusable code, as "include" (or "uses" or "depends on", etc.) relationships crossed layer boundaries freely, creating dependency loops. In fact, it's a common practice these days to put conditional compilation instructions in
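
Once each file is assigned to a layer, the enforcement itself is a simple check the CM tool can run at check-in time. The layer assignments and dependencies below are hypothetical:

```python
# Sketch of layering enforcement: flag any dependency that points upward,
# i.e., from a lower layer to a higher one. Data is hypothetical.
layer = {
    "app.c": 3, "app.h": 3,   # application layer
    "ui.c":  2, "ui.h":  2,   # user-interface layer
    "os.c":  1, "os.h":  1,   # OS-abstraction layer
}
depends_on = {
    "app.c": {"app.h", "ui.h", "os.h"},  # fine: only same or lower layers
    "ui.c":  {"ui.h", "os.h"},           # fine
    "os.c":  {"os.h", "ui.h"},           # violation: layer 1 uses layer 2
}

def layering_violations(layer, depends_on):
    """Return (file, dependency) pairs where a file reaches a higher layer."""
    return [(f, d) for f, deps in depends_on.items()
            for d in deps if layer[d] > layer[f]]

print(layering_violations(layer, depends_on))  # [('os.c', 'ui.h')]
```

Rejecting such a check-in (or at least reporting it) is what keeps the lower layers free of upward dependencies, and therefore reusable on the next product.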


AgileConnection is one of the growing communities of the TechWell network.

Featuring fresh, insightful stories, is the place to go for what is happening in software development and delivery.  Join the conversation now!