code modules because the dependencies could only go in one direction, from higher layers to lower layers.
Impact with Dynamic Views
I'm not suggesting that we go back to a world of compiled headers (although some IDEs support this feature for performance reasons). But I am suggesting that the CM system needs to track arbitrary dependencies among its configuration items (CIs). It is the most logical place to track these dependencies, because a user can have an arbitrary view of the CIs, and without actually deploying them it is not easy to identify the dependencies for that view. Some CM systems (e.g., ClearCase) let you avoid deploying the CIs by making the OS view of the file system reflect the actual CM user's view. However, in a system of tens of thousands of CIs, having to traverse the file system to determine dependencies is a time-consuming, resource-intensive task. If you've ever tried to identify who broke the layering rules, you'll know that a database, even if it's only an intermediate one, is necessary: it lets you traverse the dependency relationships repeatedly to help determine the culprit.
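To make the point concrete, here is a minimal sketch of the idea, with dependencies held in a queryable store (a plain dict standing in for the CM database) rather than rediscovered by walking the file system. The file names and the layer assignments are hypothetical, invented purely for illustration.

```python
# Layer number per CI: lower number = lower layer.
# (Hypothetical CIs and layers, not from any real system.)
layers = {"ui.c": 3, "logic.c": 2, "db.c": 1, "util.h": 1}

# Recorded dependencies: CI -> set of CIs it depends on.
deps = {
    "ui.c": {"logic.c", "util.h"},
    "logic.c": {"db.c", "util.h"},
    "db.c": {"util.h", "ui.c"},   # an illegal upward dependency
}

def layering_violations(deps, layers):
    """Return (ci, dep) pairs where a CI depends on a higher layer."""
    return [(ci, d) for ci, ds in deps.items() for d in ds
            if layers[d] > layers[ci]]

print(layering_violations(deps, layers))  # -> [('db.c', 'ui.c')]
```

Because the relationships sit in a database, the "who broke the layering?" query is a repeatable lookup rather than a fresh traversal of tens of thousands of files.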
Putting the Process Where it Belongs
Perhaps a more important point - a CM system which tracks dependencies can readily tell you what needs to be re-compiled, without having to rely on date stamps. Only a straightforward process is required: keep track of the process used to generate your object environment. The CM system can then use changes to your source (and meta-source) environment to determine what needs to be compiled.
How about a concrete example? Let's say I'm supporting a development team's object environment and I've just compiled the entire object code for the first time. From this point forward, I can do incremental compiles. Suppose the headers "one.h" and "two.h" were the only header files modified by the changes submitted for the next nightly compile. Then I need only compile the changed files plus any files affected by "one.h" and "two.h". The CM system can tell me exactly which files need to be compiled; I don't have to rely on date stamps. It tells me which files to replace in the compile environment, and also which files to re-compile. In fact, it can generate a simple "compile script" which has no dependency information in it. My compile script is now basically a list of items to be recompiled, possibly with the set of compile options for each item.
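The computation behind this example can be sketched as a transitive closure over reverse dependencies. This is only an illustration of the technique, with a made-up dependency map; a real CM system would read these relationships from its own database.

```python
# CI -> the CIs it depends on (e.g. the headers a .c file includes).
deps = {
    "a.c": {"one.h"},
    "b.c": {"two.h", "util.h"},
    "c.c": {"util.h"},
    "one.h": set(), "two.h": set(), "util.h": set(),
}

def affected(changed, deps):
    """All CIs to recompile: the changed CIs plus everything that
    (transitively) depends on them. No date stamps involved."""
    rdeps = {}                          # invert the dependency map
    for ci, ds in deps.items():
        for d in ds:
            rdeps.setdefault(d, set()).add(ci)
    result, work = set(changed), list(changed)
    while work:                         # breadth-first closure
        for user in rdeps.get(work.pop(), ()):
            if user not in result:
                result.add(user)
                work.append(user)
    return result

print(sorted(affected({"one.h", "two.h"}, deps)))
# -> ['a.c', 'b.c', 'one.h', 'two.h']
```

Note that "c.c" is untouched: nothing it depends on changed, so it never enters the recompile set.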
There's great benefit here in that the build process moves out of the Make file(s) and into the realm of the build process support of the CM tool. (Ever try to infer the build process from a [set of] Make files?) The process can be defined once and applied to any number of make files, and the dependency information never needs to be dumped to an intermediate file for interpretation.
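To show what falls out of this separation, here is a hedged sketch of generating the flat "compile script" mentioned above: the dependency analysis stays in the CM tool, so the script it emits is just a list of compile commands with no dependency rules in it. The file names, options, and the `cc` command line are illustrative assumptions, not any particular tool's output.

```python
def compile_script(items, options):
    """Emit one compile command per item - a flat script, no rules."""
    return "\n".join(f"cc -c {options[f]} {f}" for f in items)

# The recompile list and per-item options would come from the CM system.
to_compile = ["a.c", "b.c"]
options = {"a.c": "-O2", "b.c": "-O2 -g"}

print(compile_script(to_compile, options))
# -> cc -c -O2 a.c
#    cc -c -O2 -g b.c
```

Contrast this with a Makefile, where the dependency graph and the build commands are tangled together and must be re-derived by every reader (and every `make` run).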
Full Compiles versus Incremental Compiles
Most small and even medium-sized projects don't have to worry about this problem. They just recompile the world every night. What's a few hundred, or even a few thousand, files? And I agree with that strategy - simple is best. If your designers can use the same strategy on their own desktops without a significant delay - compile the world.
It's when you get to large projects, when your developer compiles start taking more than a few minutes, or when you find yourselves having to generate dozens of environments nightly, that a reliable, incremental compile capability is needed. This is where the CM system should be giving you support.