not required to fix urgent bugs". In the context of the process, this is reasonable and even good. But overall, code ends up checked out for longer periods. When code is checked out longer than necessary, more parallel changes are required. When developers do not have full control over when their changes are checked in, still more parallel changes are required. Thus begins a complex maze of branching and merging, and a Configuration Management and labeling strategy starts to grow to manage those branches and merges.
Another implication of the above is that the checked-in make files are the ones used for the build operation. That's not bad in itself, but it does force a serialization of changes to the make files, and implicitly of the corresponding code changes, that is less than optimal. In other words, structural changes to the product are tied to code changes in the make file(s). One frequent response is to segment the make file into many little pieces. Anyone who has worked with make files knows that they can be complex beasts. Now that complexity, along with all of the maintenance it entails, is distributed and/or replicated across many files.
You could probably identify other repercussions of the above nightly build policy. For example, what happens when a build breaks? What happens if the server is down prior to the build and required changes don't make it in? How does a change control board figure into this? What about parallel stream development so that work on the next release can be started? These questions all hinge on the CM Process and Tools used to implement your Change Management Process.
CM Requirements for Build Automation
To keep Change Management working properly we need to take a different perspective. To run an automated build and an effective integration shop, a number of requirements must be met by the CM tool and process. Let's look at some of these:
(1) Change Packages (aka: Updates):
Ability to treat code changes as logical changes, rather than as file changes. Let's face it: a designer fixes a problem or implements a feature. The files being changed, together with the problem(s) or feature(s) being addressed, form a logical unit of change. The change must move through the system as a whole, from check-in through all levels of promotion. File-based CM is an old concept, and if your tools push you this way, it's time to look around.
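The idea of a change package can be sketched in code. This is a minimal, hypothetical model (the class and state names are illustrative, not any particular CM tool's API): a package binds a problem or feature identifier to every file revision it touches, and promotion acts on the package as a whole, never on individual files.

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    CHECKED_IN = 1   # in the repository, not yet offered to a build
    READY = 2        # developer has marked it fit for integration
    INTEGRATED = 3   # picked up by an integration build

@dataclass
class ChangePackage:
    """A logical unit of change: one problem/feature plus all files it touches."""
    change_id: str                                  # e.g. a problem-report number
    files: list = field(default_factory=list)       # (path, revision) pairs
    state: State = State.CHECKED_IN

    def promote(self, new_state: State) -> None:
        # The package is the unit of promotion: all its files move together,
        # one level at a time.
        if new_state.value != self.state.value + 1:
            raise ValueError("packages promote one level at a time")
        self.state = new_state

# A fix that spans two files still moves through the system as one change.
cp = ChangePackage("PR-1042",
                   files=[("src/parser.c", "1.7"), ("src/parser.h", "1.3")])
cp.promote(State.READY)
```

The point of the sketch is that nothing in the model lets one file of "PR-1042" advance while the other stays behind.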
Ability to snapshot a build record and to easily reproduce the build from the snapshot. A build record must be something that drives the build process, rather than merely a record that a build was performed. Reproducibility is the goal. The snapshot can take the form of an enumeration of file revisions (including tool/process components), as is done for a typical baseline, or perhaps of an existing baseline plus a set of change packages relative to that baseline.
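The second form of snapshot can be sketched as follows. All names and revision numbers here are hypothetical; the sketch only shows the mechanism: a baseline enumerates file revisions, each change package lists the revisions it introduces, and applying the packages to the baseline yields the exact revision set that drove the build. Re-running the same resolution reproduces the build.

```python
# A baseline: an enumeration of file revisions (tool/process components
# would be listed the same way).
baseline = {"src/main.c": "1.4", "src/util.c": "2.1", "Makefile": "1.2"}

# Change packages relative to that baseline; each lists the revisions
# it introduces (new files simply appear with their first revision).
change_packages = [
    {"src/main.c": "1.5"},                       # hypothetical fix
    {"src/util.c": "2.2", "src/log.c": "1.1"},   # hypothetical feature
]

def resolve(baseline, packages):
    """Apply change packages on top of a baseline, yielding the exact
    file/revision set for the build. Later packages win on conflict."""
    view = dict(baseline)
    for pkg in packages:
        view.update(pkg)
    return view

build_record = resolve(baseline, change_packages)
# build_record now enumerates every file revision that went into the build,
# so storing (baseline id + package list) is enough to rebuild identically.
```

Either representation works; the baseline-plus-packages form is compact when builds differ from a baseline by only a handful of changes.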
Ability to check in code without forcing it into the next build(s). Completed code that's out on a developer's disk is a liability: it's prone to disk crashes, it's prone to becoming out of date, and it's likely to go through fewer verification cycles. Completed code belongs in a repository, where others can use it, whether simply to review it or as a basis for future changes, without the need for a parallel change branch. A developer should be able to put a source change into the repository even before dependent changes are checked in. The developer then pushes the change to the "ready" state after confirming it's ready to go into an integration build.
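The separation described above can be sketched as a simple selection step. The repository contents and state names here are hypothetical; the sketch shows only the principle that checking a change in and offering it to a build are two distinct acts, with the build selecting nothing but "ready" changes.

```python
# Hypothetical repository contents: changes are safely checked in,
# but only those the developer has marked "ready" may enter a build.
repository = [
    {"id": "PR-1042", "state": "ready"},
    {"id": "PR-1043", "state": "checked_in"},  # in the repo, not built yet
    {"id": "FEAT-7",  "state": "ready"},
]

def select_for_build(repo):
    """The integration build pulls only changes promoted to 'ready';
    checked-in-but-not-ready changes stay safely in the repository."""
    return [change["id"] for change in repo if change["state"] == "ready"]

print(select_for_build(repository))  # ['PR-1042', 'FEAT-7']
```

PR-1043 is protected from disk crashes and visible to other developers, yet it exerts no pressure on the next build until its owner promotes it.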
Ability to differentiate between check-in authorization and build acceptance [similar to (1)]. Often, especially at the beginning of a release development effort, any