ways. Talking about doing CM on the contents or structure of a relational database reveals some of the obvious limitations of the file-version approach. Trying to treat a database as one or more giant files, or trying to manage a database as a composite image built from input sources (SQL files), are two common responses to the database CM challenge. Neither works.
One possibly crucial difference is in the basic representation. File-versions are worked on until they are considered acceptable in toto. A developer changes the entire file, and generally has some desired end configuration in mind. In contrast, a database is changed by executing statements. The changes that a developer makes are made not by modifying the entire database, but by delivering a set of executable statements that perform the desired change. Thus, while a CM tool (or a developer) may use a differencing tool to compute the delta between the old and new versions of a file, in a database system the executable statements delivered are the delta between the old and new versions.
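The contrast can be sketched concretely. In the file-version model the delta is computed after the fact by differencing two complete versions; in the database model the delta is the statements themselves. A minimal illustration (the table and column names are invented for the example):

```python
import difflib
import sqlite3

# File-version model: the developer edits the whole file, and a
# differencing tool computes the delta between the two versions.
old = ["name TEXT\n", "email TEXT\n"]
new = ["name TEXT\n", "email TEXT\n", "phone TEXT\n"]
file_delta = list(difflib.unified_diff(old, new))

# Database model: the delta IS the set of executable statements the
# developer delivers; there is no "whole new version" to diff.
db_delta = [
    "CREATE TABLE contacts (name TEXT, email TEXT)",
    "ALTER TABLE contacts ADD COLUMN phone TEXT",
]

conn = sqlite3.connect(":memory:")
for stmt in db_delta:
    conn.execute(stmt)

# The new "version" of the database is whatever state the statements produce.
cols = [row[1] for row in conn.execute("PRAGMA table_info(contacts)")]
print(cols)  # ['name', 'email', 'phone']
```

Note that `file_delta` is derived output, while `db_delta` is the artifact the developer actually authored and delivered.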
Actually using the versions of a file generally requires performing some kind of update to a work area: a location in a computer system where the files being used are projected so that the development team can edit, compile, or test them. The act of updating a work area to reflect a new configuration of files and versions is remarkably similar to updating a database. In fact, most CM tools emit a running commentary about the actions being taken that reads much like SQL: "creating directory X; updating file Y with version V; deleting file Z." By expressing CM in terms of the operations on work areas, LDM takes advantage of these similarities with databases to support both work-area based development (files) and database development.
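The parallel between work-area updates and database updates can be made explicit. The sketch below (a hypothetical helper, not any particular CM tool's implementation) brings a work area from one configuration of file versions to another, logging its actions in the same create/update/delete vocabulary:

```python
import tempfile
from pathlib import Path

# Hypothetical sketch: reconcile a work area with a new configuration,
# emitting the running commentary that most CM tools produce.
def update_work_area(root: Path, old_cfg: dict, new_cfg: dict) -> list:
    log = []
    for name in old_cfg.keys() - new_cfg.keys():
        (root / name).unlink()                      # like SQL DELETE
        log.append(f"deleting file {name}")
    for name, version in new_cfg.items():
        if old_cfg.get(name) != version:
            (root / name).write_text(version)       # like INSERT/UPDATE
            verb = "creating" if name not in old_cfg else "updating"
            log.append(f"{verb} file {name} with version {version}")
    return log

# Populate a work area at configuration {a.c: v1, b.c: v1} ...
root = Path(tempfile.mkdtemp())
old_cfg = {"a.c": "v1", "b.c": "v1"}
for name, version in old_cfg.items():
    (root / name).write_text(version)

# ... then update it to {a.c: v2, c.c: v1}.
log = update_work_area(root, old_cfg, {"a.c": "v2", "c.c": "v1"})
print(sorted(log))
```

Each log line corresponds to a row-level operation on the "table" of files in the work area, which is exactly the similarity LDM exploits.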
A non-obvious shortcoming of the version-centric approach is the idea of encoding quality (or 'status') in the file-version's life cycle. If a file has discrete versions, then those versions can be configured or selected independently. It should be possible to determine the acceptability of a file-version either alone or in the context of a set of associated file-versions. It follows, then, that as a file-version passes a series of tests, its quality or status is taken to be progressively higher. This ladder of perceived quality is encoded into a life cycle that is applied to each file-version.
The idea of a discretely tunable configuration is insupportable in a general-purpose database system. The order-dependent nature of database changes makes discretely tuning database content or structure impossible except in limited, rare circumstances. In fact, the notion of independently selectable versions of files is pretty thoroughly discredited in "standard" CM circles, too. Nearly all vendors, and a healthy cross-section of open-source tools, offer some mechanism for bundling sets of related file versions. Whether they are called 'tasks' or 'change packages,' the idea is the same: a group of file changes belongs together. Most modern software systems require this kind of treatment because an individual file-version cannot be added or removed independently; the build, or an immediate functional test, will fail unless the related changes to other files in the system are added or removed along with it.
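The bundling idea reduces to this: a configuration selects change packages, never individual file-versions. A minimal sketch, with invented package and file names:

```python
# Sketch: change packages bundle related file-versions so that a
# configuration picks them up all-or-nothing (names are illustrative).
change_packages = {
    "CP-101": {"parser.c": "v7", "parser.h": "v3"},  # header/source must move together
    "CP-102": {"main.c": "v4"},
}

def configuration(selected: list) -> dict:
    """Resolve a list of change packages into a file-version map."""
    files = {}
    for cp in selected:
        # An entire package is included or excluded; there is no way
        # to take parser.c v7 without also taking parser.h v3.
        files.update(change_packages[cp])
    return files

print(configuration(["CP-101"]))
```

Selecting `parser.c` at v7 without `parser.h` at v3 is simply not expressible in this model, which is the point: the unit of configuration is the change, not the file-version.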
LDM is rooted in database-heavy development. Changes to the database are incremental, and omitting or reordering the changes is frequently impossible. Because of this, LDM focuses on the reality of delivering compatible changes, in order, to a finite set of target locations: database servers. This makes the LDM technique both a server CM technique and a
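Delivering compatible changes, in order, to a target server can be sketched as a change log that records which changes have already been applied, so that delivery is sequential and idempotent. The schema and change identifiers below are assumptions for illustration, not a prescribed LDM format:

```python
import sqlite3

# Ordered, incremental changes destined for a target database server.
# Order matters: change 002 cannot be applied before 001.
changes = [
    ("001", "CREATE TABLE orders (id INTEGER PRIMARY KEY)"),
    ("002", "ALTER TABLE orders ADD COLUMN total REAL"),
]

def deliver(conn: sqlite3.Connection) -> list:
    """Apply any not-yet-applied changes, in sequence, recording each one."""
    conn.execute("CREATE TABLE IF NOT EXISTS change_log (id TEXT PRIMARY KEY)")
    done = {row[0] for row in conn.execute("SELECT id FROM change_log")}
    applied = []
    for cid, stmt in changes:
        if cid not in done:
            conn.execute(stmt)
            conn.execute("INSERT INTO change_log VALUES (?)", (cid,))
            applied.append(cid)
    return applied

server = sqlite3.connect(":memory:")
print(deliver(server))  # ['001', '002'] — first delivery applies both changes
print(deliver(server))  # [] — re-delivery is a no-op; the server is up to date
```

Because the target records its own position in the change sequence, the same delivery can be pointed at any of the finite set of servers and each one converges on the same state.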