objects visible to the host operating system as files.
Other fourth generation advances will include:
- Promotable Directory Structure Changes, where the structural changes are easily queried and applied to your view.
- Dynamic Variant Capabilities, where repeatable variants may be applied across multiple streams and/or builds.
- Advanced Product/Sub-product Management, so that a maze of shared sub-products may be readily managed across multiple products.
- Automatic Updates, so that the CM tool can inspect the workspace and create (and optionally check in) a change package complete with structural and code changes.
- Data Revisioning, so that data other than files may be revision controlled in an efficient manner.
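The last item, data revisioning, amounts to applying check-in/check-out semantics to records rather than files. A minimal sketch (hypothetical, not any vendor's API) of how a record, such as a requirement, might keep an append-only revision history:

```python
# Hypothetical sketch of revision-controlling data records rather than
# files: each logical record keeps an append-only list of immutable
# revisions, so any past state can be fetched efficiently.

class VersionedRecord:
    def __init__(self):
        self._revisions = []          # append-only revision history

    def check_in(self, data):
        """Store a new immutable revision; return its revision number."""
        self._revisions.append(dict(data))
        return len(self._revisions)   # revisions numbered from 1

    def fetch(self, rev=None):
        """Fetch a specific revision, or the latest if rev is None."""
        idx = (rev - 1) if rev else -1
        return dict(self._revisions[idx])

# Usage: revision-control a requirement record, not a file.
req = VersionedRecord()
req.check_in({"title": "Login screen", "priority": "low"})
req.check_in({"title": "Login screen", "priority": "high"})
```

Because revisions are immutable once checked in, the history itself never needs rewriting, which is what makes non-file data revisioning efficient.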
Reliability and Availability
Having a central CM repository offline becomes very costly when dozens or hundreds of users are sitting around idle. Unlike first generation CM systems, second generation systems tend to have some form of centralized repository, which holds key meta-data for the CM function along with the source code. Take your central repository out of action, or let its performance degrade, and the feedback, and the cost, will be plentiful.
Reliability and availability of the CM tool and repository are of critical importance. Even having to restore from disk backup can be costly. So in third generation systems you'll see a lot more redundancy. Repository or disk mirroring becomes a more important part of your strategy.
As well, live backups (i.e. no down time) are necessary, and larger shops will need improved backup capabilities. Neuma's CM+ has an interesting feature that allows you to migrate most of the repository to a logically read-only medium, which only has to be backed up once, ever. This reduces the size of nightly backups while new data continues to be written to the normal read/write repository store.
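The read-only migration idea can be sketched as a repository split into an immutable archive segment and an active read/write segment. This is a hypothetical illustration of the concept described above, not CM+'s actual implementation:

```python
# Sketch, assuming a split of the repository into an immutable
# "archive" segment (backed up once, ever) and an active read/write
# segment (backed up nightly).  Names are illustrative only.

class Repository:
    def __init__(self):
        self.archive = {}      # logically read-only: back up one time
        self.active = {}       # read/write: part of every nightly backup

    def write(self, key, value):
        self.active[key] = value          # new data goes to active store

    def migrate_to_archive(self):
        """Freeze current active data; nightly backups shrink accordingly."""
        self.archive.update(self.active)
        self.active.clear()

    def nightly_backup(self):
        """Only the (now smaller) active segment needs copying."""
        return dict(self.active)
```

After a migration, the nightly backup covers only data written since the last migration, which is why the scheme keeps backup windows short as the repository grows.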
Initial attempts at disaster recovery will make use of replication strategies, especially where this doubles as the strategy for handling development at multiple sites. And this is OK, as long as all of the ALM data is being recovered.
Third generation administration will also aid in the traceability task, from a transactional perspective. This will help to satisfy SOX requirements, addressing the traceability from a repository perspective. Development traceability will remain a CM function.
What lies beyond the 3rd generation? Well, we'll start to see hot-standby disaster recovery capabilities, so that clients can switch from one central repository to another without missing a beat. We'll also see CM repository checkpointing and recovery capabilities without loss of information. Even in a multiply replicated environment this will add value: the repository can be rolled back to a given point, the offending transactions edited out, and the remainder rolled forward automatically, so that bad actions, whether accidental or intentional, will no longer pose a significant threat.
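The checkpoint-and-replay recovery just described can be sketched as a simple replay loop over a transaction log. This is a hypothetical design illustration, not any specific product's implementation:

```python
# Sketch of checkpoint-and-replay recovery: roll back to a checkpoint,
# edit out the offending transactions, and replay the rest.
# All names here are illustrative assumptions.

def recover(checkpoint_state, txn_log, is_offending, apply_txn):
    """Rebuild state from a checkpoint, skipping offending transactions.

    checkpoint_state -- repository state at the rollback point
    txn_log          -- transactions recorded since the checkpoint
    is_offending     -- predicate marking transactions to discard
    apply_txn        -- function applying one transaction to the state
    """
    state = dict(checkpoint_state)
    for txn in txn_log:
        if is_offending(txn):
            continue                  # edit out the bad transaction
        apply_txn(state, txn)         # roll forward the good ones
    return state

# Usage: replay everything except a mistaken delete.
log = [("set", "x", 1), ("del", "x", None), ("set", "y", 2)]

def apply_txn(state, txn):
    op, key, val = txn
    if op == "set":
        state[key] = val
    else:
        state.pop(key, None)

state = recover({}, log, lambda t: t[0] == "del", apply_txn)
```

The key property is that the log, not the live state, is the source of truth, so an edited history can be replayed deterministically after the offending entries are removed.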
Security levels will be improved. It will be easier to specify roles, not only by user, but also by associating teams with specific products to which they have access. This will simplify the complex world of permissions and will enable ITAR segregation requirements, such as those in the Joint Strike Fighter project, to be more easily met. Encryption will be built into CM tools to more easily protect data without the complexities inherent when protected data must be moved off to a separate repository.
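The team-to-product permission model above composes two mappings: which team a user belongs to, and which products that team may access. A minimal sketch of that composition (a hypothetical model, not a particular tool's API):

```python
# Hypothetical sketch of role assignment by team and product: an
# access check composes "which team is the user on" with "which
# products can that team see", the kind of segregation ITAR requires.

class AccessControl:
    def __init__(self):
        self.team_of = {}        # user -> team
        self.products_of = {}    # team -> set of accessible products

    def assign(self, user, team):
        self.team_of[user] = team

    def grant(self, team, product):
        self.products_of.setdefault(team, set()).add(product)

    def can_access(self, user, product):
        """Deny by default; allow only via the user's team grants."""
        team = self.team_of.get(user)
        return product in self.products_of.get(team, set())

# Usage: grants go to teams, never to individual users.
ac = AccessControl()
ac.assign("ann", "avionics")
ac.grant("avionics", "flight-control")
```

Because permissions attach to teams rather than individuals, moving a user between teams updates all of their product access in one step, which is what simplifies the "complex world of permissions".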
Long projects, such as the JSF, will force CM vendors to clearly demonstrate the longevity of their solutions by demonstrating those properties of their tools across the background of changing operating systems, hardware platforms and data architectural capabilities. A major project will want to know that the tool will be able to support them for 20 to 30