of software corrupts a version file? Do you have to duplicate the entire disk to protect against disk failure? And perhaps a bigger question: how many people are affected by a server outage, and for how long? What about outages for upgrades - does your information disappear for a few hours or days, or do users not even notice a blip in most cases?
What about all the other data that's on user disks, perhaps on laptops? Are there strategies to easily back up data in workspaces or elsewhere? Staging is one popular way of doing so, but does this staging clutter up the CM database with a lot of irrelevant data in between good data points, or is it done more effectively?
There's a lot to be covered here, and some of the best tools can have some of the worst levels of exposure in this area. Again, familiarize yourself with vendor technology and use this information as a lever against your current vendor to get your requirements met.
9. Use Multiple Site Solutions That Span the Entire ALM Spectrum
Multiple site solutions facilitate global operation. Older generation systems require partitioning and re-synchronizing of data, a painful and potentially administration-intensive operation. Modern systems take a more automated approach, but sometimes at the expense of flexibility. You'll have to look at some of my previous articles for a more detailed account of multiple site solutions for global operations. But one key element I'd like to highlight here is that a multiple site solution is not much of a solution if it doesn't cover the entire ALM spectrum. If it's a version control solution but leaves the rest of the data out of the picture, you've got a problem.
Or perhaps you have different tools with different multiple site capabilities for the different pieces of data being controlled. You might be able to get this to work successfully, but more than likely, there's some consistency exposure, not to mention an extra level of administration to coordinate the multiple solutions. Whatever your solution, make sure your global development is covered by a consistent multiple site solution across all parts of your ALM function.
10. Unit Testing and Peer Review of Changes
Finally, many people consider design and development practices separate from CM. In many areas, this is not the case. One critical area is the quality of what goes into the CM repository. If garbage is going into the CM repository, you'll be dealing with a lot of rollbacks and the other CM administration that goes with them. Your development and CM processes must help both your product and its quality move forward.
If you're not doing unit testing (where the "unit" is a change package, a.k.a. an update), you'll notice your quality dipping as changes are made. Unit testing must be done by the developer before checking in software, or at a minimum, before checked-in software is marked ready for build integration. If you think your organization or project has a good reason to avoid this, think again. If there are roadblocks, remove them. The cost of not doing so is simply too high.
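One common way to enforce this rule is to gate the check-in itself: the CM tool (or a pre-commit hook) runs the change package's unit tests and rejects the check-in if they fail. Here is a minimal sketch of such a gate; the `gate_checkin` function and the `test_command` it runs are assumptions for illustration, not a feature of any particular CM tool, so substitute whatever invokes your own test suite.

```python
import subprocess
import sys

def gate_checkin(test_command):
    """Run the change package's unit tests; allow the check-in only if they pass.

    `test_command` is a hypothetical stand-in for whatever invokes your
    test suite (a make target, a test runner script, etc.).
    """
    result = subprocess.run(test_command, shell=True)
    if result.returncode != 0:
        print("Unit tests failed; check-in rejected.", file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    # "true" always exits 0, standing in here for a passing test suite.
    ok = gate_checkin("true")
    sys.exit(0 if ok else 1)
```

The same logic could just as easily be marked "ready for build integration" instead of rejected outright, matching the weaker variant of the policy described above.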
Along with unit testing, peer reviews of code are critical. A well-groomed peer review process will be more effective than testing at discovering product quality problems, and at a much lower cost. Peer reviews should not just cover the code changes, but should include a demonstration of the problem fix or new functionality, and should also review the unit testing for completeness and success.
CM/ALM tools come into play here by providing the means