weeks or months or even years. If you think that's impossible, all it means is that it's time for you to review CM technology again.
Branching strategy is key, it's true. But the minimal strategy that meets the requirements must take precedence over the minimal-CM-tool-functionality mentality. CM tools need to deal with promotion without forcing every file to branch. The same goes for change packaging, for short-term parallel changes, for change-owner identification, and for baseline and build identification. Design teams need to use the tools effectively, handling variants as a software engineering capability first and, only when that's in place, using CM to support it. Undisciplined branching will eventually contribute to the downfall of a project.
A Monolithic Monster
I remember the days of PCTE (portable common tool environment), of "backplane" technology, and of other efforts to get all of the best-of-breed management tools to work together in a cooperative manner - requirements, test management, change management, version control, document management, build and release management. Contrast that with the companies that are trying to build all of those things into one giant monolithic monster. No way - I'd rather have the best of breed, all working together. Twenty-something years later, we're still working toward such a solution.
If I'm using tool A, B, and C, and you give me a way of helping them work together so that I have better overall management and traceability, that's great. That's fantastic. I don't have to create the glue myself and, even better, I don't have to maintain it. In these days of open source, especially, someone else is doing that for me.
Mistake #3: Common backplane schemes reduce effort and costs, but they don't come close to well-designed monolithic tools
I've been doing software development since the late '60s, and there's one thing I've noticed - it's generally better, less costly, and faster to build your own mousetrap as an add-on than to try to integrate with an existing one. There are exceptions, of course. I wouldn't want to build an OS into my CM tool and then try to sell the CM tool with the OS. But a CM tool is layered on top of an OS. A problem tracking tool is not. Nor is a requirements tool, nor a release management tool. They're more like siblings. And I wouldn't say that a problem tracking company could relatively easily build in a CM component.
But CM is the heart of the ALM cycle - it deals with configurations and changes to the configurations. That's a complex process. It requires data repositories, process engines, advanced user interfaces, etc. If you use these "engines" as a common platform for all of the N ALM functions, you simplify administration by a factor of N; you don't have to figure out how N multiple-site solutions will work together; you don't have to build message structures and concepts for interchanging data among N different tools; you don't have to build glue for roughly N*N/2 tool integrations; you don't have to adjust N-1 pieces of glue every time one of the N pieces is upgraded. In fact, you probably won't need training on N tools. You can probably customize all N tools in much the same way.
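The arithmetic behind that claim is just pairwise counting: N tools have N(N-1)/2 distinct pairs (close to N*N/2 for larger N), and upgrading any one tool touches its glue to each of the other N-1. A minimal sketch, with illustrative values of N I've chosen, not figures from any particular tool suite:

```python
# Back-of-the-envelope glue counts for N separately integrated ALM tools.

def pairwise_glue(n: int) -> int:
    """Point-to-point integrations: one piece of glue per pair of tools."""
    return n * (n - 1) // 2

def glue_touched_by_upgrade(n: int) -> int:
    """Upgrading one tool disturbs its glue to each of the other tools."""
    return n - 1

for n in (4, 6, 8):
    print(f"N={n}: {pairwise_glue(n)} integrations, "
          f"{glue_touched_by_upgrade(n)} affected per upgrade")
```

With a common platform, both of those numbers effectively collapse to zero: each function plugs into one shared engine instead of into every sibling.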
With a monolithic system, if you're building an ALM tool, you can put most of your energy into the common engines, which will benefit all functions, and with the resultant resource savings, you can spend more time on individual functions to tailor them more specifically to the requirements of