The editors can be smarter if they're tied to the language. And so forth.
The key is that someone has to specify a framework to orchestrate the effort, and all of the tools have to sing to the same tune. A single vendor will generally make sure that
this happens, but not always. For example, when a vendor creates a solution by acquiring component tools, there will be a lot of glue and rework before they can sing together. The game plan for having a common framework is often abandoned in favor of gluing together tools that already exist but don't share the same architectural base. This glue integration approach is usually a mistake.
When can an integration approach work? It can work for very well understood applications. Compilation is one: computer languages are compiled into one of many object formats, whether through a GNU effort or a cross-platform single-vendor effort. This is a well understood application. The expectations for integration are well known, and so all tools are designed to the expected standards. New vendors will produce point tools that fit into this framework. Glue will only be needed for perimeter requirements, such as the conventions used in the commands/user interface.
Application Lifecycle Management is not well understood, by comparison. And even if it were, there are quite a number of factors that need to be addressed. For one, there is a lot of management data that needs to persist beyond a simple build operation: problem reports, configuration lineups, file history, and so on. There are management processes for each type of object, and workflow spanning types of objects.
Traceability and cross-life-cycle reporting require a common repository to keep complexity down. A common repository makes it easy to specify traceability links and to query them. Ideally, your data query language goes beyond relational so that you can model the real world directly, without first going through a relational data mapping.
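To make that concrete, here is a minimal sketch (in Python, with hypothetical class and field names, not any vendor's schema) of what modeling the real world directly looks like: traceability links are plain object references that a query can traverse, rather than foreign keys that must be reassembled through joins.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle objects; the names are illustrative only.

@dataclass
class FileRevision:
    path: str
    revision: int

@dataclass
class Change:
    change_id: str
    description: str
    revisions: list[FileRevision] = field(default_factory=list)

@dataclass
class ProblemReport:
    pr_id: str
    title: str
    fixed_by: list[Change] = field(default_factory=list)  # traceability link

def files_touched_by_problem(pr: ProblemReport) -> set[str]:
    """Cross-life-cycle query: walk the links directly, no join tables."""
    return {rev.path for change in pr.fixed_by for rev in change.revisions}

# Which source files did the fix for PR-101 touch?
fix = Change("C-42", "null-pointer fix", [FileRevision("src/io.c", 7)])
pr = ProblemReport("PR-101", "Crash on startup", fixed_by=[fix])
print(files_touched_by_problem(pr))  # {'src/io.c'}
```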
Use of a single process engine allows state-flow and workflow to be specified across the lifecycle in a common way. You should be able to identify object states and the transitions between states. You should be able to put rules/triggers and permissions on these transitions. But it's vital that you don't have to use separate workflow tools for each different type of object (e.g., problems, requirements, tasks, changes).
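As a sketch of what "one engine, many object types" could mean, the following assumes a hypothetical engine (nothing here is a real product's API) where each object type registers its states, transitions, permissions, and triggers with the same machinery:

```python
from typing import Callable

# A hypothetical single process engine: every object type shares this machinery.
class ProcessEngine:
    def __init__(self):
        # (object type, from state, to state) -> (allowed roles, trigger)
        self.transitions: dict[tuple[str, str, str], tuple[set[str], Callable]] = {}

    def define(self, obj_type, src, dst, roles, trigger=lambda obj: None):
        self.transitions[(obj_type, src, dst)] = (set(roles), trigger)

    def advance(self, obj, dst, role):
        key = (obj["type"], obj["state"], dst)
        if key not in self.transitions:
            raise ValueError(f"no transition {obj['state']} -> {dst}")
        roles, trigger = self.transitions[key]
        if role not in roles:
            raise PermissionError(f"role {role!r} may not make this transition")
        obj["state"] = dst
        trigger(obj)  # the rule/trigger fires on the transition

engine = ProcessEngine()
# One engine, many object types: problems and changes share the machinery.
engine.define("problem", "open", "assigned", roles={"manager"})
engine.define("change", "in-progress", "ready", roles={"developer"},
              trigger=lambda obj: print(f"notify reviewers: {obj['id']} is ready"))

pr = {"type": "problem", "id": "PR-101", "state": "open"}
engine.advance(pr, "assigned", role="manager")   # permitted transition
chg = {"type": "change", "id": "C-42", "state": "in-progress"}
engine.advance(chg, "ready", role="developer")   # fires the trigger
```

The point of the sketch is the single transitions table: problems and changes get different states, permissions, and rules, but one engine enforces all of them.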
Start with these two premises, a common repository and a single process engine, and your chances of success improve dramatically. You'll also begin to realize that it's through these common engines that your multiple-site solutions will have to evolve if those solutions are to deal with all of the life cycle data in a consistent manner.
Horizontal and Vertical Integration
Another key factor is understanding that there are two types of integration: horizontal, which is management information integration across the lifecycle, and vertical. Vertical integration connects, on the one hand, the tools that provide, gather, and consume data, and on the other, data mining and metrics capabilities. Data-gathering tools include editors, word processors, diagramming tools, resource editors, data entry tools, compilers, linkers, make/build tools, testing engines, etc. These are varied.
It's fine that Visual Studio has tried to integrate its environment with various CM tools. It's fine that Eclipse has tried an even more generic approach. However, these tools have not done enough to provide definitive boundaries for vertical integration. Visual Studio has assumed a file-based CM tool instead of a change-based CM tool, for example. Ever tried to configure it so that you can check in, or run a delta report on, a change instead of a file? It's actually possible, but difficult and somewhat "forced". Eclipse has the opposite problem: no real CM framework, just a generic framework. So each CM tool integration will operate completely differently from an Eclipse perspective.
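To illustrate the gap (with hypothetical interfaces; neither is Visual Studio's or Eclipse's actual API), compare what a file-based integration point assumes against a change-based one. An IDE coded against the first has nowhere to hang a change identifier, which is exactly why driving change-based operation through it feels forced.

```python
from typing import Protocol

class FileBasedCM(Protocol):
    """What a file-based IDE integration assumes: the unit of work is a file."""
    def checkin(self, path: str, comment: str) -> None: ...
    def delta_report(self, path: str, rev_a: int, rev_b: int) -> str: ...

class ChangeBasedCM(Protocol):
    """Change-based CM: files are grouped under one logical change."""
    def open_change(self, problem_id: str, description: str) -> str: ...
    def checkin_change(self, change_id: str) -> None: ...  # all files at once
    def delta_report(self, change_id: str) -> str: ...     # diff of the whole change
```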
CM tools and processes are sufficiently varied that it is not obvious what the API that ties the IDE to the CM/ALM tool should look like. Hence Microsoft decided it should try to put its