of network was being established, initially as a small network of VAXes and then with workstations added in. With a wider management scope, there was a more concentrated focus on process. As a result, more flexibility was needed to support a changing process for Project Management, Problem Tracking, Test Case Management, Document Management and Configuration Management. It supported a more flexible array of editors, documentation tools, compilers and linkers, and hence a more flexible build process. Again, SMS was a very successful project, so much so that we tried to export it to other companies. Although we succeeded, there were challenges, as the CM/ALM processes were different, as were the build tools. Some key factors here were:
- It was designed to have a configurable process, at least to some extent.
- All of the management tools shared a common data repository.
- It had a portability layer for migrating from VAX to other platforms.
- We had full control over how the tools would work together.
- All of the management tools shared a single common user interface.
- All of the development plug-in tools (editors, compilers, etc.) had their own interfaces.
From these two projects, I learned that it is a lot easier to integrate tools when all of the information is in the same repository. If you look at today's solutions, the most complex ones are those that try to work across multiple repositories, and the simplest ones are those where the repository spans the solution (and often that's why the solution is not wider in scope).
I also learned that things like traceability and reporting were easier to learn and to use when they worked the same way across all of the management components. In the first case (PLS), we had separate tools for problem reporting, for project management and for change/configuration/build management. That simply meant that, as developers, we didn't use the project management or the problem reporting tools. Activities were assigned by word of mouth from the manager. Problems were reported, initially on paper, and eventually through an on-line form. But that was the extent of our exposure to the rest of the process. There was a whole separate world of testing and verification results that we touched only at CRB meetings, when we were invited.
But using SMS, with everything under one roof, everyone was aware of the work breakdown structure, could search the problem report data, and knew about new and changed documents on a regular basis. Even though we were still in the days of command line tools, at least the query capability was consistent and provided an easy way to navigate traceability links.
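The advantage of a single repository can be sketched in modern terms. The schema below is hypothetical (SMS predates SQL tooling like this), but it shows the principle: when problem reports, change sets and test results live in one store with shared keys, one query walks the whole traceability chain.

```python
import sqlite3

# Hypothetical schema: one repository holding problem reports,
# change sets, and test results, linked by shared keys.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE problem   (id TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE changeset (id TEXT PRIMARY KEY, problem_id TEXT, file TEXT);
    CREATE TABLE testcase  (id TEXT PRIMARY KEY, changeset_id TEXT, status TEXT);
""")
db.execute("INSERT INTO problem VALUES ('P-101', 'Crash on empty input')")
db.execute("INSERT INTO changeset VALUES ('C-7', 'P-101', 'parser.c')")
db.execute("INSERT INTO testcase VALUES ('T-33', 'C-7', 'passed')")

# A single query navigates problem -> change -> test; with separate
# repositories per tool, each hop would need its own export/import glue.
row = db.execute("""
    SELECT p.id, c.file, t.status
    FROM problem p
    JOIN changeset c ON c.problem_id = p.id
    JOIN testcase  t ON t.changeset_id = c.id
    WHERE p.id = 'P-101'
""").fetchone()
print(row)  # ('P-101', 'parser.c', 'passed')
```

The identifiers (`P-101`, `C-7`, table names) are invented for illustration; the point is that the join itself is the traceability link, and it is only this cheap because everything is in one repository.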
I also learned that although we thought that we were building a flexible system from a process perspective, there was a long way still to go. Fortunately, we owned the source code and could adjust the tools as necessary to meet our process requirements.
Drawing the Integration Lines
Tool integration isn't just a matter of taking a bunch of tools and putting them together. First of all, we must understand that certain tools "belong" together. They have to be designed to live together. Otherwise, integration glue is going to provide only a partial solution.
You will get a much better integration of an IDE if the editor(s), compiler(s), linker, debugger and run-time monitoring tools are provided/architected by a single vendor, or by a common (e.g. open source) project. Why? Because there is so much data that needs to be shared by the various tools, some of which create the data and some of which consume it. Taking ready-made tools and trying to glue them together without a common architecture may result in a system that works, but one that is less functional and yields lower productivity. The linker defines the format for the compiler's output. The debugger identifies data that must be available at run-time.
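That producer/consumer relationship can be illustrated with a toy sketch. The `Symbol` record below is a deliberately simplified stand-in for real formats such as object-file symbol tables and debug information: the "compiler" emits it, the "linker" fills in the final address, and the "debugger" reads the very same record to map an address back to a source line.

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-in for the data shared across a toolchain: one record type
# that the compiler produces and the linker and debugger both consume.
@dataclass
class Symbol:
    name: str
    source_file: str
    line: int
    address: Optional[int] = None  # assigned later by the "linker"

def compile_unit(names, source_file):
    """'Compiler': emit unresolved symbols with their source locations."""
    return [Symbol(n, source_file, ln) for ln, n in enumerate(names, start=1)]

def link(symbols, base=0x1000):
    """'Linker': lay the symbols out and assign final addresses."""
    for offset, sym in enumerate(symbols):
        sym.address = base + offset * 0x10
    return symbols

def debug_lookup(symbols, address):
    """'Debugger': use the same records to map an address to source."""
    for sym in symbols:
        if sym.address == address:
            return f"{sym.source_file}:{sym.line} ({sym.name})"
    return None

syms = link(compile_unit(["main", "parse"], "parser.c"))
print(debug_lookup(syms, 0x1010))  # parser.c:2 (parse)
```

If the three tools did not agree on this shared record, each boundary would need a lossy translation layer, which is exactly the glue-based integration the text warns about.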