Automation is at the heart of excellence in the field of Configuration Management. However, unless a wider definition of CM is used, that of Application Lifecycle Management, automation will fall far short of the mark. As we enter this still-young millennium and look back at the progress of CM, it's clear that the industry has, for the most part, been creating tools to fight fires and to avoid them. Spot solutions originally dealing with version control, change management, problem/issue/defect tracking, and so on have given way to more integration. Yet as a whole, the industry still falls far short of what CM, or ALM, automation requires.
What does automation mean? To some, it's being able to do nightly compiles/builds automatically so that when the team arrives in the morning all is ready. To others, it's the ability to take the test results and automatically verify that all of the requirements have been met. But automation has to go far beyond these examples, useful as they are.
The concept of a Unified Process is important, so long as it is one that can evolve easily and be molded to the requirements of the organization. A rigid UP, or one that requires the organization to line up with its proclamations, can actually lead to less automation. Process is important because it tells us who is responsible for what, when, and how those responsibilities are to be addressed to ensure that requirements are met. Automation requires well-defined processes. It does not require rigid processes. Processes need to be fluid in order to conform to a changing environment, changing standards, and changing technology.
There are a few factors which have traditionally inhibited automation:
- The approach to integration of lifecycle applications
- The models used for process automation
- The lack of capability to rapidly evolve the automation of process
- The complexity of views and terminology
Let me give some examples to clarify.
Integrating Lifecycle Applications
The traditional approach to integrating lifecycle applications was to perform three basic functions:
- Identify the means that each application provides for receiving and processing external requests
- Identify the means that each application provides for requesting external information or changes to the information
- Identify the set of data in each application that needed to be exchanged with other applications
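The three functions above amount to building a point-to-point adapter for each pair of tools. A minimal sketch in Python, with all class, field, and tool names invented for illustration, shows the shape of such an adapter and why each one is a hand-crafted, pairwise artifact:

```python
# Hypothetical sketch of the traditional point-to-point approach: each pair
# of applications needs its own adapter, mapping one tool's data and request
# interface onto the other's. All names here are invented for illustration.

class DefectTrackerAdapter:
    """Maps change-management records onto a defect tracker's interface."""

    # The agreed subset of data to exchange (the third function above).
    FIELD_MAP = {"change_id": "defect_ref", "status": "state", "owner": "assignee"}

    def send_request(self, change_record: dict) -> dict:
        """Translate an outbound request into the tracker's expected shape."""
        return {self.FIELD_MAP[k]: v for k, v in change_record.items()
                if k in self.FIELD_MAP}

    def receive_update(self, tracker_record: dict) -> dict:
        """Translate an inbound update back into change-management terms."""
        reverse = {v: k for k, v in self.FIELD_MAP.items()}
        return {reverse[k]: v for k, v in tracker_record.items() if k in reverse}
```

With N tools, this approach needs an adapter per pair, and each field map freezes a point-in-time agreement between exactly two applications.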
When this was completed, the level of integration capability could be determined and assessed in light of the requirements of the integrated applications. The key flaws here are that the assessment is always subjective, usually tainted by other solutions, and that it is done at a single point in time, which cannot easily take evolving process requirements into account.
This traditional approach to integration fits into the 2nd generation of CM capabilities. What is needed in a 3rd generation of CM capabilities is really nothing more than common middleware, and an underlying common repository, that can serve both applications. If either application does not sit on the same middleware or repository, integration will be limited. Why? Because specifications will have to be developed for what and how the two applications communicate, and because capabilities in one application won't match those in the other.
Middleware is a big component. It has to have all of the bases covered: how process is described and referenced; data query, search, and update layers that go beyond basic SQL capabilities; GUI specification and implementation capabilities so that users don't have to learn two applications (just two application processes); and so on. So, for example, when I say that my ALM solution supports multiple site operation, it doesn't mean that one application supports it sort-of and the other even better, or even differently. It means that my solution supports multiple site operation - I enable it, it works. It recovers from network outages in the same way for all applications - no complex re-training of my admin staff for each different application.
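The contrast with the adapter-per-pair approach can be sketched in a few lines of Python. All names here are invented for illustration: two applications sit on one middleware layer and one repository, so shared capabilities are implemented once and no translation layer exists between the tools.

```python
# Illustrative sketch (all names invented): two lifecycle applications
# sitting on one common middleware layer and repository. Capabilities such
# as querying are implemented once, so both applications behave identically.

class Repository:
    """One common store for all lifecycle data."""
    def __init__(self):
        self._records = {}

    def put(self, kind: str, key: str, value: dict) -> None:
        self._records[(kind, key)] = value

    def find(self, kind: str, predicate) -> list:
        return [v for (k, _), v in self._records.items()
                if k == kind and predicate(v)]

class Middleware:
    """Common services shared by every application, over one repository."""
    def __init__(self):
        self.repo = Repository()

class Application:
    def __init__(self, name: str, middleware: Middleware):
        self.name = name
        self.mw = middleware

mw = Middleware()
version_control = Application("version-control", mw)
problem_tracking = Application("problem-tracking", mw)

# A record written by one application is directly visible to the other,
# through the very same middleware - no field mapping, no specification
# of what and how the two tools communicate.
version_control.mw.repo.put("change", "C-101", {"status": "open"})
open_changes = problem_tracking.mw.repo.find(
    "change", lambda v: v["status"] == "open")
```

Because there is one implementation of each service, a capability such as multiple-site operation or outage recovery behaves the same way for every application that sits on the middleware.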
Models Used For Process Automation
Many think that process modeling basically ends once a tool has been selected that can express processes cleanly and clearly. In fact, it's not even sufficient that such a tool can export process data that can be interpreted by the application. There are some basic needs.
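One of those basic needs is that the process live as data the application interprets at run time, so the process can evolve without rebuilding the application. A minimal sketch, with a hypothetical process format and invented state, action, and role names:

```python
# Minimal sketch (hypothetical format): a change process held as data that
# the application interprets at run time. Because the definition is data,
# it can evolve as the process evolves, without changing the application.

PROCESS = {
    "states": ["raised", "reviewed", "implemented", "verified"],
    "transitions": {
        ("raised", "review"): "reviewed",
        ("reviewed", "implement"): "implemented",
        ("implemented", "verify"): "verified",
    },
    # Who is responsible for what: the process assigns each action a role.
    "roles": {"review": "CCB", "implement": "developer", "verify": "tester"},
}

def advance(state: str, action: str, role: str) -> str:
    """Apply an action if the role is responsible for it; refuse otherwise."""
    if PROCESS["roles"].get(action) != role:
        raise PermissionError(f"{role} is not responsible for '{action}'")
    if (state, action) not in PROCESS["transitions"]:
        raise ValueError(f"'{action}' is not valid from state '{state}'")
    return PROCESS["transitions"][(state, action)]
```

Adding a state or reassigning a responsibility is an edit to the data, not to the engine, which is what lets a well-defined process remain fluid.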
Processes are fluid, ever changing to adapt to newly uncovered requirements and to better way